Apr 22 15:08:30.881549 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Apr 22 15:08:30.881573 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Mar 24 23:39:14 -00 2025 Apr 22 15:08:30.881583 kernel: KASLR enabled Apr 22 15:08:30.881589 kernel: efi: EFI v2.7 by EDK II Apr 22 15:08:30.881595 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb4ff018 ACPI 2.0=0xd93ef018 RNG=0xd93efa18 MEMRESERVE=0xd91d9d18 Apr 22 15:08:30.881601 kernel: random: crng init done Apr 22 15:08:30.881608 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Apr 22 15:08:30.881614 kernel: secureboot: Secure boot enabled Apr 22 15:08:30.881620 kernel: ACPI: Early table checksum verification disabled Apr 22 15:08:30.881626 kernel: ACPI: RSDP 0x00000000D93EF018 000024 (v02 BOCHS ) Apr 22 15:08:30.881634 kernel: ACPI: XSDT 0x00000000D93EFF18 000064 (v01 BOCHS BXPC 00000001 01000013) Apr 22 15:08:30.881640 kernel: ACPI: FACP 0x00000000D93EFB18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881646 kernel: ACPI: DSDT 0x00000000D93ED018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881653 kernel: ACPI: APIC 0x00000000D93EFC98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881660 kernel: ACPI: PPTT 0x00000000D93EF098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881668 kernel: ACPI: GTDT 0x00000000D93EF818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881674 kernel: ACPI: MCFG 0x00000000D93EFA98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881681 kernel: ACPI: SPCR 0x00000000D93EF918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881688 kernel: ACPI: DBG2 0x00000000D93EF998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881694 kernel: ACPI: IORT 0x00000000D93EF198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 22 15:08:30.881700 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Apr 22 15:08:30.881707 kernel: NUMA: Failed to initialise from firmware Apr 22 15:08:30.881713 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Apr 22 15:08:30.881719 kernel: NUMA: NODE_DATA [mem 0xdc728800-0xdc72dfff] Apr 22 15:08:30.881726 kernel: Zone ranges: Apr 22 15:08:30.881734 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Apr 22 15:08:30.881740 kernel: DMA32 empty Apr 22 15:08:30.881746 kernel: Normal empty Apr 22 15:08:30.881753 kernel: Movable zone start for each node Apr 22 15:08:30.881759 kernel: Early memory node ranges Apr 22 15:08:30.881765 kernel: node 0: [mem 0x0000000040000000-0x00000000d93effff] Apr 22 15:08:30.881772 kernel: node 0: [mem 0x00000000d93f0000-0x00000000d972ffff] Apr 22 15:08:30.881778 kernel: node 0: [mem 0x00000000d9730000-0x00000000dcbfffff] Apr 22 15:08:30.881784 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Apr 22 15:08:30.881791 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Apr 22 15:08:30.881797 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Apr 22 15:08:30.881803 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Apr 22 15:08:30.881811 kernel: psci: probing for conduit method from ACPI. Apr 22 15:08:30.881817 kernel: psci: PSCIv1.1 detected in firmware. 
Apr 22 15:08:30.881824 kernel: psci: Using standard PSCI v0.2 function IDs Apr 22 15:08:30.881833 kernel: psci: Trusted OS migration not required Apr 22 15:08:30.881840 kernel: psci: SMC Calling Convention v1.1 Apr 22 15:08:30.881847 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Apr 22 15:08:30.881854 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Apr 22 15:08:30.881862 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Apr 22 15:08:30.881869 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Apr 22 15:08:30.881876 kernel: Detected PIPT I-cache on CPU0 Apr 22 15:08:30.881883 kernel: CPU features: detected: GIC system register CPU interface Apr 22 15:08:30.881889 kernel: CPU features: detected: Hardware dirty bit management Apr 22 15:08:30.881896 kernel: CPU features: detected: Spectre-v4 Apr 22 15:08:30.881903 kernel: CPU features: detected: Spectre-BHB Apr 22 15:08:30.881910 kernel: CPU features: kernel page table isolation forced ON by KASLR Apr 22 15:08:30.881917 kernel: CPU features: detected: Kernel page table isolation (KPTI) Apr 22 15:08:30.881923 kernel: CPU features: detected: ARM erratum 1418040 Apr 22 15:08:30.881932 kernel: CPU features: detected: SSBS not fully self-synchronizing Apr 22 15:08:30.881938 kernel: alternatives: applying boot alternatives Apr 22 15:08:30.881946 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab Apr 22 15:08:30.881953 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 22 15:08:30.881960 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 22 15:08:30.881967 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 22 15:08:30.881975 kernel: Fallback order for Node 0: 0 Apr 22 15:08:30.881981 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Apr 22 15:08:30.881988 kernel: Policy zone: DMA Apr 22 15:08:30.881995 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 22 15:08:30.882003 kernel: software IO TLB: area num 4. Apr 22 15:08:30.882010 kernel: software IO TLB: mapped [mem 0x00000000d2800000-0x00000000d6800000] (64MB) Apr 22 15:08:30.882017 kernel: Memory: 2385812K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 186476K reserved, 0K cma-reserved) Apr 22 15:08:30.882024 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Apr 22 15:08:30.882031 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 22 15:08:30.882039 kernel: rcu: RCU event tracing is enabled. Apr 22 15:08:30.882045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Apr 22 15:08:30.882052 kernel: Trampoline variant of Tasks RCU enabled. Apr 22 15:08:30.882059 kernel: Tracing variant of Tasks RCU enabled. Apr 22 15:08:30.882066 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
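The kernel command line logged above carries the Flatcar-specific parameters (mount.usr=, verity.usr=, verity.usrhash=, root=LABEL=ROOT, flatcar.first_boot) that dracut and the verity setup later in this log act on. As a minimal illustration only, not the kernel's own parser (which has its own quoting rules), a Python sketch that splits such a command line into flags and key=value options:

import shlex

def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into {key: value} pairs; bare flags map to True."""
    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

if __name__ == "__main__":
    # On a running system the same string is readable from /proc/cmdline.
    with open("/proc/cmdline") as f:
        opts = parse_cmdline(f.read())
    print(opts.get("root"), opts.get("verity.usrhash"))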
Apr 22 15:08:30.882073 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Apr 22 15:08:30.882080 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Apr 22 15:08:30.882088 kernel: GICv3: 256 SPIs implemented Apr 22 15:08:30.882095 kernel: GICv3: 0 Extended SPIs implemented Apr 22 15:08:30.882101 kernel: Root IRQ handler: gic_handle_irq Apr 22 15:08:30.882108 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Apr 22 15:08:30.882115 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Apr 22 15:08:30.882122 kernel: ITS [mem 0x08080000-0x0809ffff] Apr 22 15:08:30.882129 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Apr 22 15:08:30.882136 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Apr 22 15:08:30.882142 kernel: GICv3: using LPI property table @0x00000000400f0000 Apr 22 15:08:30.882149 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Apr 22 15:08:30.882156 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 22 15:08:30.882165 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 22 15:08:30.882172 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Apr 22 15:08:30.882179 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Apr 22 15:08:30.882187 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Apr 22 15:08:30.882193 kernel: arm-pv: using stolen time PV Apr 22 15:08:30.882201 kernel: Console: colour dummy device 80x25 Apr 22 15:08:30.882220 kernel: ACPI: Core revision 20230628 Apr 22 15:08:30.882228 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Apr 22 15:08:30.882235 kernel: pid_max: default: 32768 minimum: 301 Apr 22 15:08:30.882243 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 22 15:08:30.882251 kernel: landlock: Up and running. Apr 22 15:08:30.882258 kernel: SELinux: Initializing. Apr 22 15:08:30.882265 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 22 15:08:30.882273 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Apr 22 15:08:30.882280 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 22 15:08:30.882287 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Apr 22 15:08:30.882294 kernel: rcu: Hierarchical SRCU implementation. Apr 22 15:08:30.882301 kernel: rcu: Max phase no-delay instances is 400. Apr 22 15:08:30.882309 kernel: Platform MSI: ITS@0x8080000 domain created Apr 22 15:08:30.882317 kernel: PCI/MSI: ITS@0x8080000 domain created Apr 22 15:08:30.882324 kernel: Remapping and enabling EFI services. Apr 22 15:08:30.882331 kernel: smp: Bringing up secondary CPUs ... 
Apr 22 15:08:30.882338 kernel: Detected PIPT I-cache on CPU1 Apr 22 15:08:30.882362 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Apr 22 15:08:30.882371 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Apr 22 15:08:30.882383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 22 15:08:30.882390 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Apr 22 15:08:30.882398 kernel: Detected PIPT I-cache on CPU2 Apr 22 15:08:30.882405 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Apr 22 15:08:30.882415 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Apr 22 15:08:30.882422 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 22 15:08:30.882434 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Apr 22 15:08:30.882442 kernel: Detected PIPT I-cache on CPU3 Apr 22 15:08:30.882450 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Apr 22 15:08:30.882457 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Apr 22 15:08:30.882464 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Apr 22 15:08:30.882472 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Apr 22 15:08:30.882479 kernel: smp: Brought up 1 node, 4 CPUs Apr 22 15:08:30.882486 kernel: SMP: Total of 4 processors activated. Apr 22 15:08:30.882495 kernel: CPU features: detected: 32-bit EL0 Support Apr 22 15:08:30.882502 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Apr 22 15:08:30.882510 kernel: CPU features: detected: Common not Private translations Apr 22 15:08:30.882517 kernel: CPU features: detected: CRC32 instructions Apr 22 15:08:30.882524 kernel: CPU features: detected: Enhanced Virtualization Traps Apr 22 15:08:30.882532 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Apr 22 15:08:30.882539 kernel: CPU features: detected: LSE atomic instructions Apr 22 15:08:30.882547 kernel: CPU features: detected: Privileged Access Never Apr 22 15:08:30.882555 kernel: CPU features: detected: RAS Extension Support Apr 22 15:08:30.882562 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Apr 22 15:08:30.882569 kernel: CPU: All CPU(s) started at EL1 Apr 22 15:08:30.882577 kernel: alternatives: applying system-wide alternatives Apr 22 15:08:30.882584 kernel: devtmpfs: initialized Apr 22 15:08:30.882592 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 22 15:08:30.882599 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Apr 22 15:08:30.882606 kernel: pinctrl core: initialized pinctrl subsystem Apr 22 15:08:30.882615 kernel: SMBIOS 3.0.0 present. 
Apr 22 15:08:30.882622 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Apr 22 15:08:30.882630 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 22 15:08:30.882637 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Apr 22 15:08:30.882644 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Apr 22 15:08:30.882652 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Apr 22 15:08:30.882659 kernel: audit: initializing netlink subsys (disabled) Apr 22 15:08:30.882667 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1 Apr 22 15:08:30.882674 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 22 15:08:30.882682 kernel: cpuidle: using governor menu Apr 22 15:08:30.882690 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Apr 22 15:08:30.882697 kernel: ASID allocator initialised with 32768 entries Apr 22 15:08:30.882705 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 22 15:08:30.882712 kernel: Serial: AMBA PL011 UART driver Apr 22 15:08:30.882719 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Apr 22 15:08:30.882727 kernel: Modules: 0 pages in range for non-PLT usage Apr 22 15:08:30.882734 kernel: Modules: 509248 pages in range for PLT usage Apr 22 15:08:30.882741 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 22 15:08:30.882750 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Apr 22 15:08:30.882757 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Apr 22 15:08:30.882764 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Apr 22 15:08:30.882772 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 22 15:08:30.882779 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Apr 22 15:08:30.882787 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Apr 22 15:08:30.882794 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Apr 22 15:08:30.882801 kernel: ACPI: Added _OSI(Module Device) Apr 22 15:08:30.882809 kernel: ACPI: Added _OSI(Processor Device) Apr 22 15:08:30.882817 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 22 15:08:30.882824 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 22 15:08:30.882832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 22 15:08:30.882839 kernel: ACPI: Interpreter enabled Apr 22 15:08:30.882847 kernel: ACPI: Using GIC for interrupt routing Apr 22 15:08:30.882854 kernel: ACPI: MCFG table detected, 1 entries Apr 22 15:08:30.882861 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Apr 22 15:08:30.882868 kernel: printk: console [ttyAMA0] enabled Apr 22 15:08:30.882876 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 22 15:08:30.883021 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 22 15:08:30.883100 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 22 15:08:30.883174 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 22 15:08:30.883242 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Apr 22 15:08:30.883311 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Apr 22 15:08:30.883322 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Apr 22 15:08:30.883329 
kernel: PCI host bridge to bus 0000:00 Apr 22 15:08:30.883493 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Apr 22 15:08:30.883562 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Apr 22 15:08:30.883624 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Apr 22 15:08:30.883685 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 22 15:08:30.883785 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Apr 22 15:08:30.883865 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Apr 22 15:08:30.883944 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Apr 22 15:08:30.884015 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Apr 22 15:08:30.884086 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Apr 22 15:08:30.884156 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Apr 22 15:08:30.884226 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Apr 22 15:08:30.884297 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Apr 22 15:08:30.884378 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Apr 22 15:08:30.884449 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Apr 22 15:08:30.884516 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Apr 22 15:08:30.884526 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Apr 22 15:08:30.884534 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Apr 22 15:08:30.884541 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Apr 22 15:08:30.884549 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Apr 22 15:08:30.884556 kernel: iommu: Default domain type: Translated Apr 22 15:08:30.884564 kernel: iommu: DMA domain TLB invalidation policy: strict mode Apr 22 15:08:30.884571 kernel: efivars: Registered efivars operations Apr 22 15:08:30.884580 kernel: vgaarb: loaded Apr 22 15:08:30.884588 kernel: clocksource: Switched to clocksource arch_sys_counter Apr 22 15:08:30.884595 kernel: VFS: Disk quotas dquot_6.6.0 Apr 22 15:08:30.884602 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 22 15:08:30.884610 kernel: pnp: PnP ACPI init Apr 22 15:08:30.884692 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Apr 22 15:08:30.884703 kernel: pnp: PnP ACPI: found 1 devices Apr 22 15:08:30.884710 kernel: NET: Registered PF_INET protocol family Apr 22 15:08:30.884720 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 22 15:08:30.884727 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Apr 22 15:08:30.884735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 22 15:08:30.884742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 22 15:08:30.884750 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Apr 22 15:08:30.884757 kernel: TCP: Hash tables configured (established 32768 bind 32768) Apr 22 15:08:30.884765 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 22 15:08:30.884772 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Apr 22 15:08:30.884779 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 22 15:08:30.884788 kernel: PCI: CLS 0 bytes, default 64 Apr 22 15:08:30.884796 kernel: kvm [1]: HYP mode not available 
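The PCI scan above enumerates one virtio device ([1af4:1005] at 0000:00:01.0) behind the generic PCIe host bridge. On a running Linux system the same vendor/device IDs can be read back from sysfs; a small sketch, assuming the standard /sys/bus/pci layout:

from pathlib import Path

# Each enumerated PCI function gets a directory under /sys/bus/pci/devices
# whose 'vendor' and 'device' files hold the IDs seen in the boot log.
for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()   # e.g. "0x1af4"
    device = (dev / "device").read_text().strip()   # e.g. "0x1005"
    print(f"{dev.name}: [{vendor[2:]}:{device[2:]}]")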
Apr 22 15:08:30.884803 kernel: Initialise system trusted keyrings Apr 22 15:08:30.884810 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Apr 22 15:08:30.884817 kernel: Key type asymmetric registered Apr 22 15:08:30.884825 kernel: Asymmetric key parser 'x509' registered Apr 22 15:08:30.884832 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Apr 22 15:08:30.884839 kernel: io scheduler mq-deadline registered Apr 22 15:08:30.884846 kernel: io scheduler kyber registered Apr 22 15:08:30.884855 kernel: io scheduler bfq registered Apr 22 15:08:30.884863 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Apr 22 15:08:30.884870 kernel: ACPI: button: Power Button [PWRB] Apr 22 15:08:30.884878 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Apr 22 15:08:30.884948 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Apr 22 15:08:30.884958 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 22 15:08:30.884965 kernel: thunder_xcv, ver 1.0 Apr 22 15:08:30.884973 kernel: thunder_bgx, ver 1.0 Apr 22 15:08:30.884980 kernel: nicpf, ver 1.0 Apr 22 15:08:30.884987 kernel: nicvf, ver 1.0 Apr 22 15:08:30.885066 kernel: rtc-efi rtc-efi.0: registered as rtc0 Apr 22 15:08:30.885131 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-04-22T15:08:30 UTC (1745334510) Apr 22 15:08:30.885141 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 22 15:08:30.885148 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Apr 22 15:08:30.885156 kernel: watchdog: Delayed init of the lockup detector failed: -19 Apr 22 15:08:30.885163 kernel: watchdog: Hard watchdog permanently disabled Apr 22 15:08:30.885171 kernel: NET: Registered PF_INET6 protocol family Apr 22 15:08:30.885181 kernel: Segment Routing with IPv6 Apr 22 15:08:30.885188 kernel: In-situ OAM (IOAM) with IPv6 Apr 22 15:08:30.885195 kernel: NET: Registered PF_PACKET protocol family Apr 22 15:08:30.885202 kernel: Key type dns_resolver registered Apr 22 15:08:30.885210 kernel: registered taskstats version 1 Apr 22 15:08:30.885217 kernel: Loading compiled-in X.509 certificates Apr 22 15:08:30.885224 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ed4ababe871f0afac8b4236504477de11a6baf07' Apr 22 15:08:30.885232 kernel: Key type .fscrypt registered Apr 22 15:08:30.885239 kernel: Key type fscrypt-provisioning registered Apr 22 15:08:30.885248 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 22 15:08:30.885255 kernel: ima: Allocated hash algorithm: sha1 Apr 22 15:08:30.885263 kernel: ima: No architecture policies found Apr 22 15:08:30.885270 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Apr 22 15:08:30.885277 kernel: clk: Disabling unused clocks Apr 22 15:08:30.885285 kernel: Freeing unused kernel memory: 38464K Apr 22 15:08:30.885292 kernel: Run /init as init process Apr 22 15:08:30.885299 kernel: with arguments: Apr 22 15:08:30.885306 kernel: /init Apr 22 15:08:30.885315 kernel: with environment: Apr 22 15:08:30.885322 kernel: HOME=/ Apr 22 15:08:30.885330 kernel: TERM=linux Apr 22 15:08:30.885337 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 22 15:08:30.885354 systemd[1]: Successfully made /usr/ read-only. 
Apr 22 15:08:30.885366 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 15:08:30.885379 systemd[1]: Detected virtualization kvm. Apr 22 15:08:30.885388 systemd[1]: Detected architecture arm64. Apr 22 15:08:30.885399 systemd[1]: Running in initrd. Apr 22 15:08:30.885407 systemd[1]: No hostname configured, using default hostname. Apr 22 15:08:30.885415 systemd[1]: Hostname set to . Apr 22 15:08:30.885423 systemd[1]: Initializing machine ID from VM UUID. Apr 22 15:08:30.885431 systemd[1]: Queued start job for default target initrd.target. Apr 22 15:08:30.885439 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 15:08:30.885447 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 15:08:30.885455 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 22 15:08:30.885466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 15:08:30.885474 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 22 15:08:30.885483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 22 15:08:30.885492 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 22 15:08:30.885500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 22 15:08:30.885508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 15:08:30.885516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 22 15:08:30.885527 systemd[1]: Reached target paths.target - Path Units. Apr 22 15:08:30.885535 systemd[1]: Reached target slices.target - Slice Units. Apr 22 15:08:30.885543 systemd[1]: Reached target swap.target - Swaps. Apr 22 15:08:30.885551 systemd[1]: Reached target timers.target - Timer Units. Apr 22 15:08:30.885559 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 15:08:30.885567 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 15:08:30.885575 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 22 15:08:30.885583 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 22 15:08:30.885593 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 15:08:30.885601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 15:08:30.885609 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 15:08:30.885617 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 15:08:30.885625 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 22 15:08:30.885633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 15:08:30.885641 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 22 15:08:30.885649 systemd[1]: Starting systemd-fsck-usr.service... 
Apr 22 15:08:30.885657 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 15:08:30.885667 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 15:08:30.885675 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 15:08:30.885683 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 15:08:30.885692 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 22 15:08:30.885700 systemd[1]: Finished systemd-fsck-usr.service. Apr 22 15:08:30.885710 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 22 15:08:30.885719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 22 15:08:30.885727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 15:08:30.885759 systemd-journald[236]: Collecting audit messages is disabled. Apr 22 15:08:30.885790 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 15:08:30.885798 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 15:08:30.885809 systemd-journald[236]: Journal started Apr 22 15:08:30.885828 systemd-journald[236]: Runtime Journal (/run/log/journal/814ab5cbc83644d88520a13917484996) is 5.9M, max 47.3M, 41.4M free. Apr 22 15:08:30.894425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 22 15:08:30.894468 kernel: Bridge firewalling registered Apr 22 15:08:30.877084 systemd-modules-load[237]: Inserted module 'overlay' Apr 22 15:08:30.891766 systemd-modules-load[237]: Inserted module 'br_netfilter' Apr 22 15:08:30.898069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 15:08:30.898090 systemd[1]: Started systemd-journald.service - Journal Service. Apr 22 15:08:30.899797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 15:08:30.903282 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 15:08:30.904733 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 15:08:30.912752 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 15:08:30.915436 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 22 15:08:30.917281 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 15:08:30.919565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 15:08:30.923539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 22 15:08:30.928870 dracut-cmdline[272]: dracut-dracut-053 Apr 22 15:08:30.933010 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=b84e5f613acd6cd0a8a878f32f5653a14f2e6fb2820997fecd5b2bd33a4ba3ab Apr 22 15:08:30.965501 systemd-resolved[283]: Positive Trust Anchors: Apr 22 15:08:30.965519 systemd-resolved[283]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 15:08:30.965550 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 15:08:30.970473 systemd-resolved[283]: Defaulting to hostname 'linux'. Apr 22 15:08:30.971479 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 15:08:30.972527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 22 15:08:30.999378 kernel: SCSI subsystem initialized Apr 22 15:08:31.006364 kernel: Loading iSCSI transport class v2.0-870. Apr 22 15:08:31.013386 kernel: iscsi: registered transport (tcp) Apr 22 15:08:31.026378 kernel: iscsi: registered transport (qla4xxx) Apr 22 15:08:31.026400 kernel: QLogic iSCSI HBA Driver Apr 22 15:08:31.067258 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 22 15:08:31.069495 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 22 15:08:31.097699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 22 15:08:31.097769 kernel: device-mapper: uevent: version 1.0.3 Apr 22 15:08:31.097789 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 22 15:08:31.145373 kernel: raid6: neonx8 gen() 15796 MB/s Apr 22 15:08:31.162361 kernel: raid6: neonx4 gen() 15839 MB/s Apr 22 15:08:31.179370 kernel: raid6: neonx2 gen() 13215 MB/s Apr 22 15:08:31.196358 kernel: raid6: neonx1 gen() 10500 MB/s Apr 22 15:08:31.213367 kernel: raid6: int64x8 gen() 6796 MB/s Apr 22 15:08:31.230360 kernel: raid6: int64x4 gen() 7353 MB/s Apr 22 15:08:31.247371 kernel: raid6: int64x2 gen() 6114 MB/s Apr 22 15:08:31.264361 kernel: raid6: int64x1 gen() 5062 MB/s Apr 22 15:08:31.264396 kernel: raid6: using algorithm neonx4 gen() 15839 MB/s Apr 22 15:08:31.281367 kernel: raid6: .... xor() 12509 MB/s, rmw enabled Apr 22 15:08:31.281403 kernel: raid6: using neon recovery algorithm Apr 22 15:08:31.286626 kernel: xor: measuring software checksum speed Apr 22 15:08:31.286646 kernel: 8regs : 21630 MB/sec Apr 22 15:08:31.286656 kernel: 32regs : 21704 MB/sec Apr 22 15:08:31.287547 kernel: arm64_neon : 27889 MB/sec Apr 22 15:08:31.287562 kernel: xor: using function: arm64_neon (27889 MB/sec) Apr 22 15:08:31.337399 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 22 15:08:31.349398 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 22 15:08:31.351785 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 15:08:31.377546 systemd-udevd[463]: Using default interface naming scheme 'v255'. Apr 22 15:08:31.381237 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 15:08:31.383797 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
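The negative trust anchors listed by systemd-resolved above disable DNSSEC validation for private and reverse-lookup zones; a queried name is exempt when it sits at or below one of those anchors. A toy Python sketch of that suffix-matching idea (the anchor set here is a hand-picked subset of the log's list, not resolved's implementation):

# Subset of the negative trust anchors from the resolved log line above.
NEGATIVE_ANCHORS = {"home.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
                    "local", "internal", "lan", "test"}

def under_negative_anchor(name: str) -> bool:
    """True if any dot-separated suffix of `name` is in the anchor set."""
    labels = name.rstrip(".").lower().split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS for i in range(len(labels)))

print(under_negative_anchor("printer.lan"))   # True
print(under_negative_anchor("example.org"))   # False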
Apr 22 15:08:31.411574 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation Apr 22 15:08:31.438266 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 15:08:31.441188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 15:08:31.494491 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 15:08:31.497488 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 22 15:08:31.517400 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 22 15:08:31.518660 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 15:08:31.520248 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 15:08:31.522061 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 22 15:08:31.526533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 22 15:08:31.540309 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Apr 22 15:08:31.555247 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 22 15:08:31.555988 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 22 15:08:31.556003 kernel: GPT:9289727 != 19775487 Apr 22 15:08:31.556013 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 22 15:08:31.556032 kernel: GPT:9289727 != 19775487 Apr 22 15:08:31.556041 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 22 15:08:31.556050 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 15:08:31.551677 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 22 15:08:31.551798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 15:08:31.554920 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 15:08:31.555725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 22 15:08:31.555877 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 15:08:31.557476 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 15:08:31.559006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 15:08:31.560457 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 22 15:08:31.575343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 15:08:31.581413 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (517) Apr 22 15:08:31.581459 kernel: BTRFS: device fsid bf348154-9cb1-474d-801c-0e035a5758cf devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (520) Apr 22 15:08:31.592041 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 22 15:08:31.599304 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 22 15:08:31.610924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 15:08:31.616924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 22 15:08:31.617849 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 22 15:08:31.620918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
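The virtio_blk probe above reports "GPT:Alternate GPT header not at the end of the disk" with 9289727 != 19775487: the backup GPT header location recorded in the image still points at the last sector of the original, smaller build image, while the attached disk now has 19775488 sectors, which is why disk-uuid.service is started next to rewrite the headers. A sketch of that comparison, assuming a raw disk image path and the standard UEFI GPT header layout (primary header at LBA 1, AlternateLBA field at byte offset 32):

import struct
import sys

SECTOR = 512

def gpt_alt_header_mismatch(image_path: str) -> bool:
    """Compare the AlternateLBA in the primary GPT header with the disk's real last LBA."""
    with open(image_path, "rb") as f:
        f.seek(SECTOR)                        # primary GPT header lives at LBA 1
        header = f.read(92)
        if header[:8] != b"EFI PART":
            raise ValueError("no GPT signature found")
        alt_lba = struct.unpack_from("<Q", header, 32)[0]   # where the backup header claims to be
        f.seek(0, 2)
        last_lba = f.tell() // SECTOR - 1     # e.g. 19775487 for the 19775488-sector disk above
    return alt_lba != last_lba                # e.g. 9289727 != 19775487 in the log

if __name__ == "__main__":
    print(gpt_alt_header_mismatch(sys.argv[1]))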
Apr 22 15:08:31.622509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 22 15:08:31.639005 disk-uuid[551]: Primary Header is updated. Apr 22 15:08:31.639005 disk-uuid[551]: Secondary Entries is updated. Apr 22 15:08:31.639005 disk-uuid[551]: Secondary Header is updated. Apr 22 15:08:31.644384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 15:08:31.646841 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 15:08:31.649490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 15:08:32.654380 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 22 15:08:32.655102 disk-uuid[556]: The operation has completed successfully. Apr 22 15:08:32.680021 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 22 15:08:32.680117 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 22 15:08:32.704436 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 22 15:08:32.719971 sh[573]: Success Apr 22 15:08:32.735379 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Apr 22 15:08:32.761857 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 22 15:08:32.764171 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 22 15:08:32.776168 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 22 15:08:32.783369 kernel: BTRFS info (device dm-0): first mount of filesystem bf348154-9cb1-474d-801c-0e035a5758cf Apr 22 15:08:32.783410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Apr 22 15:08:32.783428 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 22 15:08:32.783438 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 22 15:08:32.783896 kernel: BTRFS info (device dm-0): using free space tree Apr 22 15:08:32.787446 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 22 15:08:32.788454 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 22 15:08:32.789066 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 22 15:08:32.791099 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 22 15:08:32.813647 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Apr 22 15:08:32.813686 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 22 15:08:32.813698 kernel: BTRFS info (device vda6): using free space tree Apr 22 15:08:32.815413 kernel: BTRFS info (device vda6): auto enabling async discard Apr 22 15:08:32.819621 kernel: BTRFS info (device vda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Apr 22 15:08:32.821706 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 22 15:08:32.823434 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 22 15:08:32.883019 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 15:08:32.885476 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 22 15:08:32.923953 ignition[663]: Ignition 2.20.0 Apr 22 15:08:32.923961 ignition[663]: Stage: fetch-offline Apr 22 15:08:32.923992 ignition[663]: no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:32.924000 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:32.924144 ignition[663]: parsed url from cmdline: "" Apr 22 15:08:32.926702 systemd-networkd[755]: lo: Link UP Apr 22 15:08:32.924147 ignition[663]: no config URL provided Apr 22 15:08:32.926706 systemd-networkd[755]: lo: Gained carrier Apr 22 15:08:32.924152 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Apr 22 15:08:32.927654 systemd-networkd[755]: Enumeration completed Apr 22 15:08:32.924158 ignition[663]: no config at "/usr/lib/ignition/user.ign" Apr 22 15:08:32.927737 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 15:08:32.924179 ignition[663]: op(1): [started] loading QEMU firmware config module Apr 22 15:08:32.928039 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 22 15:08:32.924184 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 22 15:08:32.928043 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 15:08:32.935011 ignition[663]: op(1): [finished] loading QEMU firmware config module Apr 22 15:08:32.929040 systemd[1]: Reached target network.target - Network. Apr 22 15:08:32.929125 systemd-networkd[755]: eth0: Link UP Apr 22 15:08:32.929128 systemd-networkd[755]: eth0: Gained carrier Apr 22 15:08:32.929134 systemd-networkd[755]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 22 15:08:32.958433 systemd-networkd[755]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 15:08:32.977480 ignition[663]: parsing config with SHA512: 2e97d4efa036cbf0037ff641d718456bdde64219e4865d1bf3fcfd32df04bd12650528f7ad08b677a2df9622e7367588e24cbb243f8da9bfae2c71bfdfefc120 Apr 22 15:08:32.981940 unknown[663]: fetched base config from "system" Apr 22 15:08:32.981950 unknown[663]: fetched user config from "qemu" Apr 22 15:08:32.982652 ignition[663]: fetch-offline: fetch-offline passed Apr 22 15:08:32.983342 ignition[663]: Ignition finished successfully Apr 22 15:08:32.987375 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 15:08:32.988334 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 22 15:08:32.989035 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 22 15:08:33.010881 ignition[769]: Ignition 2.20.0 Apr 22 15:08:33.010891 ignition[769]: Stage: kargs Apr 22 15:08:33.011024 ignition[769]: no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:33.011034 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:33.011832 ignition[769]: kargs: kargs passed Apr 22 15:08:33.011870 ignition[769]: Ignition finished successfully Apr 22 15:08:33.014951 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 22 15:08:33.016853 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
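Ignition logs the SHA512 digest of the config it parsed ("parsing config with SHA512: 2e97...") before noting which parts came from "system" and "qemu". Reproducing that kind of fingerprint over a local config file is a short hashlib loop; a minimal sketch (the path is illustrative, borrowed from the ConditionPathExists check above, and may not be the exact bytes Ignition hashes):

import hashlib

def config_sha512(path: str) -> str:
    """SHA512 of a file's raw bytes, streamed in 64 KiB chunks."""
    digest = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    print(config_sha512("/run/ignition.json"))   # illustrative path from the log above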
Apr 22 15:08:33.043066 ignition[777]: Ignition 2.20.0 Apr 22 15:08:33.043075 ignition[777]: Stage: disks Apr 22 15:08:33.043212 ignition[777]: no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:33.043221 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:33.044033 ignition[777]: disks: disks passed Apr 22 15:08:33.045620 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 22 15:08:33.044071 ignition[777]: Ignition finished successfully Apr 22 15:08:33.046740 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 22 15:08:33.047588 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 22 15:08:33.048785 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 15:08:33.050133 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 15:08:33.051366 systemd[1]: Reached target basic.target - Basic System. Apr 22 15:08:33.053438 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 22 15:08:33.078881 systemd-resolved[283]: Detected conflict on linux IN A 10.0.0.54 Apr 22 15:08:33.078896 systemd-resolved[283]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Apr 22 15:08:33.080909 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 22 15:08:33.083819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 22 15:08:33.086856 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 22 15:08:33.140368 kernel: EXT4-fs (vda9): mounted filesystem a7a89271-ee7d-4bda-a834-705261d6cda9 r/w with ordered data mode. Quota mode: none. Apr 22 15:08:33.140551 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 22 15:08:33.141533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 22 15:08:33.146447 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 15:08:33.154842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 22 15:08:33.155623 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 22 15:08:33.155658 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 22 15:08:33.155679 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 15:08:33.159565 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 22 15:08:33.161882 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 22 15:08:33.168487 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (799) Apr 22 15:08:33.168525 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Apr 22 15:08:33.168536 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 22 15:08:33.169792 kernel: BTRFS info (device vda6): using free space tree Apr 22 15:08:33.172369 kernel: BTRFS info (device vda6): auto enabling async discard Apr 22 15:08:33.172895 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
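The fsck summary above ("ROOT: clean, 14/553520 files, 52654/553472 blocks") reads as used/total counts; turned into percentages, the ROOT filesystem is still largely empty at this point in first boot:

# Numbers taken directly from the systemd-fsck line above.
files_used, files_total = 14, 553_520
blocks_used, blocks_total = 52_654, 553_472
print(f"inodes in use: {files_used / files_total:.3%}")    # ~0.003%
print(f"blocks in use: {blocks_used / blocks_total:.1%}")  # ~9.5%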
Apr 22 15:08:33.205077 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Apr 22 15:08:33.209169 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Apr 22 15:08:33.213037 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Apr 22 15:08:33.216382 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Apr 22 15:08:33.279004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 22 15:08:33.281013 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 22 15:08:33.282480 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 22 15:08:33.296377 kernel: BTRFS info (device vda6): last unmount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Apr 22 15:08:33.308440 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 22 15:08:33.312660 ignition[913]: INFO : Ignition 2.20.0 Apr 22 15:08:33.312660 ignition[913]: INFO : Stage: mount Apr 22 15:08:33.313968 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:33.313968 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:33.313968 ignition[913]: INFO : mount: mount passed Apr 22 15:08:33.313968 ignition[913]: INFO : Ignition finished successfully Apr 22 15:08:33.316421 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 22 15:08:33.318302 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 22 15:08:33.903416 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 22 15:08:33.904900 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 22 15:08:33.925371 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (926) Apr 22 15:08:33.927712 kernel: BTRFS info (device vda6): first mount of filesystem 09629b08-d05c-4ce3-8bf7-615041c4b2c9 Apr 22 15:08:33.927732 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Apr 22 15:08:33.927743 kernel: BTRFS info (device vda6): using free space tree Apr 22 15:08:33.929368 kernel: BTRFS info (device vda6): auto enabling async discard Apr 22 15:08:33.930556 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 22 15:08:33.953121 ignition[943]: INFO : Ignition 2.20.0 Apr 22 15:08:33.953121 ignition[943]: INFO : Stage: files Apr 22 15:08:33.954576 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:33.954576 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:33.954576 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Apr 22 15:08:33.957500 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 22 15:08:33.957500 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 22 15:08:33.960043 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 22 15:08:33.961175 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 22 15:08:33.961175 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 22 15:08:33.960534 unknown[943]: wrote ssh authorized keys file for user: core Apr 22 15:08:33.964339 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 22 15:08:33.964339 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Apr 22 15:08:34.029581 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 22 15:08:34.221115 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 15:08:34.222725 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 22 15:08:34.230879 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Apr 22 15:08:34.373592 systemd-networkd[755]: eth0: Gained IPv6LL Apr 22 15:08:34.659433 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 22 15:08:35.226379 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Apr 22 15:08:35.226379 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 22 15:08:35.229689 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 22 15:08:35.245929 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 15:08:35.248994 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 22 15:08:35.251358 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Apr 22 15:08:35.251358 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 22 15:08:35.251358 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 22 15:08:35.251358 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 22 15:08:35.251358 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 22 15:08:35.251358 ignition[943]: INFO : files: files passed Apr 22 15:08:35.251358 ignition[943]: INFO : Ignition finished successfully Apr 22 15:08:35.251674 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 22 15:08:35.254060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 22 15:08:35.256486 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Apr 22 15:08:35.276488 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 22 15:08:35.276573 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 22 15:08:35.278901 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Apr 22 15:08:35.281077 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 15:08:35.281077 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 22 15:08:35.284099 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 22 15:08:35.284562 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 15:08:35.286621 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 22 15:08:35.288746 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 22 15:08:35.318785 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 22 15:08:35.318890 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 22 15:08:35.320592 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 22 15:08:35.321999 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 22 15:08:35.323470 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 22 15:08:35.324188 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 22 15:08:35.338323 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 15:08:35.341495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 22 15:08:35.359213 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 22 15:08:35.361030 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 22 15:08:35.362093 systemd[1]: Stopped target timers.target - Timer Units. Apr 22 15:08:35.363484 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 22 15:08:35.363608 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 22 15:08:35.365534 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 22 15:08:35.367075 systemd[1]: Stopped target basic.target - Basic System. Apr 22 15:08:35.368357 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 22 15:08:35.369738 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 22 15:08:35.371341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 22 15:08:35.373078 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 22 15:08:35.374571 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 22 15:08:35.376168 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 22 15:08:35.377749 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 22 15:08:35.379119 systemd[1]: Stopped target swap.target - Swaps. Apr 22 15:08:35.380338 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 22 15:08:35.380510 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 22 15:08:35.382453 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 22 15:08:35.384101 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 15:08:35.385673 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 22 15:08:35.387023 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 15:08:35.388130 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 22 15:08:35.388250 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 22 15:08:35.390414 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 22 15:08:35.390535 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 22 15:08:35.392064 systemd[1]: Stopped target paths.target - Path Units. Apr 22 15:08:35.393249 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 22 15:08:35.398377 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 15:08:35.399504 systemd[1]: Stopped target slices.target - Slice Units. Apr 22 15:08:35.401152 systemd[1]: Stopped target sockets.target - Socket Units. Apr 22 15:08:35.402370 systemd[1]: iscsid.socket: Deactivated successfully. Apr 22 15:08:35.402466 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 22 15:08:35.403674 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 22 15:08:35.403750 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 22 15:08:35.404975 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 22 15:08:35.405087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 22 15:08:35.406588 systemd[1]: ignition-files.service: Deactivated successfully. Apr 22 15:08:35.406687 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 22 15:08:35.408791 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 22 15:08:35.410017 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 22 15:08:35.410142 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 15:08:35.424857 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 22 15:08:35.425660 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 22 15:08:35.425776 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 15:08:35.427369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 22 15:08:35.427540 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 22 15:08:35.433957 ignition[999]: INFO : Ignition 2.20.0 Apr 22 15:08:35.433957 ignition[999]: INFO : Stage: umount Apr 22 15:08:35.436207 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 22 15:08:35.436207 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 22 15:08:35.436207 ignition[999]: INFO : umount: umount passed Apr 22 15:08:35.436207 ignition[999]: INFO : Ignition finished successfully Apr 22 15:08:35.434102 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 22 15:08:35.435069 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 22 15:08:35.437535 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 22 15:08:35.437611 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 22 15:08:35.439676 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 22 15:08:35.440854 systemd[1]: Stopped target network.target - Network. Apr 22 15:08:35.441956 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 22 15:08:35.442023 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 22 15:08:35.443479 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 22 15:08:35.443526 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 22 15:08:35.445046 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 22 15:08:35.445090 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 22 15:08:35.446745 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 22 15:08:35.446790 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 22 15:08:35.448577 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 22 15:08:35.449923 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 22 15:08:35.454304 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 22 15:08:35.454576 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 22 15:08:35.458804 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 22 15:08:35.458999 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 22 15:08:35.459081 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 22 15:08:35.462111 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 22 15:08:35.462743 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 22 15:08:35.462802 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 22 15:08:35.465269 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 22 15:08:35.466075 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 22 15:08:35.466130 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 22 15:08:35.467739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 22 15:08:35.467776 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 22 15:08:35.470133 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 22 15:08:35.470175 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 22 15:08:35.471808 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 22 15:08:35.471846 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 15:08:35.474317 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 15:08:35.479084 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 22 15:08:35.479141 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Apr 22 15:08:35.493597 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 22 15:08:35.493768 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 15:08:35.495863 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 22 15:08:35.495956 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 22 15:08:35.497310 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 22 15:08:35.497439 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Apr 22 15:08:35.499385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 22 15:08:35.499453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 22 15:08:35.500815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 22 15:08:35.500845 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 15:08:35.502204 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 22 15:08:35.502248 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 22 15:08:35.504650 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 22 15:08:35.504699 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 22 15:08:35.506995 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 22 15:08:35.507039 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 22 15:08:35.509325 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 22 15:08:35.509488 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 22 15:08:35.511584 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 22 15:08:35.513132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 22 15:08:35.513185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 15:08:35.515873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 22 15:08:35.515913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 15:08:35.519142 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 22 15:08:35.519197 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 22 15:08:35.528832 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 22 15:08:35.528943 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 22 15:08:35.530789 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 22 15:08:35.532899 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 22 15:08:35.546740 systemd[1]: Switching root. Apr 22 15:08:35.573084 systemd-journald[236]: Journal stopped Apr 22 15:08:36.292175 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Apr 22 15:08:36.292237 kernel: SELinux: policy capability network_peer_controls=1 Apr 22 15:08:36.292249 kernel: SELinux: policy capability open_perms=1 Apr 22 15:08:36.292263 kernel: SELinux: policy capability extended_socket_class=1 Apr 22 15:08:36.292273 kernel: SELinux: policy capability always_check_network=0 Apr 22 15:08:36.292286 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 22 15:08:36.292312 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 22 15:08:36.292322 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 22 15:08:36.292331 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 22 15:08:36.292342 kernel: audit: type=1403 audit(1745334515.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 22 15:08:36.292437 systemd[1]: Successfully loaded SELinux policy in 30.159ms. Apr 22 15:08:36.292463 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.551ms. 
Apr 22 15:08:36.292478 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 22 15:08:36.292489 systemd[1]: Detected virtualization kvm. Apr 22 15:08:36.292500 systemd[1]: Detected architecture arm64. Apr 22 15:08:36.292511 systemd[1]: Detected first boot. Apr 22 15:08:36.292522 systemd[1]: Initializing machine ID from VM UUID. Apr 22 15:08:36.292533 kernel: NET: Registered PF_VSOCK protocol family Apr 22 15:08:36.292543 zram_generator::config[1045]: No configuration found. Apr 22 15:08:36.292555 systemd[1]: Populated /etc with preset unit settings. Apr 22 15:08:36.292568 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 22 15:08:36.292582 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 22 15:08:36.292592 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 22 15:08:36.292603 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 22 15:08:36.292615 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 22 15:08:36.292626 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 22 15:08:36.292636 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 22 15:08:36.292647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 22 15:08:36.292658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 22 15:08:36.292671 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 22 15:08:36.292682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 22 15:08:36.292693 systemd[1]: Created slice user.slice - User and Session Slice. Apr 22 15:08:36.292703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 22 15:08:36.292714 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 22 15:08:36.292725 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 22 15:08:36.292735 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 22 15:08:36.292746 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 22 15:08:36.292759 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 22 15:08:36.292770 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Apr 22 15:08:36.292781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 22 15:08:36.292792 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 22 15:08:36.292802 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 22 15:08:36.292814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 22 15:08:36.292824 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 22 15:08:36.292835 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Apr 22 15:08:36.292848 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 22 15:08:36.292858 systemd[1]: Reached target slices.target - Slice Units. Apr 22 15:08:36.292869 systemd[1]: Reached target swap.target - Swaps. Apr 22 15:08:36.292880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 22 15:08:36.292912 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 22 15:08:36.292922 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 22 15:08:36.292933 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 22 15:08:36.292944 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 22 15:08:36.292955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 22 15:08:36.292967 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 22 15:08:36.292977 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 22 15:08:36.292988 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 22 15:08:36.292998 systemd[1]: Mounting media.mount - External Media Directory... Apr 22 15:08:36.293009 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 22 15:08:36.293019 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 22 15:08:36.293030 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 22 15:08:36.293042 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 22 15:08:36.293052 systemd[1]: Reached target machines.target - Containers. Apr 22 15:08:36.293066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 22 15:08:36.293077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 15:08:36.293088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 22 15:08:36.293098 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 22 15:08:36.293109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 15:08:36.293120 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 22 15:08:36.293131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 15:08:36.293141 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 22 15:08:36.293154 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 15:08:36.293169 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 22 15:08:36.293180 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 22 15:08:36.293191 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 22 15:08:36.293201 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 22 15:08:36.293212 systemd[1]: Stopped systemd-fsck-usr.service. 
Apr 22 15:08:36.293222 kernel: fuse: init (API version 7.39) Apr 22 15:08:36.293233 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 15:08:36.293245 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 22 15:08:36.293256 kernel: ACPI: bus type drm_connector registered Apr 22 15:08:36.293268 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 22 15:08:36.293298 kernel: loop: module loaded Apr 22 15:08:36.293308 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 22 15:08:36.293319 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 22 15:08:36.293330 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 22 15:08:36.293370 systemd-journald[1117]: Collecting audit messages is disabled. Apr 22 15:08:36.293394 systemd-journald[1117]: Journal started Apr 22 15:08:36.293421 systemd-journald[1117]: Runtime Journal (/run/log/journal/814ab5cbc83644d88520a13917484996) is 5.9M, max 47.3M, 41.4M free. Apr 22 15:08:36.108592 systemd[1]: Queued start job for default target multi-user.target. Apr 22 15:08:36.121386 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 22 15:08:36.121786 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 22 15:08:36.294360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 22 15:08:36.296527 systemd[1]: verity-setup.service: Deactivated successfully. Apr 22 15:08:36.296566 systemd[1]: Stopped verity-setup.service. Apr 22 15:08:36.301900 systemd[1]: Started systemd-journald.service - Journal Service. Apr 22 15:08:36.302606 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 22 15:08:36.303611 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 22 15:08:36.304646 systemd[1]: Mounted media.mount - External Media Directory. Apr 22 15:08:36.305602 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 22 15:08:36.306665 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 22 15:08:36.307714 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 22 15:08:36.308788 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 22 15:08:36.311729 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 22 15:08:36.313067 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 22 15:08:36.313230 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 22 15:08:36.314532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 15:08:36.314691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 15:08:36.315863 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 22 15:08:36.316026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 22 15:08:36.318710 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 15:08:36.318864 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 15:08:36.320114 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Apr 22 15:08:36.320246 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 22 15:08:36.321650 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 15:08:36.321791 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 15:08:36.324381 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 22 15:08:36.325578 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 22 15:08:36.326993 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 22 15:08:36.328272 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 22 15:08:36.340264 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 22 15:08:36.342642 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 22 15:08:36.344508 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 22 15:08:36.345504 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 22 15:08:36.345540 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 22 15:08:36.347277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 22 15:08:36.359198 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 22 15:08:36.361315 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 22 15:08:36.362383 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 15:08:36.363467 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 22 15:08:36.365465 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 22 15:08:36.366587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 22 15:08:36.368527 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 22 15:08:36.369576 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 22 15:08:36.373437 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 22 15:08:36.376573 systemd-journald[1117]: Time spent on flushing to /var/log/journal/814ab5cbc83644d88520a13917484996 is 16.856ms for 866 entries. Apr 22 15:08:36.376573 systemd-journald[1117]: System Journal (/var/log/journal/814ab5cbc83644d88520a13917484996) is 8M, max 195.6M, 187.6M free. Apr 22 15:08:36.398584 systemd-journald[1117]: Received client request to flush runtime journal. Apr 22 15:08:36.398620 kernel: loop0: detected capacity change from 0 to 126448 Apr 22 15:08:36.377573 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 22 15:08:36.379706 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 22 15:08:36.383781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 22 15:08:36.385092 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 22 15:08:36.388074 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Apr 22 15:08:36.391118 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 22 15:08:36.396408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 22 15:08:36.398885 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 22 15:08:36.400552 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 22 15:08:36.406067 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 22 15:08:36.411565 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 22 15:08:36.413899 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 22 15:08:36.414851 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 22 15:08:36.417380 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 22 15:08:36.431606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 22 15:08:36.437602 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Apr 22 15:08:36.447507 kernel: loop1: detected capacity change from 0 to 103832 Apr 22 15:08:36.449495 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 22 15:08:36.461485 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Apr 22 15:08:36.461502 systemd-tmpfiles[1180]: ACLs are not supported, ignoring. Apr 22 15:08:36.465876 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 22 15:08:36.487384 kernel: loop2: detected capacity change from 0 to 194096 Apr 22 15:08:36.519396 kernel: loop3: detected capacity change from 0 to 126448 Apr 22 15:08:36.525377 kernel: loop4: detected capacity change from 0 to 103832 Apr 22 15:08:36.530606 kernel: loop5: detected capacity change from 0 to 194096 Apr 22 15:08:36.535225 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 22 15:08:36.535984 (sd-merge)[1188]: Merged extensions into '/usr'. Apr 22 15:08:36.539156 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Apr 22 15:08:36.539174 systemd[1]: Reloading... Apr 22 15:08:36.597392 zram_generator::config[1217]: No configuration found. Apr 22 15:08:36.646542 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 22 15:08:36.687012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 22 15:08:36.736873 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 22 15:08:36.737218 systemd[1]: Reloading finished in 197 ms. Apr 22 15:08:36.758386 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 22 15:08:36.759577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 22 15:08:36.772785 systemd[1]: Starting ensure-sysext.service... Apr 22 15:08:36.774723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 22 15:08:36.786393 systemd[1]: Reload requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)... Apr 22 15:08:36.786414 systemd[1]: Reloading... 
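The sd-merge messages above show systemd-sysext accepting the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' images and overlaying them onto /usr. As a rough illustration of the naming convention that merge relies on (an image called NAME.raw is expected to ship usr/lib/extension-release.d/extension-release.NAME with an ID field compatible with the host), here is a small sketch; it only inspects an already-unpacked image tree and is not how systemd-sysext itself is implemented.

    from pathlib import Path

    # Illustrative check of the systemd-sysext layout convention, assuming an
    # unpacked image tree at `root` for an extension called `name`. The real
    # validation is performed by systemd-sysext against the host os-release.
    def check_sysext_tree(root: Path, name: str) -> bool:
        release = root / "usr/lib/extension-release.d" / f"extension-release.{name}"
        if not release.is_file():
            return False
        fields = {}
        for line in release.read_text().splitlines():
            if "=" in line and not line.startswith("#"):
                key, value = line.split("=", 1)
                fields[key.strip()] = value.strip().strip('"')
        # Flatcar-targeted images typically set ID=flatcar; ID=_any skips the match.
        return fields.get("ID") in ("flatcar", "_any")

    # e.g. check_sysext_tree(Path("/tmp/kubernetes-unpacked"), "kubernetes")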
Apr 22 15:08:36.792438 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 22 15:08:36.792641 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 22 15:08:36.793240 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 22 15:08:36.793576 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Apr 22 15:08:36.793625 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Apr 22 15:08:36.796000 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 15:08:36.796010 systemd-tmpfiles[1251]: Skipping /boot Apr 22 15:08:36.804785 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Apr 22 15:08:36.804799 systemd-tmpfiles[1251]: Skipping /boot Apr 22 15:08:36.834388 zram_generator::config[1280]: No configuration found. Apr 22 15:08:36.921531 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 22 15:08:36.972725 systemd[1]: Reloading finished in 186 ms. Apr 22 15:08:36.985284 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 22 15:08:36.999605 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 22 15:08:37.007229 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 22 15:08:37.009417 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 22 15:08:37.018419 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 22 15:08:37.024639 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 22 15:08:37.030624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 22 15:08:37.032849 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 22 15:08:37.036609 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 15:08:37.049565 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 15:08:37.054676 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 15:08:37.057174 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 15:08:37.058277 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 15:08:37.058468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 15:08:37.061466 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 22 15:08:37.065423 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 22 15:08:37.068441 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 22 15:08:37.070361 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 15:08:37.070558 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 22 15:08:37.071915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 15:08:37.072069 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 15:08:37.077073 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 15:08:37.077467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 15:08:37.083971 systemd-udevd[1321]: Using default interface naming scheme 'v255'. Apr 22 15:08:37.084542 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 15:08:37.086423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 15:08:37.088578 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 22 15:08:37.092313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 15:08:37.094275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 15:08:37.094581 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 15:08:37.106790 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 22 15:08:37.107795 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 22 15:08:37.109426 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 22 15:08:37.111012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 22 15:08:37.113942 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 22 15:08:37.116143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 15:08:37.116541 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 15:08:37.117957 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 15:08:37.119018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 15:08:37.125984 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 15:08:37.126153 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 15:08:37.128287 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 22 15:08:37.146238 augenrules[1383]: No rules Apr 22 15:08:37.159690 systemd[1]: audit-rules.service: Deactivated successfully. Apr 22 15:08:37.159899 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 22 15:08:37.167546 systemd[1]: Finished ensure-sysext.service. Apr 22 15:08:37.173286 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Apr 22 15:08:37.174851 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 22 15:08:37.181399 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 22 15:08:37.183579 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 22 15:08:37.186523 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Apr 22 15:08:37.191454 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1354) Apr 22 15:08:37.204963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 22 15:08:37.206090 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 22 15:08:37.206141 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 22 15:08:37.208149 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 22 15:08:37.218077 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 22 15:08:37.219079 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 22 15:08:37.219776 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 22 15:08:37.220960 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 22 15:08:37.222289 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 22 15:08:37.222510 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 22 15:08:37.223707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 22 15:08:37.223865 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 22 15:08:37.225235 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 22 15:08:37.225447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 22 15:08:37.230001 systemd-resolved[1320]: Positive Trust Anchors: Apr 22 15:08:37.230022 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 22 15:08:37.230054 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 22 15:08:37.240846 systemd-resolved[1320]: Defaulting to hostname 'linux'. Apr 22 15:08:37.242160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 22 15:08:37.244822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 22 15:08:37.245895 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 22 15:08:37.245965 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 22 15:08:37.246110 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 22 15:08:37.248319 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 22 15:08:37.278572 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 22 15:08:37.293041 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 22 15:08:37.296590 systemd[1]: Reached target time-set.target - System Time Set. Apr 22 15:08:37.299712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 22 15:08:37.313478 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 22 15:08:37.316929 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 22 15:08:37.324008 systemd-networkd[1399]: lo: Link UP Apr 22 15:08:37.324260 systemd-networkd[1399]: lo: Gained carrier Apr 22 15:08:37.327237 systemd-networkd[1399]: Enumeration completed Apr 22 15:08:37.327388 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 22 15:08:37.328611 systemd[1]: Reached target network.target - Network. Apr 22 15:08:37.331020 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 22 15:08:37.333121 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 22 15:08:37.337664 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 22 15:08:37.337672 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 22 15:08:37.338556 systemd-networkd[1399]: eth0: Link UP Apr 22 15:08:37.338566 systemd-networkd[1399]: eth0: Gained carrier Apr 22 15:08:37.338582 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 22 15:08:37.347968 lvm[1418]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 22 15:08:37.362447 systemd-networkd[1399]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 22 15:08:37.363288 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Apr 22 15:08:37.365232 systemd-timesyncd[1402]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 22 15:08:37.365284 systemd-timesyncd[1402]: Initial clock synchronization to Tue 2025-04-22 15:08:37.217262 UTC. Apr 22 15:08:37.365303 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 22 15:08:37.369876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 22 15:08:37.383701 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 22 15:08:37.384913 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 22 15:08:37.385817 systemd[1]: Reached target sysinit.target - System Initialization. Apr 22 15:08:37.386888 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 22 15:08:37.387943 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 22 15:08:37.389143 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 22 15:08:37.390204 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 22 15:08:37.391280 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Apr 22 15:08:37.392320 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 22 15:08:37.392363 systemd[1]: Reached target paths.target - Path Units. Apr 22 15:08:37.393078 systemd[1]: Reached target timers.target - Timer Units. Apr 22 15:08:37.394714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 22 15:08:37.396943 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 22 15:08:37.399980 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 22 15:08:37.401153 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 22 15:08:37.402120 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 22 15:08:37.407260 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 22 15:08:37.408717 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 22 15:08:37.410786 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 22 15:08:37.412148 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 22 15:08:37.413092 systemd[1]: Reached target sockets.target - Socket Units. Apr 22 15:08:37.413855 systemd[1]: Reached target basic.target - Basic System. Apr 22 15:08:37.414568 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 22 15:08:37.414601 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 22 15:08:37.415594 systemd[1]: Starting containerd.service - containerd container runtime... Apr 22 15:08:37.417420 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 22 15:08:37.420528 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 22 15:08:37.420594 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 22 15:08:37.422278 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 22 15:08:37.423098 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 22 15:08:37.427143 jq[1432]: false Apr 22 15:08:37.426817 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 22 15:08:37.428789 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 22 15:08:37.433681 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 22 15:08:37.436496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 22 15:08:37.441450 dbus-daemon[1431]: [system] SELinux support is enabled Apr 22 15:08:37.446666 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 22 15:08:37.448293 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 22 15:08:37.448855 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 22 15:08:37.449320 extend-filesystems[1433]: Found loop3 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found loop4 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found loop5 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda1 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda2 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda3 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found usr Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda4 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda6 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda7 Apr 22 15:08:37.450956 extend-filesystems[1433]: Found vda9 Apr 22 15:08:37.450956 extend-filesystems[1433]: Checking size of /dev/vda9 Apr 22 15:08:37.449508 systemd[1]: Starting update-engine.service - Update Engine... Apr 22 15:08:37.467113 extend-filesystems[1433]: Resized partition /dev/vda9 Apr 22 15:08:37.451193 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 22 15:08:37.452963 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 22 15:08:37.459459 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 22 15:08:37.463412 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 22 15:08:37.464213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 22 15:08:37.464570 systemd[1]: motdgen.service: Deactivated successfully. Apr 22 15:08:37.464741 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 22 15:08:37.474576 extend-filesystems[1455]: resize2fs 1.47.2 (1-Jan-2025) Apr 22 15:08:37.482095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1362) Apr 22 15:08:37.482625 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 22 15:08:37.486517 jq[1449]: true Apr 22 15:08:37.482857 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 22 15:08:37.494647 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 22 15:08:37.503564 jq[1458]: true Apr 22 15:08:37.503793 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 22 15:08:37.503820 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 22 15:08:37.504840 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 22 15:08:37.504865 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 22 15:08:37.521378 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 22 15:08:37.526906 update_engine[1446]: I20250422 15:08:37.526563 1446 main.cc:92] Flatcar Update Engine starting Apr 22 15:08:37.537506 update_engine[1446]: I20250422 15:08:37.536803 1446 update_check_scheduler.cc:74] Next update check in 6m36s Apr 22 15:08:37.537524 systemd[1]: Started update-engine.service - Update Engine. Apr 22 15:08:37.540576 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Apr 22 15:08:37.549009 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 22 15:08:37.550650 tar[1456]: linux-arm64/helm Apr 22 15:08:37.560259 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (Power Button) Apr 22 15:08:37.560826 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 22 15:08:37.560826 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 22 15:08:37.560826 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 22 15:08:37.577495 extend-filesystems[1433]: Resized filesystem in /dev/vda9 Apr 22 15:08:37.578153 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Apr 22 15:08:37.561742 systemd-logind[1444]: New seat seat0. Apr 22 15:08:37.561788 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 22 15:08:37.562046 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 22 15:08:37.564706 systemd[1]: Started systemd-logind.service - User Login Management. Apr 22 15:08:37.572797 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 22 15:08:37.575137 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 22 15:08:37.631695 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 22 15:08:37.717854 containerd[1459]: time="2025-04-22T15:08:37Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 22 15:08:37.718922 containerd[1459]: time="2025-04-22T15:08:37.718884640Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Apr 22 15:08:37.728092 containerd[1459]: time="2025-04-22T15:08:37.728029560Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.2µs" Apr 22 15:08:37.728092 containerd[1459]: time="2025-04-22T15:08:37.728074560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 22 15:08:37.728092 containerd[1459]: time="2025-04-22T15:08:37.728102920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 22 15:08:37.728418 containerd[1459]: time="2025-04-22T15:08:37.728379040Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 22 15:08:37.728418 containerd[1459]: time="2025-04-22T15:08:37.728413400Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 22 15:08:37.728478 containerd[1459]: time="2025-04-22T15:08:37.728448480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728527 containerd[1459]: time="2025-04-22T15:08:37.728508840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728548 containerd[1459]: time="2025-04-22T15:08:37.728526080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728865 containerd[1459]: time="2025-04-22T15:08:37.728831000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs 
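For scale, the resize recorded above grows the root filesystem from 553472 to 1864699 blocks of 4 KiB each; a quick check of what that means in bytes (plain arithmetic, not taken from the log):

    BLOCK = 4096  # 4 KiB ext4 blocks, as reported by resize2fs above
    old_blocks, new_blocks = 553_472, 1_864_699
    print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
    print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB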
(ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728865 containerd[1459]: time="2025-04-22T15:08:37.728854240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728905 containerd[1459]: time="2025-04-22T15:08:37.728867040Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728905 containerd[1459]: time="2025-04-22T15:08:37.728876880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 22 15:08:37.728971 containerd[1459]: time="2025-04-22T15:08:37.728956160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 22 15:08:37.729207 containerd[1459]: time="2025-04-22T15:08:37.729175440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 22 15:08:37.729239 containerd[1459]: time="2025-04-22T15:08:37.729215920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 22 15:08:37.729239 containerd[1459]: time="2025-04-22T15:08:37.729229680Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 22 15:08:37.729290 containerd[1459]: time="2025-04-22T15:08:37.729265880Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 22 15:08:37.729816 containerd[1459]: time="2025-04-22T15:08:37.729604560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 22 15:08:37.729816 containerd[1459]: time="2025-04-22T15:08:37.729708080Z" level=info msg="metadata content store policy set" policy=shared Apr 22 15:08:37.733311 containerd[1459]: time="2025-04-22T15:08:37.733276400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 22 15:08:37.733474 containerd[1459]: time="2025-04-22T15:08:37.733457280Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 22 15:08:37.733589 containerd[1459]: time="2025-04-22T15:08:37.733574400Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 22 15:08:37.733646 containerd[1459]: time="2025-04-22T15:08:37.733632680Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 22 15:08:37.733697 containerd[1459]: time="2025-04-22T15:08:37.733684960Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 22 15:08:37.733746 containerd[1459]: time="2025-04-22T15:08:37.733733760Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 22 15:08:37.733803 containerd[1459]: time="2025-04-22T15:08:37.733789880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 22 15:08:37.733856 containerd[1459]: time="2025-04-22T15:08:37.733843320Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.733895760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.733914000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.733925200Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.733938800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734085400Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734109440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734123440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734135600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734147400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734159160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734174480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734186640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734201080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734213920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 22 15:08:37.734592 containerd[1459]: time="2025-04-22T15:08:37.734235920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 22 15:08:37.734874 containerd[1459]: time="2025-04-22T15:08:37.734539480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 22 15:08:37.734874 containerd[1459]: time="2025-04-22T15:08:37.734559200Z" level=info msg="Start snapshots syncer" Apr 22 15:08:37.734952 containerd[1459]: time="2025-04-22T15:08:37.734932760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 22 15:08:37.736979 containerd[1459]: time="2025-04-22T15:08:37.736927720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 22 15:08:37.737164 containerd[1459]: time="2025-04-22T15:08:37.737143120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 22 15:08:37.737309 containerd[1459]: time="2025-04-22T15:08:37.737290760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 22 15:08:37.737535 containerd[1459]: time="2025-04-22T15:08:37.737511520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 22 15:08:37.737608 containerd[1459]: time="2025-04-22T15:08:37.737594720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 22 15:08:37.737682 containerd[1459]: time="2025-04-22T15:08:37.737668560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 22 15:08:37.737732 containerd[1459]: time="2025-04-22T15:08:37.737720800Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 22 15:08:37.737786 containerd[1459]: time="2025-04-22T15:08:37.737773760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 22 15:08:37.737838 containerd[1459]: time="2025-04-22T15:08:37.737825040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 22 15:08:37.737890 containerd[1459]: time="2025-04-22T15:08:37.737877320Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 22 15:08:37.737976 containerd[1459]: time="2025-04-22T15:08:37.737961320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 22 15:08:37.738045 containerd[1459]: 
time="2025-04-22T15:08:37.738031200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 22 15:08:37.738098 containerd[1459]: time="2025-04-22T15:08:37.738085800Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 22 15:08:37.738741 containerd[1459]: time="2025-04-22T15:08:37.738706640Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 22 15:08:37.738826 containerd[1459]: time="2025-04-22T15:08:37.738810320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 22 15:08:37.738877 containerd[1459]: time="2025-04-22T15:08:37.738865320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 22 15:08:37.738930 containerd[1459]: time="2025-04-22T15:08:37.738915560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 22 15:08:37.738975 containerd[1459]: time="2025-04-22T15:08:37.738962800Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 22 15:08:37.739064 containerd[1459]: time="2025-04-22T15:08:37.739045400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 22 15:08:37.739117 containerd[1459]: time="2025-04-22T15:08:37.739104440Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 22 15:08:37.739235 containerd[1459]: time="2025-04-22T15:08:37.739224440Z" level=info msg="runtime interface created" Apr 22 15:08:37.739279 containerd[1459]: time="2025-04-22T15:08:37.739268240Z" level=info msg="created NRI interface" Apr 22 15:08:37.739332 containerd[1459]: time="2025-04-22T15:08:37.739317160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 22 15:08:37.739420 containerd[1459]: time="2025-04-22T15:08:37.739399600Z" level=info msg="Connect containerd service" Apr 22 15:08:37.739522 containerd[1459]: time="2025-04-22T15:08:37.739506800Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 22 15:08:37.740364 containerd[1459]: time="2025-04-22T15:08:37.740320120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 22 15:08:37.786251 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 22 15:08:37.806210 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 22 15:08:37.810875 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 22 15:08:37.830664 systemd[1]: issuegen.service: Deactivated successfully. Apr 22 15:08:37.830908 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 22 15:08:37.833884 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 22 15:08:37.847470 containerd[1459]: time="2025-04-22T15:08:37.847235600Z" level=info msg="Start subscribing containerd event" Apr 22 15:08:37.847470 containerd[1459]: time="2025-04-22T15:08:37.847392840Z" level=info msg="Start recovering state" Apr 22 15:08:37.847613 containerd[1459]: time="2025-04-22T15:08:37.847591000Z" level=info msg="Start event monitor" Apr 22 15:08:37.847778 containerd[1459]: time="2025-04-22T15:08:37.847763280Z" level=info msg="Start cni network conf syncer for default" Apr 22 15:08:37.847778 containerd[1459]: time="2025-04-22T15:08:37.847774040Z" level=info msg="Start streaming server" Apr 22 15:08:37.847824 containerd[1459]: time="2025-04-22T15:08:37.847784680Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 22 15:08:37.847824 containerd[1459]: time="2025-04-22T15:08:37.847792280Z" level=info msg="runtime interface starting up..." Apr 22 15:08:37.847824 containerd[1459]: time="2025-04-22T15:08:37.847798120Z" level=info msg="starting plugins..." Apr 22 15:08:37.847824 containerd[1459]: time="2025-04-22T15:08:37.847813840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 22 15:08:37.848133 containerd[1459]: time="2025-04-22T15:08:37.848031040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 22 15:08:37.848133 containerd[1459]: time="2025-04-22T15:08:37.848123840Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 22 15:08:37.848266 containerd[1459]: time="2025-04-22T15:08:37.848239880Z" level=info msg="containerd successfully booted in 0.130806s" Apr 22 15:08:37.851561 systemd[1]: Started containerd.service - containerd container runtime. Apr 22 15:08:37.861702 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 22 15:08:37.867261 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 22 15:08:37.869413 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Apr 22 15:08:37.870442 systemd[1]: Reached target getty.target - Login Prompts. Apr 22 15:08:37.906093 tar[1456]: linux-arm64/LICENSE Apr 22 15:08:37.906225 tar[1456]: linux-arm64/README.md Apr 22 15:08:37.930865 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 22 15:08:38.917477 systemd-networkd[1399]: eth0: Gained IPv6LL Apr 22 15:08:38.921394 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 22 15:08:38.922702 systemd[1]: Reached target network-online.target - Network is Online. Apr 22 15:08:38.924865 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 22 15:08:38.926904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:08:38.928626 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 22 15:08:38.949599 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 22 15:08:38.949786 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 22 15:08:38.951007 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 22 15:08:38.952188 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 22 15:08:39.396210 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:08:39.397554 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 22 15:08:39.399251 (kubelet)[1560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 15:08:39.399394 systemd[1]: Startup finished in 535ms (kernel) + 5.019s (initrd) + 3.725s (userspace) = 9.281s. Apr 22 15:08:39.847562 kubelet[1560]: E0422 15:08:39.847457 1560 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 15:08:39.850225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 15:08:39.850386 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 15:08:39.850694 systemd[1]: kubelet.service: Consumed 815ms CPU time, 241.4M memory peak. Apr 22 15:08:43.394859 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 22 15:08:43.396146 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:46742.service - OpenSSH per-connection server daemon (10.0.0.1:46742). Apr 22 15:08:43.464164 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 46742 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:43.465836 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:43.475863 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 22 15:08:43.476782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 22 15:08:43.481278 systemd-logind[1444]: New session 1 of user core. Apr 22 15:08:43.497487 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 22 15:08:43.499725 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 22 15:08:43.512155 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 22 15:08:43.514070 systemd-logind[1444]: New session c1 of user core. Apr 22 15:08:43.615425 systemd[1579]: Queued start job for default target default.target. Apr 22 15:08:43.625368 systemd[1579]: Created slice app.slice - User Application Slice. Apr 22 15:08:43.625401 systemd[1579]: Reached target paths.target - Paths. Apr 22 15:08:43.625444 systemd[1579]: Reached target timers.target - Timers. Apr 22 15:08:43.626757 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 22 15:08:43.635704 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 22 15:08:43.635859 systemd[1579]: Reached target sockets.target - Sockets. Apr 22 15:08:43.636008 systemd[1579]: Reached target basic.target - Basic System. Apr 22 15:08:43.636045 systemd[1579]: Reached target default.target - Main User Target. Apr 22 15:08:43.636071 systemd[1579]: Startup finished in 117ms. Apr 22 15:08:43.636161 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 22 15:08:43.637520 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 22 15:08:43.702781 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:46756.service - OpenSSH per-connection server daemon (10.0.0.1:46756). 
Apr 22 15:08:43.752693 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 46756 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:43.753829 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:43.757868 systemd-logind[1444]: New session 2 of user core. Apr 22 15:08:43.765632 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 22 15:08:43.815379 sshd[1592]: Connection closed by 10.0.0.1 port 46756 Apr 22 15:08:43.817565 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Apr 22 15:08:43.830450 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:46756.service: Deactivated successfully. Apr 22 15:08:43.831801 systemd[1]: session-2.scope: Deactivated successfully. Apr 22 15:08:43.832376 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit. Apr 22 15:08:43.833879 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:46760.service - OpenSSH per-connection server daemon (10.0.0.1:46760). Apr 22 15:08:43.834531 systemd-logind[1444]: Removed session 2. Apr 22 15:08:43.882645 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 46760 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:43.883686 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:43.887465 systemd-logind[1444]: New session 3 of user core. Apr 22 15:08:43.893549 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 22 15:08:43.940461 sshd[1600]: Connection closed by 10.0.0.1 port 46760 Apr 22 15:08:43.940732 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Apr 22 15:08:43.954219 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:46760.service: Deactivated successfully. Apr 22 15:08:43.955551 systemd[1]: session-3.scope: Deactivated successfully. Apr 22 15:08:43.957471 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit. Apr 22 15:08:43.958370 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:46770.service - OpenSSH per-connection server daemon (10.0.0.1:46770). Apr 22 15:08:43.959455 systemd-logind[1444]: Removed session 3. Apr 22 15:08:44.000198 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 46770 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:44.001217 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:44.004834 systemd-logind[1444]: New session 4 of user core. Apr 22 15:08:44.018486 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 22 15:08:44.068446 sshd[1608]: Connection closed by 10.0.0.1 port 46770 Apr 22 15:08:44.068737 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Apr 22 15:08:44.083276 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:46770.service: Deactivated successfully. Apr 22 15:08:44.086514 systemd[1]: session-4.scope: Deactivated successfully. Apr 22 15:08:44.087688 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit. Apr 22 15:08:44.088722 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:46778.service - OpenSSH per-connection server daemon (10.0.0.1:46778). Apr 22 15:08:44.089396 systemd-logind[1444]: Removed session 4. 
Apr 22 15:08:44.138084 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 46778 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:44.139177 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:44.143304 systemd-logind[1444]: New session 5 of user core. Apr 22 15:08:44.166465 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 22 15:08:44.225539 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 22 15:08:44.227630 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 22 15:08:44.241198 sudo[1617]: pam_unix(sudo:session): session closed for user root Apr 22 15:08:44.245369 sshd[1616]: Connection closed by 10.0.0.1 port 46778 Apr 22 15:08:44.244845 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Apr 22 15:08:44.254733 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:46778.service: Deactivated successfully. Apr 22 15:08:44.256127 systemd[1]: session-5.scope: Deactivated successfully. Apr 22 15:08:44.256799 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit. Apr 22 15:08:44.258515 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:46780.service - OpenSSH per-connection server daemon (10.0.0.1:46780). Apr 22 15:08:44.259271 systemd-logind[1444]: Removed session 5. Apr 22 15:08:44.310042 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 46780 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:44.311091 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:44.314853 systemd-logind[1444]: New session 6 of user core. Apr 22 15:08:44.319513 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 22 15:08:44.368519 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 22 15:08:44.368769 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 22 15:08:44.371471 sudo[1627]: pam_unix(sudo:session): session closed for user root Apr 22 15:08:44.375630 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 22 15:08:44.375872 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 22 15:08:44.384259 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 22 15:08:44.416606 augenrules[1649]: No rules Apr 22 15:08:44.417672 systemd[1]: audit-rules.service: Deactivated successfully. Apr 22 15:08:44.418465 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 22 15:08:44.419228 sudo[1626]: pam_unix(sudo:session): session closed for user root Apr 22 15:08:44.420405 sshd[1625]: Connection closed by 10.0.0.1 port 46780 Apr 22 15:08:44.420673 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Apr 22 15:08:44.432383 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:46780.service: Deactivated successfully. Apr 22 15:08:44.434522 systemd[1]: session-6.scope: Deactivated successfully. Apr 22 15:08:44.436519 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit. Apr 22 15:08:44.437694 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:46796.service - OpenSSH per-connection server daemon (10.0.0.1:46796). Apr 22 15:08:44.438779 systemd-logind[1444]: Removed session 6. 
Apr 22 15:08:44.488538 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 46796 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:08:44.489460 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:08:44.493333 systemd-logind[1444]: New session 7 of user core. Apr 22 15:08:44.507488 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 22 15:08:44.557035 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 22 15:08:44.557554 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 22 15:08:44.882259 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 22 15:08:44.896640 (dockerd)[1682]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 22 15:08:45.147898 dockerd[1682]: time="2025-04-22T15:08:45.147775720Z" level=info msg="Starting up" Apr 22 15:08:45.149748 dockerd[1682]: time="2025-04-22T15:08:45.149711743Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 22 15:08:45.244497 dockerd[1682]: time="2025-04-22T15:08:45.244444146Z" level=info msg="Loading containers: start." Apr 22 15:08:45.377368 kernel: Initializing XFRM netlink socket Apr 22 15:08:45.434121 systemd-networkd[1399]: docker0: Link UP Apr 22 15:08:45.498564 dockerd[1682]: time="2025-04-22T15:08:45.498469478Z" level=info msg="Loading containers: done." Apr 22 15:08:45.513200 dockerd[1682]: time="2025-04-22T15:08:45.513145742Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 22 15:08:45.513428 dockerd[1682]: time="2025-04-22T15:08:45.513235411Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Apr 22 15:08:45.513531 dockerd[1682]: time="2025-04-22T15:08:45.513503703Z" level=info msg="Daemon has completed initialization" Apr 22 15:08:45.540722 dockerd[1682]: time="2025-04-22T15:08:45.540651400Z" level=info msg="API listen on /run/docker.sock" Apr 22 15:08:45.541100 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 22 15:08:46.387720 containerd[1459]: time="2025-04-22T15:08:46.387628065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Apr 22 15:08:47.088093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869538961.mount: Deactivated successfully. 
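dockerd above completes initialization and serves the Engine API on its default unix socket, /run/docker.sock ("API listen on /run/docker.sock"). A quick liveness check from Go is a GET on the API's /_ping endpoint over that socket; a minimal sketch using only the standard library (socket path taken from the log, no Docker SDK assumed):

    // docker_ping.go - probe the Docker Engine API over its unix socket.
    package main

    import (
        "context"
        "fmt"
        "io"
        "log"
        "net"
        "net/http"
    )

    func main() {
        tr := &http.Transport{
            // Dial the unix socket instead of TCP; the "unix" host below is a dummy.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return (&net.Dialer{}).DialContext(ctx, "unix", "/run/docker.sock")
            },
        }
        resp, err := (&http.Client{Transport: tr}).Get("http://unix/_ping")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s %s\n", resp.Status, body) // a healthy daemon answers 200 with "OK"
    }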
Apr 22 15:08:48.508181 containerd[1459]: time="2025-04-22T15:08:48.507971951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:48.508521 containerd[1459]: time="2025-04-22T15:08:48.508240682Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Apr 22 15:08:48.509087 containerd[1459]: time="2025-04-22T15:08:48.509061125Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:48.511785 containerd[1459]: time="2025-04-22T15:08:48.511740991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:48.512852 containerd[1459]: time="2025-04-22T15:08:48.512810700Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.124611516s" Apr 22 15:08:48.512852 containerd[1459]: time="2025-04-22T15:08:48.512850387Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Apr 22 15:08:48.528272 containerd[1459]: time="2025-04-22T15:08:48.528200054Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Apr 22 15:08:49.861977 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 22 15:08:49.864359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:08:50.033037 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:08:50.036762 (kubelet)[1976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 15:08:50.152159 kubelet[1976]: E0422 15:08:50.152036 1976 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 15:08:50.155113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 15:08:50.155258 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 15:08:50.155706 systemd[1]: kubelet.service: Consumed 142ms CPU time, 97.4M memory peak. 
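The kubelet exits here (and at 15:08:39 above) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so the unit is expected to crash-loop until the node is bootstrapped. A small sketch of the same pre-flight check, with the path taken from the log:

    // kubelet_config_check.go - report whether the kubelet's config file exists yet.
    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); errors.Is(err, fs.ErrNotExist) {
            fmt.Printf("%s missing: kubelet will keep exiting until kubeadm init/join writes it\n", path)
            os.Exit(1)
        } else if err != nil {
            fmt.Println("stat error:", err)
            os.Exit(1)
        }
        fmt.Printf("%s present\n", path)
    }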
Apr 22 15:08:50.213686 containerd[1459]: time="2025-04-22T15:08:50.213632298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:50.214400 containerd[1459]: time="2025-04-22T15:08:50.214290800Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Apr 22 15:08:50.214963 containerd[1459]: time="2025-04-22T15:08:50.214926879Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:50.218020 containerd[1459]: time="2025-04-22T15:08:50.217982603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:50.219449 containerd[1459]: time="2025-04-22T15:08:50.219415990Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.691183172s" Apr 22 15:08:50.219516 containerd[1459]: time="2025-04-22T15:08:50.219449845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Apr 22 15:08:50.234763 containerd[1459]: time="2025-04-22T15:08:50.234532747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Apr 22 15:08:51.470568 containerd[1459]: time="2025-04-22T15:08:51.470526604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:51.471648 containerd[1459]: time="2025-04-22T15:08:51.471107316Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Apr 22 15:08:51.472303 containerd[1459]: time="2025-04-22T15:08:51.472019617Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:51.474455 containerd[1459]: time="2025-04-22T15:08:51.474413018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:51.475298 containerd[1459]: time="2025-04-22T15:08:51.475268029Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.240698916s" Apr 22 15:08:51.475357 containerd[1459]: time="2025-04-22T15:08:51.475302651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Apr 22 15:08:51.490050 
containerd[1459]: time="2025-04-22T15:08:51.490013654Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Apr 22 15:08:52.802252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185318277.mount: Deactivated successfully. Apr 22 15:08:53.005471 containerd[1459]: time="2025-04-22T15:08:53.005418455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:53.006377 containerd[1459]: time="2025-04-22T15:08:53.006182484Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Apr 22 15:08:53.007096 containerd[1459]: time="2025-04-22T15:08:53.007071472Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:53.009145 containerd[1459]: time="2025-04-22T15:08:53.009111086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:53.009828 containerd[1459]: time="2025-04-22T15:08:53.009668403Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.519616696s" Apr 22 15:08:53.009828 containerd[1459]: time="2025-04-22T15:08:53.009714759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Apr 22 15:08:53.026068 containerd[1459]: time="2025-04-22T15:08:53.026030040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Apr 22 15:08:53.592337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3329970145.mount: Deactivated successfully. 
Apr 22 15:08:54.527852 containerd[1459]: time="2025-04-22T15:08:54.527806206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:54.528805 containerd[1459]: time="2025-04-22T15:08:54.528616968Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Apr 22 15:08:54.529638 containerd[1459]: time="2025-04-22T15:08:54.529596813Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:54.533988 containerd[1459]: time="2025-04-22T15:08:54.533931938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:54.535109 containerd[1459]: time="2025-04-22T15:08:54.535075242Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.508995693s" Apr 22 15:08:54.535200 containerd[1459]: time="2025-04-22T15:08:54.535110964Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Apr 22 15:08:54.549620 containerd[1459]: time="2025-04-22T15:08:54.549591867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Apr 22 15:08:55.054144 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872944132.mount: Deactivated successfully. 
Apr 22 15:08:55.057838 containerd[1459]: time="2025-04-22T15:08:55.057800800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:55.058220 containerd[1459]: time="2025-04-22T15:08:55.058165153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Apr 22 15:08:55.059026 containerd[1459]: time="2025-04-22T15:08:55.059003957Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:55.060871 containerd[1459]: time="2025-04-22T15:08:55.060825558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:55.061655 containerd[1459]: time="2025-04-22T15:08:55.061627556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 512.002797ms" Apr 22 15:08:55.061723 containerd[1459]: time="2025-04-22T15:08:55.061658062Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Apr 22 15:08:55.077275 containerd[1459]: time="2025-04-22T15:08:55.077243663Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Apr 22 15:08:55.556766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092644298.mount: Deactivated successfully. Apr 22 15:08:58.275957 containerd[1459]: time="2025-04-22T15:08:58.275906399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:58.280351 containerd[1459]: time="2025-04-22T15:08:58.280287272Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Apr 22 15:08:58.281329 containerd[1459]: time="2025-04-22T15:08:58.281278546Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:58.285525 containerd[1459]: time="2025-04-22T15:08:58.284557468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:08:58.285525 containerd[1459]: time="2025-04-22T15:08:58.285393697Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.20805883s" Apr 22 15:08:58.285525 containerd[1459]: time="2025-04-22T15:08:58.285428288Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Apr 22 15:09:00.361695 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 22 15:09:00.363268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:00.498713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:00.502294 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 22 15:09:00.538779 kubelet[2234]: E0422 15:09:00.538686 2234 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 22 15:09:00.541022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 22 15:09:00.541262 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 22 15:09:00.541780 systemd[1]: kubelet.service: Consumed 131ms CPU time, 96.7M memory peak. Apr 22 15:09:05.245803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:05.246137 systemd[1]: kubelet.service: Consumed 131ms CPU time, 96.7M memory peak. Apr 22 15:09:05.248153 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:05.275426 systemd[1]: Reload requested from client PID 2250 ('systemctl') (unit session-7.scope)... Apr 22 15:09:05.275444 systemd[1]: Reloading... Apr 22 15:09:05.348380 zram_generator::config[2294]: No configuration found. Apr 22 15:09:05.470060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 22 15:09:05.541940 systemd[1]: Reloading finished in 266 ms. Apr 22 15:09:05.598652 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:05.600171 systemd[1]: kubelet.service: Deactivated successfully. Apr 22 15:09:05.600379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:05.600424 systemd[1]: kubelet.service: Consumed 81ms CPU time, 82.6M memory peak. Apr 22 15:09:05.601766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:05.697932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:05.701628 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 22 15:09:05.736823 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 15:09:05.736823 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 22 15:09:05.736823 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 22 15:09:05.737724 kubelet[2340]: I0422 15:09:05.737676 2340 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 22 15:09:08.228123 kubelet[2340]: I0422 15:09:08.228079 2340 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 22 15:09:08.228123 kubelet[2340]: I0422 15:09:08.228110 2340 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 22 15:09:08.228537 kubelet[2340]: I0422 15:09:08.228304 2340 server.go:927] "Client rotation is on, will bootstrap in background" Apr 22 15:09:08.267532 kubelet[2340]: E0422 15:09:08.267485 2340 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.267616 kubelet[2340]: I0422 15:09:08.267560 2340 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 22 15:09:08.275548 kubelet[2340]: I0422 15:09:08.275526 2340 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 22 15:09:08.276682 kubelet[2340]: I0422 15:09:08.276647 2340 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 22 15:09:08.276843 kubelet[2340]: I0422 15:09:08.276681 2340 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 22 15:09:08.276927 kubelet[2340]: I0422 15:09:08.276900 2340 topology_manager.go:138] "Creating topology manager with none policy" Apr 22 15:09:08.276927 kubelet[2340]: I0422 15:09:08.276910 2340 container_manager_linux.go:301] "Creating device plugin manager" Apr 22 15:09:08.277171 kubelet[2340]: I0422 15:09:08.277148 2340 state_mem.go:36] "Initialized new in-memory state store" Apr 22 
15:09:08.279942 kubelet[2340]: I0422 15:09:08.279917 2340 kubelet.go:400] "Attempting to sync node with API server" Apr 22 15:09:08.279942 kubelet[2340]: I0422 15:09:08.279939 2340 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 22 15:09:08.280293 kubelet[2340]: I0422 15:09:08.280270 2340 kubelet.go:312] "Adding apiserver pod source" Apr 22 15:09:08.280558 kubelet[2340]: I0422 15:09:08.280528 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 22 15:09:08.284750 kubelet[2340]: W0422 15:09:08.283103 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.284750 kubelet[2340]: E0422 15:09:08.283160 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.284750 kubelet[2340]: W0422 15:09:08.283222 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.284750 kubelet[2340]: E0422 15:09:08.283247 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.286259 kubelet[2340]: I0422 15:09:08.285969 2340 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Apr 22 15:09:08.286481 kubelet[2340]: I0422 15:09:08.286451 2340 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 22 15:09:08.286601 kubelet[2340]: W0422 15:09:08.286589 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 22 15:09:08.287663 kubelet[2340]: I0422 15:09:08.287392 2340 server.go:1264] "Started kubelet" Apr 22 15:09:08.287724 kubelet[2340]: I0422 15:09:08.287654 2340 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 22 15:09:08.288758 kubelet[2340]: I0422 15:09:08.288743 2340 server.go:455] "Adding debug handlers to kubelet server" Apr 22 15:09:08.290372 kubelet[2340]: I0422 15:09:08.289891 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 22 15:09:08.290372 kubelet[2340]: I0422 15:09:08.290129 2340 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 22 15:09:08.291215 kubelet[2340]: I0422 15:09:08.291185 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 22 15:09:08.294161 kubelet[2340]: E0422 15:09:08.292541 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1838ac8644e2a67e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-04-22 15:09:08.287366782 +0000 UTC m=+2.582611686,LastTimestamp:2025-04-22 15:09:08.287366782 +0000 UTC m=+2.582611686,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 22 15:09:08.294161 kubelet[2340]: E0422 15:09:08.292838 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:08.294161 kubelet[2340]: I0422 15:09:08.293011 2340 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 22 15:09:08.294161 kubelet[2340]: I0422 15:09:08.293072 2340 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 22 15:09:08.294161 kubelet[2340]: I0422 15:09:08.294073 2340 reconciler.go:26] "Reconciler: start to sync state" Apr 22 15:09:08.294519 kubelet[2340]: W0422 15:09:08.294484 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.294581 kubelet[2340]: E0422 15:09:08.294526 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.294909 kubelet[2340]: E0422 15:09:08.294711 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Apr 22 15:09:08.296796 kubelet[2340]: I0422 15:09:08.296769 2340 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 22 15:09:08.301699 kubelet[2340]: I0422 15:09:08.301673 2340 factory.go:221] 
Registration of the containerd container factory successfully Apr 22 15:09:08.301699 kubelet[2340]: I0422 15:09:08.301692 2340 factory.go:221] Registration of the systemd container factory successfully Apr 22 15:09:08.302652 kubelet[2340]: E0422 15:09:08.302630 2340 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 22 15:09:08.307358 kubelet[2340]: I0422 15:09:08.307310 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 22 15:09:08.308195 kubelet[2340]: I0422 15:09:08.308157 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 22 15:09:08.308330 kubelet[2340]: I0422 15:09:08.308312 2340 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 22 15:09:08.308330 kubelet[2340]: I0422 15:09:08.308329 2340 kubelet.go:2337] "Starting kubelet main sync loop" Apr 22 15:09:08.308433 kubelet[2340]: E0422 15:09:08.308413 2340 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 15:09:08.312646 kubelet[2340]: W0422 15:09:08.312604 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.312728 kubelet[2340]: E0422 15:09:08.312652 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:08.312728 kubelet[2340]: I0422 15:09:08.312722 2340 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 22 15:09:08.312782 kubelet[2340]: I0422 15:09:08.312730 2340 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 22 15:09:08.312782 kubelet[2340]: I0422 15:09:08.312744 2340 state_mem.go:36] "Initialized new in-memory state store" Apr 22 15:09:08.315093 kubelet[2340]: I0422 15:09:08.315071 2340 policy_none.go:49] "None policy: Start" Apr 22 15:09:08.315658 kubelet[2340]: I0422 15:09:08.315631 2340 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 22 15:09:08.315658 kubelet[2340]: I0422 15:09:08.315657 2340 state_mem.go:35] "Initializing new in-memory state store" Apr 22 15:09:08.321021 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 22 15:09:08.337670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 22 15:09:08.340470 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
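Every reflector, lease, and event call above fails with "connect: connection refused" against https://10.0.0.54:6443 because the kube-apiserver is not running yet; it is one of the static pods this kubelet is about to launch, so these errors clear once that pod comes up. A reachability probe of the same endpoint, as a sketch (the /readyz path and the skipped TLS verification are assumptions; /readyz is typically served to unauthenticated clients):

    // apiserver_probe.go - check whether the API server at 10.0.0.54:6443 is reachable yet.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Skip cert verification for a bare reachability check; a real client
            // would trust /etc/kubernetes/pki/ca.crt instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://10.0.0.54:6443/readyz")
        if err != nil {
            fmt.Println("not reachable yet:", err) // matches the "connection refused" entries above
            return
        }
        defer resp.Body.Close()
        fmt.Println("reachable:", resp.Status)
    }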
Apr 22 15:09:08.347283 kubelet[2340]: I0422 15:09:08.347101 2340 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 22 15:09:08.347381 kubelet[2340]: I0422 15:09:08.347270 2340 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 15:09:08.347413 kubelet[2340]: I0422 15:09:08.347384 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 22 15:09:08.348816 kubelet[2340]: E0422 15:09:08.348790 2340 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 22 15:09:08.394504 kubelet[2340]: I0422 15:09:08.394451 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 22 15:09:08.394790 kubelet[2340]: E0422 15:09:08.394753 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 22 15:09:08.409208 kubelet[2340]: I0422 15:09:08.409116 2340 topology_manager.go:215] "Topology Admit Handler" podUID="7432ab4c6fe37c1a63115267b2d6eb03" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 22 15:09:08.410434 kubelet[2340]: I0422 15:09:08.409886 2340 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 22 15:09:08.410617 kubelet[2340]: I0422 15:09:08.410578 2340 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 22 15:09:08.416642 systemd[1]: Created slice kubepods-burstable-pod7432ab4c6fe37c1a63115267b2d6eb03.slice - libcontainer container kubepods-burstable-pod7432ab4c6fe37c1a63115267b2d6eb03.slice. Apr 22 15:09:08.443486 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. Apr 22 15:09:08.455559 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. 
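The three "Topology Admit Handler" entries are the static pods picked up from the staticPodPath logged earlier (/etc/kubernetes/manifests): kube-apiserver-localhost, kube-controller-manager-localhost, and kube-scheduler-localhost, each assigned its own kubepods-burstable-pod<UID>.slice cgroup. Listing that directory shows the manifests the kubelet is mirroring; a trivial sketch:

    // list_static_pods.go - list the static pod manifests the kubelet watches.
    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        const dir = "/etc/kubernetes/manifests" // staticPodPath from the kubelet log
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            fmt.Println(dir + "/" + e.Name())
        }
    }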
Apr 22 15:09:08.495171 kubelet[2340]: I0422 15:09:08.494985 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:08.495800 kubelet[2340]: E0422 15:09:08.495763 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="400ms" Apr 22 15:09:08.595443 kubelet[2340]: I0422 15:09:08.595334 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:08.595845 kubelet[2340]: I0422 15:09:08.595452 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:08.595845 kubelet[2340]: I0422 15:09:08.595501 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:08.595845 kubelet[2340]: I0422 15:09:08.595547 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:08.595845 kubelet[2340]: I0422 15:09:08.595595 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:08.595845 kubelet[2340]: I0422 15:09:08.595619 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:08.596022 kubelet[2340]: I0422 15:09:08.595640 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:08.596022 
kubelet[2340]: I0422 15:09:08.595661 2340 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Apr 22 15:09:08.596528 kubelet[2340]: I0422 15:09:08.596503 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 22 15:09:08.596942 kubelet[2340]: E0422 15:09:08.596914 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 22 15:09:08.740770 containerd[1459]: time="2025-04-22T15:09:08.740507798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7432ab4c6fe37c1a63115267b2d6eb03,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:08.746030 containerd[1459]: time="2025-04-22T15:09:08.745951010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:08.757648 containerd[1459]: time="2025-04-22T15:09:08.757616595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:08.896838 kubelet[2340]: E0422 15:09:08.896788 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Apr 22 15:09:08.998008 kubelet[2340]: I0422 15:09:08.997931 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 22 15:09:08.998373 kubelet[2340]: E0422 15:09:08.998250 2340 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Apr 22 15:09:09.135097 kubelet[2340]: W0422 15:09:09.135032 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.135097 kubelet[2340]: E0422 15:09:09.135091 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.370925 kubelet[2340]: W0422 15:09:09.370867 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.370925 kubelet[2340]: E0422 15:09:09.370920 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.384208 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2380011054.mount: Deactivated successfully. Apr 22 15:09:09.388613 containerd[1459]: time="2025-04-22T15:09:09.388565200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 15:09:09.390189 containerd[1459]: time="2025-04-22T15:09:09.390134835Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 15:09:09.390771 containerd[1459]: time="2025-04-22T15:09:09.390734764Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 15:09:09.391335 containerd[1459]: time="2025-04-22T15:09:09.391313040Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 15:09:09.391796 containerd[1459]: time="2025-04-22T15:09:09.391699277Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 22 15:09:09.392517 containerd[1459]: time="2025-04-22T15:09:09.392462482Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 15:09:09.393169 containerd[1459]: time="2025-04-22T15:09:09.392988583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Apr 22 15:09:09.394900 containerd[1459]: time="2025-04-22T15:09:09.394391587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 22 15:09:09.396669 containerd[1459]: time="2025-04-22T15:09:09.396638695Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.916377ms" Apr 22 15:09:09.397935 containerd[1459]: time="2025-04-22T15:09:09.397661734Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.661811ms" Apr 22 15:09:09.399378 containerd[1459]: time="2025-04-22T15:09:09.399300483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 554.970905ms" Apr 22 15:09:09.416830 containerd[1459]: time="2025-04-22T15:09:09.416777568Z" level=info msg="connecting to shim cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711" 
address="unix:///run/containerd/s/4539bbb355b1eb0848f9677defae06c06881c193f40f01027953b39658f94da0" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:09.418970 containerd[1459]: time="2025-04-22T15:09:09.418938624Z" level=info msg="connecting to shim feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8" address="unix:///run/containerd/s/766a658dbe9f78957024009af3078181460bfbe4ff8e517e5db4a0b4e4bfb41d" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:09.422947 containerd[1459]: time="2025-04-22T15:09:09.422909214Z" level=info msg="connecting to shim a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186" address="unix:///run/containerd/s/1b9e9e9ef3939ab0c4a5dcf6f82b33c842ecd4274892dc905774e630d871933a" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:09.440525 systemd[1]: Started cri-containerd-cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711.scope - libcontainer container cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711. Apr 22 15:09:09.444295 systemd[1]: Started cri-containerd-a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186.scope - libcontainer container a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186. Apr 22 15:09:09.446096 systemd[1]: Started cri-containerd-feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8.scope - libcontainer container feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8. Apr 22 15:09:09.473609 containerd[1459]: time="2025-04-22T15:09:09.473557381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711\"" Apr 22 15:09:09.482374 containerd[1459]: time="2025-04-22T15:09:09.482322651Z" level=info msg="CreateContainer within sandbox \"cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 22 15:09:09.488212 containerd[1459]: time="2025-04-22T15:09:09.488180199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7432ab4c6fe37c1a63115267b2d6eb03,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186\"" Apr 22 15:09:09.489879 containerd[1459]: time="2025-04-22T15:09:09.489850189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8\"" Apr 22 15:09:09.492606 containerd[1459]: time="2025-04-22T15:09:09.491882685Z" level=info msg="CreateContainer within sandbox \"a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 22 15:09:09.493295 containerd[1459]: time="2025-04-22T15:09:09.493260600Z" level=info msg="CreateContainer within sandbox \"feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 22 15:09:09.494125 containerd[1459]: time="2025-04-22T15:09:09.494100589Z" level=info msg="Container 6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:09.497833 containerd[1459]: time="2025-04-22T15:09:09.497787375Z" level=info msg="Container 
af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:09.502906 containerd[1459]: time="2025-04-22T15:09:09.502874767Z" level=info msg="CreateContainer within sandbox \"cf10fbdec8199fa9dfeec9fabb70982ca24a5664735bc165fc099dad66764711\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e\"" Apr 22 15:09:09.503419 containerd[1459]: time="2025-04-22T15:09:09.503369827Z" level=info msg="Container de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:09.503478 containerd[1459]: time="2025-04-22T15:09:09.503436624Z" level=info msg="StartContainer for \"6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e\"" Apr 22 15:09:09.504706 containerd[1459]: time="2025-04-22T15:09:09.504484073Z" level=info msg="connecting to shim 6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e" address="unix:///run/containerd/s/4539bbb355b1eb0848f9677defae06c06881c193f40f01027953b39658f94da0" protocol=ttrpc version=3 Apr 22 15:09:09.507615 containerd[1459]: time="2025-04-22T15:09:09.507574525Z" level=info msg="CreateContainer within sandbox \"a5a2e29753e94c710d6a66bfbc007763c3404bd4fd7afaca15913bc9831ac186\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257\"" Apr 22 15:09:09.508071 containerd[1459]: time="2025-04-22T15:09:09.508042859Z" level=info msg="StartContainer for \"af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257\"" Apr 22 15:09:09.509027 containerd[1459]: time="2025-04-22T15:09:09.508984240Z" level=info msg="connecting to shim af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257" address="unix:///run/containerd/s/1b9e9e9ef3939ab0c4a5dcf6f82b33c842ecd4274892dc905774e630d871933a" protocol=ttrpc version=3 Apr 22 15:09:09.513999 containerd[1459]: time="2025-04-22T15:09:09.513963968Z" level=info msg="CreateContainer within sandbox \"feb222b734b623fe718dd314dc73d50c2b23635274cebb1841a1dd7739d18fd8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc\"" Apr 22 15:09:09.514918 containerd[1459]: time="2025-04-22T15:09:09.514870633Z" level=info msg="StartContainer for \"de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc\"" Apr 22 15:09:09.517782 containerd[1459]: time="2025-04-22T15:09:09.517601375Z" level=info msg="connecting to shim de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc" address="unix:///run/containerd/s/766a658dbe9f78957024009af3078181460bfbe4ff8e517e5db4a0b4e4bfb41d" protocol=ttrpc version=3 Apr 22 15:09:09.524504 systemd[1]: Started cri-containerd-af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257.scope - libcontainer container af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257. Apr 22 15:09:09.528185 systemd[1]: Started cri-containerd-6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e.scope - libcontainer container 6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e. Apr 22 15:09:09.541569 systemd[1]: Started cri-containerd-de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc.scope - libcontainer container de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc. 
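
The "Failed to ensure lease exists, will retry" errors around these entries back off with a doubling interval (interval="400ms" and "800ms" earlier, "1.6s" just below). A small sketch of that schedule; the starting value and the cap used here are assumptions for illustration, not values read from the kubelet source:

    def lease_retry_intervals(start_s=0.4, cap_s=7.0, attempts=6):
        """Doubling backoff like the intervals logged by the kubelet lease controller."""
        interval, out = start_s, []
        for _ in range(attempts):
            out.append(interval)
            interval = min(interval * 2, cap_s)
        return out

    print(lease_retry_intervals())   # [0.4, 0.8, 1.6, 3.2, 6.4, 7.0]
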
Apr 22 15:09:09.579867 containerd[1459]: time="2025-04-22T15:09:09.579776715Z" level=info msg="StartContainer for \"af2c8f097fef5b1703177bc087cdcae5bbd182906ec0ec9a6cd8409a24341257\" returns successfully" Apr 22 15:09:09.588206 containerd[1459]: time="2025-04-22T15:09:09.587970859Z" level=info msg="StartContainer for \"6f2509178955a914bb68227ec0c0c5e3cab4ff8e4f4d336ffb79e459b4d69f0e\" returns successfully" Apr 22 15:09:09.588206 containerd[1459]: time="2025-04-22T15:09:09.588088152Z" level=info msg="StartContainer for \"de32d214e2d470b29d95d8c85bc24b7119715ce47e86681e53dedbc72ad8edfc\" returns successfully" Apr 22 15:09:09.590167 kubelet[2340]: W0422 15:09:09.590083 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.590167 kubelet[2340]: E0422 15:09:09.590145 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.697556 kubelet[2340]: E0422 15:09:09.697181 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Apr 22 15:09:09.753752 kubelet[2340]: W0422 15:09:09.753667 2340 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.753752 kubelet[2340]: E0422 15:09:09.753730 2340 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Apr 22 15:09:09.799423 kubelet[2340]: I0422 15:09:09.799374 2340 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 22 15:09:11.069034 kubelet[2340]: I0422 15:09:11.068969 2340 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 22 15:09:11.078070 kubelet[2340]: E0422 15:09:11.078024 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.178589 kubelet[2340]: E0422 15:09:11.178544 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.279099 kubelet[2340]: E0422 15:09:11.279055 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.379195 kubelet[2340]: E0422 15:09:11.379145 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.479662 kubelet[2340]: E0422 15:09:11.479629 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.580245 kubelet[2340]: E0422 15:09:11.580205 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.680865 kubelet[2340]: E0422 15:09:11.680552 2340 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.781169 kubelet[2340]: E0422 15:09:11.781129 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.881738 kubelet[2340]: E0422 15:09:11.881704 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:11.982388 kubelet[2340]: E0422 15:09:11.982228 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:12.082890 kubelet[2340]: E0422 15:09:12.082845 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:12.183834 kubelet[2340]: E0422 15:09:12.183451 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:12.284270 kubelet[2340]: E0422 15:09:12.284131 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:12.385030 kubelet[2340]: E0422 15:09:12.384988 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:12.485590 kubelet[2340]: E0422 15:09:12.485552 2340 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:13.149659 systemd[1]: Reload requested from client PID 2619 ('systemctl') (unit session-7.scope)... Apr 22 15:09:13.149675 systemd[1]: Reloading... Apr 22 15:09:13.218395 zram_generator::config[2666]: No configuration found. Apr 22 15:09:13.287022 kubelet[2340]: I0422 15:09:13.286983 2340 apiserver.go:52] "Watching apiserver" Apr 22 15:09:13.293502 kubelet[2340]: I0422 15:09:13.293459 2340 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 22 15:09:13.298553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 22 15:09:13.384027 systemd[1]: Reloading finished in 234 ms. Apr 22 15:09:13.404587 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:13.410720 systemd[1]: kubelet.service: Deactivated successfully. Apr 22 15:09:13.410938 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:13.410983 systemd[1]: kubelet.service: Consumed 2.943s CPU time, 117M memory peak. Apr 22 15:09:13.413294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 22 15:09:13.529388 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 22 15:09:13.532814 (kubelet)[2705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 22 15:09:13.571433 kubelet[2705]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 15:09:13.571433 kubelet[2705]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
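
The restarted kubelet (PID 2705) logs its container manager NodeConfig in the entries below, including the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A rough sketch of how such thresholds are evaluated; the observed stats here are invented, and the real kubelet sources them from cAdvisor/CRI:

    THRESHOLDS = [
        ("memory.available",   {"quantity": 100 * 1024 * 1024}),  # 100Mi
        ("nodefs.available",   {"percentage": 0.10}),
        ("nodefs.inodesFree",  {"percentage": 0.05}),
        ("imagefs.available",  {"percentage": 0.15}),
        ("imagefs.inodesFree", {"percentage": 0.05}),
    ]

    def breached(observed, capacity, rule):
        if "quantity" in rule:
            return observed < rule["quantity"]
        return observed < rule["percentage"] * capacity

    # Invented observations: signal -> (observed, capacity)
    stats = {
        "memory.available":   (512 * 1024**2, 2 * 1024**3),
        "nodefs.available":   (6 * 1024**3,  40 * 1024**3),
        "nodefs.inodesFree":  (900_000,       1_000_000),
        "imagefs.available":  (3 * 1024**3,  40 * 1024**3),
        "imagefs.inodesFree": (950_000,       1_000_000),
    }

    for signal, rule in THRESHOLDS:
        observed, capacity = stats[signal]
        print(signal, "EVICT" if breached(observed, capacity, rule) else "ok")
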
Apr 22 15:09:13.571433 kubelet[2705]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 22 15:09:13.571765 kubelet[2705]: I0422 15:09:13.571478 2705 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 22 15:09:13.575830 kubelet[2705]: I0422 15:09:13.575802 2705 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Apr 22 15:09:13.575830 kubelet[2705]: I0422 15:09:13.575827 2705 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 22 15:09:13.575994 kubelet[2705]: I0422 15:09:13.575979 2705 server.go:927] "Client rotation is on, will bootstrap in background" Apr 22 15:09:13.577270 kubelet[2705]: I0422 15:09:13.577245 2705 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Apr 22 15:09:13.578480 kubelet[2705]: I0422 15:09:13.578455 2705 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 22 15:09:13.585459 kubelet[2705]: I0422 15:09:13.585437 2705 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 22 15:09:13.585641 kubelet[2705]: I0422 15:09:13.585619 2705 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 22 15:09:13.585796 kubelet[2705]: I0422 15:09:13.585644 2705 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Apr 22 15:09:13.585870 kubelet[2705]: I0422 15:09:13.585801 2705 topology_manager.go:138] "Creating topology manager with none policy" Apr 22 15:09:13.585870 kubelet[2705]: I0422 15:09:13.585811 2705 container_manager_linux.go:301] "Creating device plugin manager" Apr 22 15:09:13.585870 kubelet[2705]: I0422 15:09:13.585848 2705 
state_mem.go:36] "Initialized new in-memory state store" Apr 22 15:09:13.585949 kubelet[2705]: I0422 15:09:13.585936 2705 kubelet.go:400] "Attempting to sync node with API server" Apr 22 15:09:13.585979 kubelet[2705]: I0422 15:09:13.585956 2705 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 22 15:09:13.586002 kubelet[2705]: I0422 15:09:13.585984 2705 kubelet.go:312] "Adding apiserver pod source" Apr 22 15:09:13.586030 kubelet[2705]: I0422 15:09:13.586020 2705 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 22 15:09:13.586601 kubelet[2705]: I0422 15:09:13.586494 2705 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Apr 22 15:09:13.586782 kubelet[2705]: I0422 15:09:13.586760 2705 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 22 15:09:13.587729 kubelet[2705]: I0422 15:09:13.587714 2705 server.go:1264] "Started kubelet" Apr 22 15:09:13.588603 kubelet[2705]: I0422 15:09:13.588541 2705 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 22 15:09:13.588841 kubelet[2705]: I0422 15:09:13.588769 2705 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 22 15:09:13.588841 kubelet[2705]: I0422 15:09:13.588818 2705 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 22 15:09:13.589209 kubelet[2705]: I0422 15:09:13.589195 2705 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 22 15:09:13.589713 kubelet[2705]: I0422 15:09:13.589690 2705 server.go:455] "Adding debug handlers to kubelet server" Apr 22 15:09:13.592739 kubelet[2705]: E0422 15:09:13.592708 2705 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 22 15:09:13.592806 kubelet[2705]: I0422 15:09:13.592749 2705 volume_manager.go:291] "Starting Kubelet Volume Manager" Apr 22 15:09:13.592838 kubelet[2705]: I0422 15:09:13.592830 2705 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Apr 22 15:09:13.594253 kubelet[2705]: I0422 15:09:13.592965 2705 reconciler.go:26] "Reconciler: start to sync state" Apr 22 15:09:13.602503 kubelet[2705]: E0422 15:09:13.602481 2705 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 22 15:09:13.607279 kubelet[2705]: I0422 15:09:13.606832 2705 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.607990 2705 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.608043 2705 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.608062 2705 kubelet.go:2337] "Starting kubelet main sync loop" Apr 22 15:09:13.608394 kubelet[2705]: E0422 15:09:13.608106 2705 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.608145 2705 factory.go:221] Registration of the containerd container factory successfully Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.608162 2705 factory.go:221] Registration of the systemd container factory successfully Apr 22 15:09:13.608394 kubelet[2705]: I0422 15:09:13.608262 2705 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 22 15:09:13.641116 kubelet[2705]: I0422 15:09:13.641088 2705 cpu_manager.go:214] "Starting CPU manager" policy="none" Apr 22 15:09:13.641116 kubelet[2705]: I0422 15:09:13.641105 2705 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 22 15:09:13.641116 kubelet[2705]: I0422 15:09:13.641124 2705 state_mem.go:36] "Initialized new in-memory state store" Apr 22 15:09:13.641424 kubelet[2705]: I0422 15:09:13.641405 2705 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 22 15:09:13.641463 kubelet[2705]: I0422 15:09:13.641425 2705 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 22 15:09:13.641463 kubelet[2705]: I0422 15:09:13.641443 2705 policy_none.go:49] "None policy: Start" Apr 22 15:09:13.642210 kubelet[2705]: I0422 15:09:13.641968 2705 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 22 15:09:13.642210 kubelet[2705]: I0422 15:09:13.642001 2705 state_mem.go:35] "Initializing new in-memory state store" Apr 22 15:09:13.642210 kubelet[2705]: I0422 15:09:13.642126 2705 state_mem.go:75] "Updated machine memory state" Apr 22 15:09:13.646496 kubelet[2705]: I0422 15:09:13.646478 2705 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 22 15:09:13.646833 kubelet[2705]: I0422 15:09:13.646659 2705 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 22 15:09:13.646833 kubelet[2705]: I0422 15:09:13.646765 2705 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 22 15:09:13.694444 kubelet[2705]: I0422 15:09:13.694306 2705 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Apr 22 15:09:13.699990 kubelet[2705]: I0422 15:09:13.699960 2705 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Apr 22 15:09:13.700080 kubelet[2705]: I0422 15:09:13.700050 2705 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Apr 22 15:09:13.708375 kubelet[2705]: I0422 15:09:13.708273 2705 topology_manager.go:215] "Topology Admit Handler" podUID="7432ab4c6fe37c1a63115267b2d6eb03" podNamespace="kube-system" podName="kube-apiserver-localhost" Apr 22 15:09:13.708546 kubelet[2705]: I0422 15:09:13.708501 2705 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Apr 22 15:09:13.708708 kubelet[2705]: I0422 15:09:13.708675 2705 topology_manager.go:215] 
"Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Apr 22 15:09:13.894774 kubelet[2705]: I0422 15:09:13.894719 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:13.894774 kubelet[2705]: I0422 15:09:13.894769 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:13.894935 kubelet[2705]: I0422 15:09:13.894797 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Apr 22 15:09:13.894935 kubelet[2705]: I0422 15:09:13.894818 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:13.894935 kubelet[2705]: I0422 15:09:13.894834 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:13.894935 kubelet[2705]: I0422 15:09:13.894849 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:13.894935 kubelet[2705]: I0422 15:09:13.894863 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:13.895064 kubelet[2705]: I0422 15:09:13.894880 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Apr 22 15:09:13.895064 kubelet[2705]: I0422 15:09:13.894898 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7432ab4c6fe37c1a63115267b2d6eb03-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7432ab4c6fe37c1a63115267b2d6eb03\") " pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:14.586658 kubelet[2705]: I0422 15:09:14.586626 2705 apiserver.go:52] "Watching apiserver" Apr 22 15:09:14.593515 kubelet[2705]: I0422 15:09:14.593467 2705 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Apr 22 15:09:14.650292 kubelet[2705]: E0422 15:09:14.650254 2705 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 22 15:09:14.656991 kubelet[2705]: I0422 15:09:14.656935 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.656921299 podStartE2EDuration="1.656921299s" podCreationTimestamp="2025-04-22 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:09:14.650727391 +0000 UTC m=+1.114983851" watchObservedRunningTime="2025-04-22 15:09:14.656921299 +0000 UTC m=+1.121177759" Apr 22 15:09:14.657134 kubelet[2705]: I0422 15:09:14.657048 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.657035955 podStartE2EDuration="1.657035955s" podCreationTimestamp="2025-04-22 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:09:14.656792495 +0000 UTC m=+1.121048955" watchObservedRunningTime="2025-04-22 15:09:14.657035955 +0000 UTC m=+1.121292415" Apr 22 15:09:14.664628 kubelet[2705]: I0422 15:09:14.664580 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6645674110000002 podStartE2EDuration="1.664567411s" podCreationTimestamp="2025-04-22 15:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:09:14.664024183 +0000 UTC m=+1.128280643" watchObservedRunningTime="2025-04-22 15:09:14.664567411 +0000 UTC m=+1.128823831" Apr 22 15:09:18.654862 sudo[1661]: pam_unix(sudo:session): session closed for user root Apr 22 15:09:18.656705 sshd[1660]: Connection closed by 10.0.0.1 port 46796 Apr 22 15:09:18.657151 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:18.660253 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:46796.service: Deactivated successfully. Apr 22 15:09:18.664233 systemd[1]: session-7.scope: Deactivated successfully. Apr 22 15:09:18.664602 systemd[1]: session-7.scope: Consumed 8.576s CPU time, 236.5M memory peak. Apr 22 15:09:18.666284 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit. Apr 22 15:09:18.667125 systemd-logind[1444]: Removed session 7. Apr 22 15:09:23.120457 update_engine[1446]: I20250422 15:09:23.120389 1446 update_attempter.cc:509] Updating boot flags... 
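
The pod_startup_latency_tracker entries above derive podStartE2EDuration as observedRunningTime minus podCreationTimestamp. A sketch of that arithmetic on timestamps copied from the log; the parsing format (and trimming Go's nanosecond precision to microseconds) is an assumption about the layout:

    from datetime import datetime

    def parse(ts):
        # e.g. "2025-04-22 15:09:14.656921299 +0000 UTC m=+1.114983851"
        ts = ts.split(" m=")[0].replace(" UTC", "")
        if "." in ts:
            head, rest = ts.split(".", 1)
            frac, tz = rest.split(" ", 1)
            ts = f"{head}.{frac[:6]} {tz}"          # %f accepts at most 6 digits
            return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S.%f %z")
        return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")

    created = parse("2025-04-22 15:09:13 +0000 UTC")
    running = parse("2025-04-22 15:09:14.656921299 +0000 UTC m=+1.114983851")
    print(running - created)   # 0:00:01.656921, matching podStartE2EDuration="1.656921299s"
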
Apr 22 15:09:23.147457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2799) Apr 22 15:09:23.187501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2800) Apr 22 15:09:29.247128 kubelet[2705]: I0422 15:09:29.247011 2705 topology_manager.go:215] "Topology Admit Handler" podUID="b423394b-df09-481c-8d9d-c895b9a5610c" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-z2bjz" Apr 22 15:09:29.259106 systemd[1]: Created slice kubepods-besteffort-podb423394b_df09_481c_8d9d_c895b9a5610c.slice - libcontainer container kubepods-besteffort-podb423394b_df09_481c_8d9d_c895b9a5610c.slice. Apr 22 15:09:29.350544 kubelet[2705]: I0422 15:09:29.350504 2705 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 22 15:09:29.350942 containerd[1459]: time="2025-04-22T15:09:29.350904721Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 22 15:09:29.351232 kubelet[2705]: I0422 15:09:29.351063 2705 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 22 15:09:29.392180 kubelet[2705]: I0422 15:09:29.392132 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzclt\" (UniqueName: \"kubernetes.io/projected/b423394b-df09-481c-8d9d-c895b9a5610c-kube-api-access-pzclt\") pod \"tigera-operator-6479d6dc54-z2bjz\" (UID: \"b423394b-df09-481c-8d9d-c895b9a5610c\") " pod="tigera-operator/tigera-operator-6479d6dc54-z2bjz" Apr 22 15:09:29.392180 kubelet[2705]: I0422 15:09:29.392182 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b423394b-df09-481c-8d9d-c895b9a5610c-var-lib-calico\") pod \"tigera-operator-6479d6dc54-z2bjz\" (UID: \"b423394b-df09-481c-8d9d-c895b9a5610c\") " pod="tigera-operator/tigera-operator-6479d6dc54-z2bjz" Apr 22 15:09:29.501056 kubelet[2705]: I0422 15:09:29.500941 2705 topology_manager.go:215] "Topology Admit Handler" podUID="03ca33b3-20f1-46d4-8f23-bb846a38a895" podNamespace="kube-system" podName="kube-proxy-bqr24" Apr 22 15:09:29.509718 systemd[1]: Created slice kubepods-besteffort-pod03ca33b3_20f1_46d4_8f23_bb846a38a895.slice - libcontainer container kubepods-besteffort-pod03ca33b3_20f1_46d4_8f23_bb846a38a895.slice. 
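
The "Updating Pod CIDR" entries above record the node receiving 192.168.0.0/24 and pushing it to the runtime over CRI. A short sketch of the address space a host-local IPAM plugin would carve out of that range; reserving the first host address for the bridge is an assumption about a typical setup, not something read from the log:

    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")   # value from the kubelet entry above
    hosts = list(pod_cidr.hosts())                      # .1 through .254
    pod_ips = hosts[1:]                                 # assume .1 is kept for the bridge/gateway
    print(f"{pod_cidr}: {len(pod_ips)} pod IPs, first={pod_ips[0]}, last={pod_ips[-1]}")
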
Apr 22 15:09:29.566865 containerd[1459]: time="2025-04-22T15:09:29.566820295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-z2bjz,Uid:b423394b-df09-481c-8d9d-c895b9a5610c,Namespace:tigera-operator,Attempt:0,}" Apr 22 15:09:29.593679 kubelet[2705]: I0422 15:09:29.593545 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22pnc\" (UniqueName: \"kubernetes.io/projected/03ca33b3-20f1-46d4-8f23-bb846a38a895-kube-api-access-22pnc\") pod \"kube-proxy-bqr24\" (UID: \"03ca33b3-20f1-46d4-8f23-bb846a38a895\") " pod="kube-system/kube-proxy-bqr24" Apr 22 15:09:29.593679 kubelet[2705]: I0422 15:09:29.593614 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03ca33b3-20f1-46d4-8f23-bb846a38a895-lib-modules\") pod \"kube-proxy-bqr24\" (UID: \"03ca33b3-20f1-46d4-8f23-bb846a38a895\") " pod="kube-system/kube-proxy-bqr24" Apr 22 15:09:29.593679 kubelet[2705]: I0422 15:09:29.593636 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03ca33b3-20f1-46d4-8f23-bb846a38a895-kube-proxy\") pod \"kube-proxy-bqr24\" (UID: \"03ca33b3-20f1-46d4-8f23-bb846a38a895\") " pod="kube-system/kube-proxy-bqr24" Apr 22 15:09:29.593679 kubelet[2705]: I0422 15:09:29.593664 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03ca33b3-20f1-46d4-8f23-bb846a38a895-xtables-lock\") pod \"kube-proxy-bqr24\" (UID: \"03ca33b3-20f1-46d4-8f23-bb846a38a895\") " pod="kube-system/kube-proxy-bqr24" Apr 22 15:09:29.611079 containerd[1459]: time="2025-04-22T15:09:29.610771338Z" level=info msg="connecting to shim 692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d" address="unix:///run/containerd/s/ae63bc0d2427fcfc4aa97697db76c24ff88b18fbbd35f55ad394b18aef44f740" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:29.666587 systemd[1]: Started cri-containerd-692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d.scope - libcontainer container 692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d. 
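
Once the API server is reachable, the tigera-operator sandbox being created above can be confirmed from the cluster side. A hedged sketch using the official Python client; the client package and a working kubeconfig are assumptions, and nothing here ran on the logged host:

    from kubernetes import client, config   # pip install kubernetes

    config.load_kube_config()               # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for pod in v1.list_namespaced_pod("tigera-operator").items:
        print(pod.metadata.name, pod.status.phase)
    # expected to list tigera-operator-6479d6dc54-z2bjz once it is Running
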
Apr 22 15:09:29.707875 containerd[1459]: time="2025-04-22T15:09:29.707831153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-z2bjz,Uid:b423394b-df09-481c-8d9d-c895b9a5610c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d\"" Apr 22 15:09:29.715390 containerd[1459]: time="2025-04-22T15:09:29.715358604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Apr 22 15:09:29.817144 containerd[1459]: time="2025-04-22T15:09:29.817035232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqr24,Uid:03ca33b3-20f1-46d4-8f23-bb846a38a895,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:29.835997 containerd[1459]: time="2025-04-22T15:09:29.835904941Z" level=info msg="connecting to shim 56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2" address="unix:///run/containerd/s/258bb2615ef0c07566f04d47fd855c04aaae8dd30fee26d0b2825a203fdf086a" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:29.868540 systemd[1]: Started cri-containerd-56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2.scope - libcontainer container 56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2. Apr 22 15:09:29.896138 containerd[1459]: time="2025-04-22T15:09:29.896090800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bqr24,Uid:03ca33b3-20f1-46d4-8f23-bb846a38a895,Namespace:kube-system,Attempt:0,} returns sandbox id \"56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2\"" Apr 22 15:09:29.898820 containerd[1459]: time="2025-04-22T15:09:29.898575145Z" level=info msg="CreateContainer within sandbox \"56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 22 15:09:29.918280 containerd[1459]: time="2025-04-22T15:09:29.918204953Z" level=info msg="Container 87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:29.925461 containerd[1459]: time="2025-04-22T15:09:29.925415433Z" level=info msg="CreateContainer within sandbox \"56f981faf669d22a94e290ed5fdffc8851020d6e18211843a16f40f475faf1d2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9\"" Apr 22 15:09:29.926285 containerd[1459]: time="2025-04-22T15:09:29.926231153Z" level=info msg="StartContainer for \"87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9\"" Apr 22 15:09:29.927597 containerd[1459]: time="2025-04-22T15:09:29.927571012Z" level=info msg="connecting to shim 87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9" address="unix:///run/containerd/s/258bb2615ef0c07566f04d47fd855c04aaae8dd30fee26d0b2825a203fdf086a" protocol=ttrpc version=3 Apr 22 15:09:29.947563 systemd[1]: Started cri-containerd-87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9.scope - libcontainer container 87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9. 
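
The pull of quay.io/tigera/operator:v1.36.5 requested above completes in the later entries (size "19267110" in 2.707025114s). A back-of-the-envelope throughput check on those logged numbers:

    size_bytes = 19_267_110        # image size reported by containerd below
    pull_seconds = 2.707025114     # pull duration reported by containerd below
    print(f"{size_bytes / pull_seconds / 1024 / 1024:.1f} MiB/s average pull throughput")
    # ~6.8 MiB/s
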
Apr 22 15:09:29.981622 containerd[1459]: time="2025-04-22T15:09:29.981463595Z" level=info msg="StartContainer for \"87796d4ed7220dd8f8b2e353bc8a0bc0ee96bca734025068d080676b9e077ef9\" returns successfully" Apr 22 15:09:30.680198 kubelet[2705]: I0422 15:09:30.680120 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bqr24" podStartSLOduration=1.6801017790000001 podStartE2EDuration="1.680101779s" podCreationTimestamp="2025-04-22 15:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:09:30.679627452 +0000 UTC m=+17.143883912" watchObservedRunningTime="2025-04-22 15:09:30.680101779 +0000 UTC m=+17.144358239" Apr 22 15:09:31.252701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount448673460.mount: Deactivated successfully. Apr 22 15:09:32.417461 containerd[1459]: time="2025-04-22T15:09:32.417408366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:32.418463 containerd[1459]: time="2025-04-22T15:09:32.418226254Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115" Apr 22 15:09:32.419202 containerd[1459]: time="2025-04-22T15:09:32.419161629Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:32.421775 containerd[1459]: time="2025-04-22T15:09:32.421722903Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:32.422453 containerd[1459]: time="2025-04-22T15:09:32.422417786Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 2.707025114s" Apr 22 15:09:32.422515 containerd[1459]: time="2025-04-22T15:09:32.422460174Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\"" Apr 22 15:09:32.426430 containerd[1459]: time="2025-04-22T15:09:32.426383262Z" level=info msg="CreateContainer within sandbox \"692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 22 15:09:32.445653 containerd[1459]: time="2025-04-22T15:09:32.445079644Z" level=info msg="Container 4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:32.450334 containerd[1459]: time="2025-04-22T15:09:32.450288488Z" level=info msg="CreateContainer within sandbox \"692fc0b8cca0000d372cb672a55f41aaa734682c10e7105a56ce31d32798aa4d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453\"" Apr 22 15:09:32.450756 containerd[1459]: time="2025-04-22T15:09:32.450728843Z" level=info msg="StartContainer for \"4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453\"" Apr 22 15:09:32.452334 containerd[1459]: 
time="2025-04-22T15:09:32.452288881Z" level=info msg="connecting to shim 4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453" address="unix:///run/containerd/s/ae63bc0d2427fcfc4aa97697db76c24ff88b18fbbd35f55ad394b18aef44f740" protocol=ttrpc version=3 Apr 22 15:09:32.480560 systemd[1]: Started cri-containerd-4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453.scope - libcontainer container 4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453. Apr 22 15:09:32.506708 containerd[1459]: time="2025-04-22T15:09:32.506671349Z" level=info msg="StartContainer for \"4bba5ac3c5eb96ce6e70b4dd906cf449279108a0faa60afc04a24771fd480453\" returns successfully" Apr 22 15:09:36.885075 kubelet[2705]: I0422 15:09:36.884963 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-z2bjz" podStartSLOduration=5.175217126 podStartE2EDuration="7.884904735s" podCreationTimestamp="2025-04-22 15:09:29 +0000 UTC" firstStartedPulling="2025-04-22 15:09:29.714903041 +0000 UTC m=+16.179159501" lastFinishedPulling="2025-04-22 15:09:32.42459069 +0000 UTC m=+18.888847110" observedRunningTime="2025-04-22 15:09:32.682539749 +0000 UTC m=+19.146796209" watchObservedRunningTime="2025-04-22 15:09:36.884904735 +0000 UTC m=+23.349161155" Apr 22 15:09:36.885631 kubelet[2705]: I0422 15:09:36.885473 2705 topology_manager.go:215] "Topology Admit Handler" podUID="8c3c4a75-9a7a-4837-b233-d0b44ffe7065" podNamespace="calico-system" podName="calico-typha-6b866f746f-gk6kg" Apr 22 15:09:36.888690 kubelet[2705]: W0422 15:09:36.888582 2705 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Apr 22 15:09:36.888690 kubelet[2705]: W0422 15:09:36.888663 2705 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Apr 22 15:09:36.890844 kubelet[2705]: E0422 15:09:36.890441 2705 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Apr 22 15:09:36.892762 kubelet[2705]: E0422 15:09:36.891946 2705 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Apr 22 15:09:36.897197 systemd[1]: Created slice kubepods-besteffort-pod8c3c4a75_9a7a_4837_b233_d0b44ffe7065.slice - libcontainer container kubepods-besteffort-pod8c3c4a75_9a7a_4837_b233_d0b44ffe7065.slice. 
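
The "forbidden ... no relationship found between node 'localhost' and this object" warnings above come from the node authorizer: the kubelet may only read the calico-typha ConfigMap and Secret once a pod that mounts them is bound to this node, which happens moments later. A hedged sketch of reproducing that kind of check with an access review issued under your own credentials (not the kubelet's); the Python client is assumed:

    from kubernetes import client, config

    config.load_kube_config()
    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                namespace="calico-system", verb="list",
                resource="configmaps", name="tigera-ca-bundle")))
    resp = client.AuthorizationV1Api().create_self_subject_access_review(review)
    print("allowed:", resp.status.allowed, "-", resp.status.reason)
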
Apr 22 15:09:36.938180 kubelet[2705]: I0422 15:09:36.938130 2705 topology_manager.go:215] "Topology Admit Handler" podUID="e68a5acb-4271-4645-bc91-6ccc74d511d9" podNamespace="calico-system" podName="calico-node-gv54k" Apr 22 15:09:36.945967 systemd[1]: Created slice kubepods-besteffort-pode68a5acb_4271_4645_bc91_6ccc74d511d9.slice - libcontainer container kubepods-besteffort-pode68a5acb_4271_4645_bc91_6ccc74d511d9.slice. Apr 22 15:09:37.045888 kubelet[2705]: I0422 15:09:37.045420 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-lib-modules\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.045888 kubelet[2705]: I0422 15:09:37.045487 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-cni-bin-dir\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.045888 kubelet[2705]: I0422 15:09:37.045512 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-cni-log-dir\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.045888 kubelet[2705]: I0422 15:09:37.045534 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c3c4a75-9a7a-4837-b233-d0b44ffe7065-tigera-ca-bundle\") pod \"calico-typha-6b866f746f-gk6kg\" (UID: \"8c3c4a75-9a7a-4837-b233-d0b44ffe7065\") " pod="calico-system/calico-typha-6b866f746f-gk6kg" Apr 22 15:09:37.045888 kubelet[2705]: I0422 15:09:37.045554 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8c3c4a75-9a7a-4837-b233-d0b44ffe7065-typha-certs\") pod \"calico-typha-6b866f746f-gk6kg\" (UID: \"8c3c4a75-9a7a-4837-b233-d0b44ffe7065\") " pod="calico-system/calico-typha-6b866f746f-gk6kg" Apr 22 15:09:37.046103 kubelet[2705]: I0422 15:09:37.045575 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl69\" (UniqueName: \"kubernetes.io/projected/8c3c4a75-9a7a-4837-b233-d0b44ffe7065-kube-api-access-cxl69\") pod \"calico-typha-6b866f746f-gk6kg\" (UID: \"8c3c4a75-9a7a-4837-b233-d0b44ffe7065\") " pod="calico-system/calico-typha-6b866f746f-gk6kg" Apr 22 15:09:37.046103 kubelet[2705]: I0422 15:09:37.045591 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-xtables-lock\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046103 kubelet[2705]: I0422 15:09:37.045607 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-policysync\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " 
pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046103 kubelet[2705]: I0422 15:09:37.045623 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-var-run-calico\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046103 kubelet[2705]: I0422 15:09:37.045640 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-cni-net-dir\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046204 kubelet[2705]: I0422 15:09:37.045657 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e68a5acb-4271-4645-bc91-6ccc74d511d9-node-certs\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046204 kubelet[2705]: I0422 15:09:37.045690 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-flexvol-driver-host\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046204 kubelet[2705]: I0422 15:09:37.045706 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e68a5acb-4271-4645-bc91-6ccc74d511d9-tigera-ca-bundle\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046204 kubelet[2705]: I0422 15:09:37.045723 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e68a5acb-4271-4645-bc91-6ccc74d511d9-var-lib-calico\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.046204 kubelet[2705]: I0422 15:09:37.045762 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg7hp\" (UniqueName: \"kubernetes.io/projected/e68a5acb-4271-4645-bc91-6ccc74d511d9-kube-api-access-rg7hp\") pod \"calico-node-gv54k\" (UID: \"e68a5acb-4271-4645-bc91-6ccc74d511d9\") " pod="calico-system/calico-node-gv54k" Apr 22 15:09:37.059012 kubelet[2705]: I0422 15:09:37.058684 2705 topology_manager.go:215] "Topology Admit Handler" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" podNamespace="calico-system" podName="csi-node-driver-sw8fk" Apr 22 15:09:37.059012 kubelet[2705]: E0422 15:09:37.058955 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:37.148235 kubelet[2705]: I0422 15:09:37.146015 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7eb1ff10-8266-4cd9-92c6-ab19f470fcc9-kubelet-dir\") pod \"csi-node-driver-sw8fk\" (UID: \"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9\") " pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:37.148235 kubelet[2705]: I0422 15:09:37.146065 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7eb1ff10-8266-4cd9-92c6-ab19f470fcc9-varrun\") pod \"csi-node-driver-sw8fk\" (UID: \"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9\") " pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:37.148235 kubelet[2705]: I0422 15:09:37.146092 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7eb1ff10-8266-4cd9-92c6-ab19f470fcc9-registration-dir\") pod \"csi-node-driver-sw8fk\" (UID: \"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9\") " pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:37.148235 kubelet[2705]: I0422 15:09:37.146121 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7eb1ff10-8266-4cd9-92c6-ab19f470fcc9-socket-dir\") pod \"csi-node-driver-sw8fk\" (UID: \"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9\") " pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:37.148235 kubelet[2705]: I0422 15:09:37.146192 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94m5d\" (UniqueName: \"kubernetes.io/projected/7eb1ff10-8266-4cd9-92c6-ab19f470fcc9-kube-api-access-94m5d\") pod \"csi-node-driver-sw8fk\" (UID: \"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9\") " pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:37.157136 kubelet[2705]: E0422 15:09:37.157112 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.157267 kubelet[2705]: W0422 15:09:37.157249 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.157704 kubelet[2705]: E0422 15:09:37.157599 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.157704 kubelet[2705]: W0422 15:09:37.157615 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.158864 kubelet[2705]: E0422 15:09:37.158847 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.159012 kubelet[2705]: W0422 15:09:37.158996 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.159315 kubelet[2705]: E0422 15:09:37.159301 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.159481 kubelet[2705]: E0422 15:09:37.159447 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.159827 kubelet[2705]: E0422 15:09:37.159782 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.247447 kubelet[2705]: E0422 15:09:37.247411 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.247447 kubelet[2705]: W0422 15:09:37.247434 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.247447 kubelet[2705]: E0422 15:09:37.247460 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.247673 kubelet[2705]: E0422 15:09:37.247659 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.247673 kubelet[2705]: W0422 15:09:37.247672 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.247801 kubelet[2705]: E0422 15:09:37.247684 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.247911 kubelet[2705]: E0422 15:09:37.247896 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.247911 kubelet[2705]: W0422 15:09:37.247908 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.247964 kubelet[2705]: E0422 15:09:37.247922 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.248095 kubelet[2705]: E0422 15:09:37.248081 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.248095 kubelet[2705]: W0422 15:09:37.248093 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.248182 kubelet[2705]: E0422 15:09:37.248103 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.248268 kubelet[2705]: E0422 15:09:37.248255 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.248268 kubelet[2705]: W0422 15:09:37.248266 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.248345 kubelet[2705]: E0422 15:09:37.248275 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.248434 kubelet[2705]: E0422 15:09:37.248419 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.248434 kubelet[2705]: W0422 15:09:37.248430 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.248546 kubelet[2705]: E0422 15:09:37.248441 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.248633 kubelet[2705]: E0422 15:09:37.248621 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.248665 kubelet[2705]: W0422 15:09:37.248633 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.248665 kubelet[2705]: E0422 15:09:37.248647 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.248823 kubelet[2705]: E0422 15:09:37.248810 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.248823 kubelet[2705]: W0422 15:09:37.248821 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.248881 kubelet[2705]: E0422 15:09:37.248831 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.249085 kubelet[2705]: E0422 15:09:37.249072 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249085 kubelet[2705]: W0422 15:09:37.249084 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249182 kubelet[2705]: E0422 15:09:37.249161 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.249246 kubelet[2705]: E0422 15:09:37.249236 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249246 kubelet[2705]: W0422 15:09:37.249245 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249324 kubelet[2705]: E0422 15:09:37.249310 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.249390 kubelet[2705]: E0422 15:09:37.249380 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249390 kubelet[2705]: W0422 15:09:37.249390 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249534 kubelet[2705]: E0422 15:09:37.249470 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.249534 kubelet[2705]: E0422 15:09:37.249520 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249534 kubelet[2705]: W0422 15:09:37.249527 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249702 kubelet[2705]: E0422 15:09:37.249591 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.249702 kubelet[2705]: E0422 15:09:37.249651 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249702 kubelet[2705]: W0422 15:09:37.249657 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249702 kubelet[2705]: E0422 15:09:37.249670 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.249807 kubelet[2705]: E0422 15:09:37.249794 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249807 kubelet[2705]: W0422 15:09:37.249804 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.249862 kubelet[2705]: E0422 15:09:37.249817 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.249958 kubelet[2705]: E0422 15:09:37.249947 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.249958 kubelet[2705]: W0422 15:09:37.249958 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250008 kubelet[2705]: E0422 15:09:37.249966 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.250178 kubelet[2705]: E0422 15:09:37.250165 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250178 kubelet[2705]: W0422 15:09:37.250176 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250242 kubelet[2705]: E0422 15:09:37.250188 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.250357 kubelet[2705]: E0422 15:09:37.250338 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250357 kubelet[2705]: W0422 15:09:37.250357 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250417 kubelet[2705]: E0422 15:09:37.250369 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.250511 kubelet[2705]: E0422 15:09:37.250491 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250511 kubelet[2705]: W0422 15:09:37.250507 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250663 kubelet[2705]: E0422 15:09:37.250557 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.250663 kubelet[2705]: E0422 15:09:37.250634 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250663 kubelet[2705]: W0422 15:09:37.250641 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250727 kubelet[2705]: E0422 15:09:37.250706 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.250772 kubelet[2705]: E0422 15:09:37.250760 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250772 kubelet[2705]: W0422 15:09:37.250770 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250840 kubelet[2705]: E0422 15:09:37.250828 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.250896 kubelet[2705]: E0422 15:09:37.250887 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.250896 kubelet[2705]: W0422 15:09:37.250895 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.250979 kubelet[2705]: E0422 15:09:37.250958 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.251039 kubelet[2705]: E0422 15:09:37.251027 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.251039 kubelet[2705]: W0422 15:09:37.251037 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.251090 kubelet[2705]: E0422 15:09:37.251049 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.251183 kubelet[2705]: E0422 15:09:37.251172 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.251183 kubelet[2705]: W0422 15:09:37.251183 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.251236 kubelet[2705]: E0422 15:09:37.251194 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.251380 kubelet[2705]: E0422 15:09:37.251370 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.251380 kubelet[2705]: W0422 15:09:37.251380 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.251443 kubelet[2705]: E0422 15:09:37.251388 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.251530 kubelet[2705]: E0422 15:09:37.251520 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.251530 kubelet[2705]: W0422 15:09:37.251530 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.251573 kubelet[2705]: E0422 15:09:37.251541 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.251838 kubelet[2705]: E0422 15:09:37.251826 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.251838 kubelet[2705]: W0422 15:09:37.251838 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.251895 kubelet[2705]: E0422 15:09:37.251851 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.252017 kubelet[2705]: E0422 15:09:37.252006 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.252017 kubelet[2705]: W0422 15:09:37.252017 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.252070 kubelet[2705]: E0422 15:09:37.252026 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.252445 kubelet[2705]: E0422 15:09:37.252429 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.252445 kubelet[2705]: W0422 15:09:37.252443 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.252532 kubelet[2705]: E0422 15:09:37.252463 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.265079 kubelet[2705]: E0422 15:09:37.265057 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.265205 kubelet[2705]: W0422 15:09:37.265155 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.265205 kubelet[2705]: E0422 15:09:37.265175 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.349620 kubelet[2705]: E0422 15:09:37.349504 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.349620 kubelet[2705]: W0422 15:09:37.349526 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.349620 kubelet[2705]: E0422 15:09:37.349544 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.349800 kubelet[2705]: E0422 15:09:37.349756 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.349800 kubelet[2705]: W0422 15:09:37.349765 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.349800 kubelet[2705]: E0422 15:09:37.349773 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.349964 kubelet[2705]: E0422 15:09:37.349952 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.349964 kubelet[2705]: W0422 15:09:37.349964 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.350018 kubelet[2705]: E0422 15:09:37.349972 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.451196 kubelet[2705]: E0422 15:09:37.451089 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.451196 kubelet[2705]: W0422 15:09:37.451113 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.451196 kubelet[2705]: E0422 15:09:37.451131 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.451385 kubelet[2705]: E0422 15:09:37.451372 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.451385 kubelet[2705]: W0422 15:09:37.451385 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.451452 kubelet[2705]: E0422 15:09:37.451395 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.451632 kubelet[2705]: E0422 15:09:37.451566 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.451632 kubelet[2705]: W0422 15:09:37.451580 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.451632 kubelet[2705]: E0422 15:09:37.451588 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.552783 kubelet[2705]: E0422 15:09:37.552754 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.552783 kubelet[2705]: W0422 15:09:37.552776 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.552951 kubelet[2705]: E0422 15:09:37.552795 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.553022 kubelet[2705]: E0422 15:09:37.553009 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.553022 kubelet[2705]: W0422 15:09:37.553022 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.553073 kubelet[2705]: E0422 15:09:37.553031 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.553259 kubelet[2705]: E0422 15:09:37.553246 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.553259 kubelet[2705]: W0422 15:09:37.553259 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.553331 kubelet[2705]: E0422 15:09:37.553269 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.654275 kubelet[2705]: E0422 15:09:37.654236 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.654275 kubelet[2705]: W0422 15:09:37.654259 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.654275 kubelet[2705]: E0422 15:09:37.654277 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.654489 kubelet[2705]: E0422 15:09:37.654462 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.654489 kubelet[2705]: W0422 15:09:37.654471 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.654489 kubelet[2705]: E0422 15:09:37.654481 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.654679 kubelet[2705]: E0422 15:09:37.654653 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.654679 kubelet[2705]: W0422 15:09:37.654666 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.654679 kubelet[2705]: E0422 15:09:37.654675 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.729276 kubelet[2705]: E0422 15:09:37.728133 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.729276 kubelet[2705]: W0422 15:09:37.728157 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.729276 kubelet[2705]: E0422 15:09:37.728177 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.729276 kubelet[2705]: E0422 15:09:37.728567 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.729276 kubelet[2705]: W0422 15:09:37.728580 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.729276 kubelet[2705]: E0422 15:09:37.728599 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.755901 kubelet[2705]: E0422 15:09:37.755863 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.755901 kubelet[2705]: W0422 15:09:37.755886 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.755901 kubelet[2705]: E0422 15:09:37.755904 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:37.852430 containerd[1459]: time="2025-04-22T15:09:37.852380518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv54k,Uid:e68a5acb-4271-4645-bc91-6ccc74d511d9,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:37.857144 kubelet[2705]: E0422 15:09:37.857103 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.857144 kubelet[2705]: W0422 15:09:37.857127 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.857267 kubelet[2705]: E0422 15:09:37.857162 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:37.869327 containerd[1459]: time="2025-04-22T15:09:37.869241538Z" level=info msg="connecting to shim 53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5" address="unix:///run/containerd/s/8d73e4b0520627eb635fc8f055148d4b7ecb0efc7a07b5344c380e6e768c1aef" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:37.893527 systemd[1]: Started cri-containerd-53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5.scope - libcontainer container 53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5. Apr 22 15:09:37.918916 containerd[1459]: time="2025-04-22T15:09:37.918877032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gv54k,Uid:e68a5acb-4271-4645-bc91-6ccc74d511d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\"" Apr 22 15:09:37.920685 containerd[1459]: time="2025-04-22T15:09:37.920654707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Apr 22 15:09:37.957973 kubelet[2705]: E0422 15:09:37.957937 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:37.957973 kubelet[2705]: W0422 15:09:37.957961 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:37.957973 kubelet[2705]: E0422 15:09:37.957980 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 22 15:09:38.039628 kubelet[2705]: E0422 15:09:38.039545 2705 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 22 15:09:38.039628 kubelet[2705]: W0422 15:09:38.039567 2705 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 22 15:09:38.039628 kubelet[2705]: E0422 15:09:38.039584 2705 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 22 15:09:38.103375 containerd[1459]: time="2025-04-22T15:09:38.103306059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b866f746f-gk6kg,Uid:8c3c4a75-9a7a-4837-b233-d0b44ffe7065,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:38.132669 containerd[1459]: time="2025-04-22T15:09:38.132555111Z" level=info msg="connecting to shim b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a" address="unix:///run/containerd/s/1155dca3bae0c831fc452878f27a4d5439dc98e030c6e6d763a8cb048425367b" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:38.153517 systemd[1]: Started cri-containerd-b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a.scope - libcontainer container b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a. Apr 22 15:09:38.188276 containerd[1459]: time="2025-04-22T15:09:38.188202325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b866f746f-gk6kg,Uid:8c3c4a75-9a7a-4837-b233-d0b44ffe7065,Namespace:calico-system,Attempt:0,} returns sandbox id \"b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a\"" Apr 22 15:09:38.608514 kubelet[2705]: E0422 15:09:38.608462 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:39.125182 containerd[1459]: time="2025-04-22T15:09:39.124777658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:39.125554 containerd[1459]: time="2025-04-22T15:09:39.125331919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Apr 22 15:09:39.126637 containerd[1459]: time="2025-04-22T15:09:39.126601729Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:39.128224 containerd[1459]: time="2025-04-22T15:09:39.128180165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:39.130598 containerd[1459]: time="2025-04-22T15:09:39.129045609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.208354949s" Apr 22 15:09:39.130598 containerd[1459]: time="2025-04-22T15:09:39.130407283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Apr 22 15:09:39.133975 containerd[1459]: time="2025-04-22T15:09:39.133035369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Apr 22 15:09:39.134625 containerd[1459]: time="2025-04-22T15:09:39.134586369Z" level=info msg="CreateContainer within sandbox 
\"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 22 15:09:39.148068 containerd[1459]: time="2025-04-22T15:09:39.148018227Z" level=info msg="Container bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:39.154972 containerd[1459]: time="2025-04-22T15:09:39.154927581Z" level=info msg="CreateContainer within sandbox \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\"" Apr 22 15:09:39.155693 containerd[1459]: time="2025-04-22T15:09:39.155657129Z" level=info msg="StartContainer for \"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\"" Apr 22 15:09:39.157248 containerd[1459]: time="2025-04-22T15:09:39.157216048Z" level=info msg="connecting to shim bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9" address="unix:///run/containerd/s/8d73e4b0520627eb635fc8f055148d4b7ecb0efc7a07b5344c380e6e768c1aef" protocol=ttrpc version=3 Apr 22 15:09:39.174512 systemd[1]: Started cri-containerd-bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9.scope - libcontainer container bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9. Apr 22 15:09:39.223413 containerd[1459]: time="2025-04-22T15:09:39.223334482Z" level=info msg="StartContainer for \"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\" returns successfully" Apr 22 15:09:39.254767 systemd[1]: cri-containerd-bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9.scope: Deactivated successfully. Apr 22 15:09:39.266199 containerd[1459]: time="2025-04-22T15:09:39.266099089Z" level=info msg="received exit event container_id:\"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\" id:\"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\" pid:3263 exited_at:{seconds:1745334579 nanos:260696184}" Apr 22 15:09:39.266556 containerd[1459]: time="2025-04-22T15:09:39.266388877Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\" id:\"bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9\" pid:3263 exited_at:{seconds:1745334579 nanos:260696184}" Apr 22 15:09:39.302879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf9fdfe420c0a66f2e76d44ded57828ce12145a11e68093d6739ef9cb335dba9-rootfs.mount: Deactivated successfully. 
Apr 22 15:09:40.609009 kubelet[2705]: E0422 15:09:40.608830 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:40.795358 containerd[1459]: time="2025-04-22T15:09:40.795297478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:40.795816 containerd[1459]: time="2025-04-22T15:09:40.795748002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957" Apr 22 15:09:40.796723 containerd[1459]: time="2025-04-22T15:09:40.796683604Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:40.798402 containerd[1459]: time="2025-04-22T15:09:40.798374758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:40.799108 containerd[1459]: time="2025-04-22T15:09:40.798900189Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.665830186s" Apr 22 15:09:40.799108 containerd[1459]: time="2025-04-22T15:09:40.798933544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\"" Apr 22 15:09:40.799978 containerd[1459]: time="2025-04-22T15:09:40.799949132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Apr 22 15:09:40.812078 containerd[1459]: time="2025-04-22T15:09:40.812048366Z" level=info msg="CreateContainer within sandbox \"b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 22 15:09:40.820543 containerd[1459]: time="2025-04-22T15:09:40.820514974Z" level=info msg="Container 9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:40.827332 containerd[1459]: time="2025-04-22T15:09:40.827294588Z" level=info msg="CreateContainer within sandbox \"b242d950a856610e948c01753a1416aa196be93eb266ef87e730b023dadb104a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1\"" Apr 22 15:09:40.828088 containerd[1459]: time="2025-04-22T15:09:40.828060738Z" level=info msg="StartContainer for \"9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1\"" Apr 22 15:09:40.829188 containerd[1459]: time="2025-04-22T15:09:40.829162032Z" level=info msg="connecting to shim 9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1" address="unix:///run/containerd/s/1155dca3bae0c831fc452878f27a4d5439dc98e030c6e6d763a8cb048425367b" protocol=ttrpc version=3 Apr 22 15:09:40.861778 systemd[1]: Started 
cri-containerd-9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1.scope - libcontainer container 9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1. Apr 22 15:09:40.987455 containerd[1459]: time="2025-04-22T15:09:40.987070172Z" level=info msg="StartContainer for \"9223c8db662c0d2f7f86d9f22dc8cbc3820da8da8342ccd9b4b612123b1934f1\" returns successfully" Apr 22 15:09:41.714668 kubelet[2705]: I0422 15:09:41.714459 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b866f746f-gk6kg" podStartSLOduration=3.104168624 podStartE2EDuration="5.714401234s" podCreationTimestamp="2025-04-22 15:09:36 +0000 UTC" firstStartedPulling="2025-04-22 15:09:38.189400055 +0000 UTC m=+24.653656515" lastFinishedPulling="2025-04-22 15:09:40.799632665 +0000 UTC m=+27.263889125" observedRunningTime="2025-04-22 15:09:41.713772459 +0000 UTC m=+28.178028919" watchObservedRunningTime="2025-04-22 15:09:41.714401234 +0000 UTC m=+28.178657694" Apr 22 15:09:42.609037 kubelet[2705]: E0422 15:09:42.608955 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:42.700495 kubelet[2705]: I0422 15:09:42.700463 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:09:43.197960 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:37670.service - OpenSSH per-connection server daemon (10.0.0.1:37670). Apr 22 15:09:43.259135 sshd[3345]: Accepted publickey for core from 10.0.0.1 port 37670 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:43.260664 sshd-session[3345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:43.264452 systemd-logind[1444]: New session 8 of user core. Apr 22 15:09:43.271503 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 22 15:09:43.391053 sshd[3347]: Connection closed by 10.0.0.1 port 37670 Apr 22 15:09:43.391368 sshd-session[3345]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:43.395248 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:37670.service: Deactivated successfully. Apr 22 15:09:43.397636 systemd[1]: session-8.scope: Deactivated successfully. Apr 22 15:09:43.399001 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit. Apr 22 15:09:43.400562 systemd-logind[1444]: Removed session 8. 
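The pod_startup_latency_tracker entry above for calico-typha-6b866f746f-gk6kg reports two durations that can be reproduced from the timestamps it carries: the end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go check of that arithmetic, using the timestamps from the entry (the subtraction rule is inferred from these numbers, not taken from kubelet documentation):

```go
// Reproduces the two durations in the pod_startup_latency_tracker entry above
// from its own timestamps. The "E2E minus pull window" rule for the SLO value
// is an inference from the logged numbers, not a claim about kubelet internals.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-04-22 15:09:36 +0000 UTC")
	firstPull := mustParse("2025-04-22 15:09:38.189400055 +0000 UTC")
	lastPull := mustParse("2025-04-22 15:09:40.799632665 +0000 UTC")
	running := mustParse("2025-04-22 15:09:41.714401234 +0000 UTC")

	e2e := running.Sub(created)          // 5.714401234s, as logged
	slo := e2e - lastPull.Sub(firstPull) // 3.104168624s, as logged

	fmt.Printf("podStartE2EDuration=%s podStartSLOduration=%.9f\n", e2e, slo.Seconds())
}
```

Both printed values match the logged podStartE2EDuration and podStartSLOduration exactly, which supports reading the SLO figure as startup time excluding image pulls.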
Apr 22 15:09:44.096799 kubelet[2705]: I0422 15:09:44.096730 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:09:44.608758 kubelet[2705]: E0422 15:09:44.608710 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:45.069976 containerd[1459]: time="2025-04-22T15:09:45.069907690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:45.070952 containerd[1459]: time="2025-04-22T15:09:45.070758588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Apr 22 15:09:45.071691 containerd[1459]: time="2025-04-22T15:09:45.071622925Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:45.073678 containerd[1459]: time="2025-04-22T15:09:45.073625699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:45.074306 containerd[1459]: time="2025-04-22T15:09:45.074272371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 4.274289524s" Apr 22 15:09:45.074415 containerd[1459]: time="2025-04-22T15:09:45.074307329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Apr 22 15:09:45.079680 containerd[1459]: time="2025-04-22T15:09:45.079632419Z" level=info msg="CreateContainer within sandbox \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 22 15:09:45.091603 containerd[1459]: time="2025-04-22T15:09:45.090464507Z" level=info msg="Container f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:45.098819 containerd[1459]: time="2025-04-22T15:09:45.098765140Z" level=info msg="CreateContainer within sandbox \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\"" Apr 22 15:09:45.099466 containerd[1459]: time="2025-04-22T15:09:45.099343618Z" level=info msg="StartContainer for \"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\"" Apr 22 15:09:45.100999 containerd[1459]: time="2025-04-22T15:09:45.100962660Z" level=info msg="connecting to shim f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc" address="unix:///run/containerd/s/8d73e4b0520627eb635fc8f055148d4b7ecb0efc7a07b5344c380e6e768c1aef" protocol=ttrpc version=3 Apr 22 15:09:45.122560 systemd[1]: Started 
cri-containerd-f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc.scope - libcontainer container f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc. Apr 22 15:09:45.162017 containerd[1459]: time="2025-04-22T15:09:45.161884765Z" level=info msg="StartContainer for \"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\" returns successfully" Apr 22 15:09:45.690712 containerd[1459]: time="2025-04-22T15:09:45.690665342Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 22 15:09:45.692511 systemd[1]: cri-containerd-f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc.scope: Deactivated successfully. Apr 22 15:09:45.692811 systemd[1]: cri-containerd-f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc.scope: Consumed 463ms CPU time, 158.1M memory peak, 4K read from disk, 150.3M written to disk. Apr 22 15:09:45.699145 containerd[1459]: time="2025-04-22T15:09:45.699092006Z" level=info msg="received exit event container_id:\"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\" id:\"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\" pid:3387 exited_at:{seconds:1745334585 nanos:698732312}" Apr 22 15:09:45.699145 containerd[1459]: time="2025-04-22T15:09:45.699130443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\" id:\"f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc\" pid:3387 exited_at:{seconds:1745334585 nanos:698732312}" Apr 22 15:09:45.720246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1653807b544b730e2a4258c03db06892502fd17f764618dd9405c9852969ebc-rootfs.mount: Deactivated successfully. Apr 22 15:09:45.752676 kubelet[2705]: I0422 15:09:45.752643 2705 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Apr 22 15:09:45.819527 kubelet[2705]: I0422 15:09:45.819481 2705 topology_manager.go:215] "Topology Admit Handler" podUID="84927b68-56ab-4969-b1fe-dbce339a18a5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gpmj7" Apr 22 15:09:45.826281 kubelet[2705]: I0422 15:09:45.826235 2705 topology_manager.go:215] "Topology Admit Handler" podUID="3db1631a-c539-423a-8063-5260e70b95b8" podNamespace="calico-system" podName="calico-kube-controllers-6fdb866d4d-8nv7n" Apr 22 15:09:45.830466 systemd[1]: Created slice kubepods-burstable-pod84927b68_56ab_4969_b1fe_dbce339a18a5.slice - libcontainer container kubepods-burstable-pod84927b68_56ab_4969_b1fe_dbce339a18a5.slice. 
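The install-cni container above writes Calico's CNI artifacts (the fs-change event it triggers is a write of /etc/cni/net.d/calico-kubeconfig), yet the reload still fails because the directory does not yet contain a network config, so the runtime keeps reporting the CNI plugin as uninitialized. A stdlib-only Go sketch of the "is a config present?" test that the error message describes (an illustration, not containerd's actual code):

```go
// Illustrative sketch of the condition behind the
// "no network config found in /etc/cni/net.d" error above: the runtime cannot
// mark the network ready until at least one .conf/.conflist/.json file exists
// in the CNI config directory. This is not containerd's implementation.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func cniConfigPresent(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch strings.ToLower(filepath.Ext(e.Name())) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigPresent("/etc/cni/net.d")
	if err != nil || !ok {
		fmt.Println("no network config found in /etc/cni/net.d: cni plugin not initialized")
		return
	}
	fmt.Println("cni config present; network can be marked ready")
}
```

This is consistent with the NetworkReady=false pod_workers errors elsewhere in the log: they persist until a network config file lands in that directory.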
Apr 22 15:09:45.831447 kubelet[2705]: I0422 15:09:45.831412 2705 topology_manager.go:215] "Topology Admit Handler" podUID="3bd5e49a-6b31-43e9-b126-0ea574379167" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n7827" Apr 22 15:09:45.832598 kubelet[2705]: I0422 15:09:45.832515 2705 topology_manager.go:215] "Topology Admit Handler" podUID="ad328a64-c186-4d75-8652-9167b5eb2598" podNamespace="calico-apiserver" podName="calico-apiserver-79cf6657d9-9xlkg" Apr 22 15:09:45.836404 kubelet[2705]: I0422 15:09:45.836373 2705 topology_manager.go:215] "Topology Admit Handler" podUID="f76d273b-83c4-488a-9a10-4a1ed8030bc8" podNamespace="calico-apiserver" podName="calico-apiserver-79cf6657d9-glzn9" Apr 22 15:09:45.842425 systemd[1]: Created slice kubepods-besteffort-pod3db1631a_c539_423a_8063_5260e70b95b8.slice - libcontainer container kubepods-besteffort-pod3db1631a_c539_423a_8063_5260e70b95b8.slice. Apr 22 15:09:45.847722 systemd[1]: Created slice kubepods-burstable-pod3bd5e49a_6b31_43e9_b126_0ea574379167.slice - libcontainer container kubepods-burstable-pod3bd5e49a_6b31_43e9_b126_0ea574379167.slice. Apr 22 15:09:45.856074 systemd[1]: Created slice kubepods-besteffort-podad328a64_c186_4d75_8652_9167b5eb2598.slice - libcontainer container kubepods-besteffort-podad328a64_c186_4d75_8652_9167b5eb2598.slice. Apr 22 15:09:45.863007 systemd[1]: Created slice kubepods-besteffort-podf76d273b_83c4_488a_9a10_4a1ed8030bc8.slice - libcontainer container kubepods-besteffort-podf76d273b_83c4_488a_9a10_4a1ed8030bc8.slice. Apr 22 15:09:45.915057 kubelet[2705]: I0422 15:09:45.915015 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfbl2\" (UniqueName: \"kubernetes.io/projected/84927b68-56ab-4969-b1fe-dbce339a18a5-kube-api-access-rfbl2\") pod \"coredns-7db6d8ff4d-gpmj7\" (UID: \"84927b68-56ab-4969-b1fe-dbce339a18a5\") " pod="kube-system/coredns-7db6d8ff4d-gpmj7" Apr 22 15:09:45.915057 kubelet[2705]: I0422 15:09:45.915063 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/84927b68-56ab-4969-b1fe-dbce339a18a5-config-volume\") pod \"coredns-7db6d8ff4d-gpmj7\" (UID: \"84927b68-56ab-4969-b1fe-dbce339a18a5\") " pod="kube-system/coredns-7db6d8ff4d-gpmj7" Apr 22 15:09:46.015998 kubelet[2705]: I0422 15:09:46.015665 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62b9m\" (UniqueName: \"kubernetes.io/projected/f76d273b-83c4-488a-9a10-4a1ed8030bc8-kube-api-access-62b9m\") pod \"calico-apiserver-79cf6657d9-glzn9\" (UID: \"f76d273b-83c4-488a-9a10-4a1ed8030bc8\") " pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" Apr 22 15:09:46.015998 kubelet[2705]: I0422 15:09:46.015724 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f76d273b-83c4-488a-9a10-4a1ed8030bc8-calico-apiserver-certs\") pod \"calico-apiserver-79cf6657d9-glzn9\" (UID: \"f76d273b-83c4-488a-9a10-4a1ed8030bc8\") " pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" Apr 22 15:09:46.015998 kubelet[2705]: I0422 15:09:46.015747 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-248d5\" (UniqueName: \"kubernetes.io/projected/ad328a64-c186-4d75-8652-9167b5eb2598-kube-api-access-248d5\") pod \"calico-apiserver-79cf6657d9-9xlkg\" (UID: 
\"ad328a64-c186-4d75-8652-9167b5eb2598\") " pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" Apr 22 15:09:46.015998 kubelet[2705]: I0422 15:09:46.015765 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97h2\" (UniqueName: \"kubernetes.io/projected/3db1631a-c539-423a-8063-5260e70b95b8-kube-api-access-l97h2\") pod \"calico-kube-controllers-6fdb866d4d-8nv7n\" (UID: \"3db1631a-c539-423a-8063-5260e70b95b8\") " pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" Apr 22 15:09:46.015998 kubelet[2705]: I0422 15:09:46.015784 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd5e49a-6b31-43e9-b126-0ea574379167-config-volume\") pod \"coredns-7db6d8ff4d-n7827\" (UID: \"3bd5e49a-6b31-43e9-b126-0ea574379167\") " pod="kube-system/coredns-7db6d8ff4d-n7827" Apr 22 15:09:46.016198 kubelet[2705]: I0422 15:09:46.015878 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad328a64-c186-4d75-8652-9167b5eb2598-calico-apiserver-certs\") pod \"calico-apiserver-79cf6657d9-9xlkg\" (UID: \"ad328a64-c186-4d75-8652-9167b5eb2598\") " pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" Apr 22 15:09:46.016198 kubelet[2705]: I0422 15:09:46.015912 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3db1631a-c539-423a-8063-5260e70b95b8-tigera-ca-bundle\") pod \"calico-kube-controllers-6fdb866d4d-8nv7n\" (UID: \"3db1631a-c539-423a-8063-5260e70b95b8\") " pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" Apr 22 15:09:46.016198 kubelet[2705]: I0422 15:09:46.015932 2705 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2552z\" (UniqueName: \"kubernetes.io/projected/3bd5e49a-6b31-43e9-b126-0ea574379167-kube-api-access-2552z\") pod \"coredns-7db6d8ff4d-n7827\" (UID: \"3bd5e49a-6b31-43e9-b126-0ea574379167\") " pod="kube-system/coredns-7db6d8ff4d-n7827" Apr 22 15:09:46.138842 containerd[1459]: time="2025-04-22T15:09:46.138528110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gpmj7,Uid:84927b68-56ab-4969-b1fe-dbce339a18a5,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:46.147536 containerd[1459]: time="2025-04-22T15:09:46.147482273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdb866d4d-8nv7n,Uid:3db1631a-c539-423a-8063-5260e70b95b8,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:46.154150 containerd[1459]: time="2025-04-22T15:09:46.153994210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7827,Uid:3bd5e49a-6b31-43e9-b126-0ea574379167,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:46.166232 containerd[1459]: time="2025-04-22T15:09:46.165949400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-9xlkg,Uid:ad328a64-c186-4d75-8652-9167b5eb2598,Namespace:calico-apiserver,Attempt:0,}" Apr 22 15:09:46.176105 containerd[1459]: time="2025-04-22T15:09:46.176064081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-glzn9,Uid:f76d273b-83c4-488a-9a10-4a1ed8030bc8,Namespace:calico-apiserver,Attempt:0,}" Apr 22 15:09:46.547514 containerd[1459]: time="2025-04-22T15:09:46.547441752Z" 
level=error msg="Failed to destroy network for sandbox \"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.548057 containerd[1459]: time="2025-04-22T15:09:46.547997433Z" level=error msg="Failed to destroy network for sandbox \"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.550206 containerd[1459]: time="2025-04-22T15:09:46.550161519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdb866d4d-8nv7n,Uid:3db1631a-c539-423a-8063-5260e70b95b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.550810 kubelet[2705]: E0422 15:09:46.550760 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.550893 kubelet[2705]: E0422 15:09:46.550832 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" Apr 22 15:09:46.550893 kubelet[2705]: E0422 15:09:46.550852 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" Apr 22 15:09:46.551013 kubelet[2705]: E0422 15:09:46.550889 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6fdb866d4d-8nv7n_calico-system(3db1631a-c539-423a-8063-5260e70b95b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6fdb866d4d-8nv7n_calico-system(3db1631a-c539-423a-8063-5260e70b95b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20437d2b5137e596f1544b8082d0891b996bbdf8b4435e3e6e0cfb312975aa64\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" podUID="3db1631a-c539-423a-8063-5260e70b95b8" Apr 22 15:09:46.551966 containerd[1459]: time="2025-04-22T15:09:46.551866678Z" level=error msg="Failed to destroy network for sandbox \"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.552090 containerd[1459]: time="2025-04-22T15:09:46.552062784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7827,Uid:3bd5e49a-6b31-43e9-b126-0ea574379167,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.552410 kubelet[2705]: E0422 15:09:46.552269 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.552410 kubelet[2705]: E0422 15:09:46.552357 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n7827" Apr 22 15:09:46.552410 kubelet[2705]: E0422 15:09:46.552398 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n7827" Apr 22 15:09:46.552523 kubelet[2705]: E0422 15:09:46.552439 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n7827_kube-system(3bd5e49a-6b31-43e9-b126-0ea574379167)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n7827_kube-system(3bd5e49a-6b31-43e9-b126-0ea574379167)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66df6ced24f737d4939ead831f2bde1dbb7e67522b6fa157a671359e9c0177d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n7827" podUID="3bd5e49a-6b31-43e9-b126-0ea574379167" Apr 22 15:09:46.553324 containerd[1459]: time="2025-04-22T15:09:46.553225501Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-9xlkg,Uid:ad328a64-c186-4d75-8652-9167b5eb2598,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.553521 kubelet[2705]: E0422 15:09:46.553454 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.553521 kubelet[2705]: E0422 15:09:46.553495 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" Apr 22 15:09:46.553521 kubelet[2705]: E0422 15:09:46.553511 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" Apr 22 15:09:46.553605 kubelet[2705]: E0422 15:09:46.553538 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79cf6657d9-9xlkg_calico-apiserver(ad328a64-c186-4d75-8652-9167b5eb2598)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79cf6657d9-9xlkg_calico-apiserver(ad328a64-c186-4d75-8652-9167b5eb2598)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f38ef67bf2d5fd4f4448cefaceda89d035bccb853f7d36c083eee7495e3de690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" podUID="ad328a64-c186-4d75-8652-9167b5eb2598" Apr 22 15:09:46.555865 containerd[1459]: time="2025-04-22T15:09:46.555756041Z" level=error msg="Failed to destroy network for sandbox \"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.559475 containerd[1459]: time="2025-04-22T15:09:46.559396742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-glzn9,Uid:f76d273b-83c4-488a-9a10-4a1ed8030bc8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.559685 kubelet[2705]: E0422 15:09:46.559629 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.559769 kubelet[2705]: E0422 15:09:46.559691 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" Apr 22 15:09:46.559769 kubelet[2705]: E0422 15:09:46.559716 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" Apr 22 15:09:46.559822 kubelet[2705]: E0422 15:09:46.559759 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-79cf6657d9-glzn9_calico-apiserver(f76d273b-83c4-488a-9a10-4a1ed8030bc8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-79cf6657d9-glzn9_calico-apiserver(f76d273b-83c4-488a-9a10-4a1ed8030bc8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70788baa7ff85ccf19b8c31cd077176a8e68eaa628c504986fad8b655f4d9be0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" podUID="f76d273b-83c4-488a-9a10-4a1ed8030bc8" Apr 22 15:09:46.560173 containerd[1459]: time="2025-04-22T15:09:46.560138410Z" level=error msg="Failed to destroy network for sandbox \"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.561127 containerd[1459]: time="2025-04-22T15:09:46.561078943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gpmj7,Uid:84927b68-56ab-4969-b1fe-dbce339a18a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.561377 
kubelet[2705]: E0422 15:09:46.561329 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.561427 kubelet[2705]: E0422 15:09:46.561394 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gpmj7" Apr 22 15:09:46.561427 kubelet[2705]: E0422 15:09:46.561413 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gpmj7" Apr 22 15:09:46.561491 kubelet[2705]: E0422 15:09:46.561451 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gpmj7_kube-system(84927b68-56ab-4969-b1fe-dbce339a18a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gpmj7_kube-system(84927b68-56ab-4969-b1fe-dbce339a18a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5727ec2be0d40fcc2b388bfd3dee5d97f217f813bb80252297c075166540b83b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gpmj7" podUID="84927b68-56ab-4969-b1fe-dbce339a18a5" Apr 22 15:09:46.615685 systemd[1]: Created slice kubepods-besteffort-pod7eb1ff10_8266_4cd9_92c6_ab19f470fcc9.slice - libcontainer container kubepods-besteffort-pod7eb1ff10_8266_4cd9_92c6_ab19f470fcc9.slice. 
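
Every sandbox failure above names the same two preconditions: a CNI network config under /etc/cni/net.d and the /var/lib/calico/nodename file that the calico/node container writes once it is running and has mounted /var/lib/calico/. A small diagnostic sketch that repeats those two checks; the paths are quoted from the errors above, while the script itself is illustrative and not part of Calico or containerd:

```python
import glob
import os

# Check 1: is there any CNI network config for containerd to load?
cni_confs = sorted(glob.glob("/etc/cni/net.d/*.conf") +
                   glob.glob("/etc/cni/net.d/*.conflist"))
if not cni_confs:
    print("no network config found in /etc/cni/net.d")
else:
    print("CNI configs:", ", ".join(cni_confs))

# Check 2: has calico/node written its nodename file yet?
nodename = "/var/lib/calico/nodename"
if os.path.exists(nodename):
    with open(nodename) as f:
        print("calico nodename:", f.read().strip())
else:
    print("missing", nodename,
          "- check that the calico/node container is running "
          "and has mounted /var/lib/calico/")
```
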
Apr 22 15:09:46.618295 containerd[1459]: time="2025-04-22T15:09:46.618245758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sw8fk,Uid:7eb1ff10-8266-4cd9-92c6-ab19f470fcc9,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:46.662704 containerd[1459]: time="2025-04-22T15:09:46.662650720Z" level=error msg="Failed to destroy network for sandbox \"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.663732 containerd[1459]: time="2025-04-22T15:09:46.663688286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sw8fk,Uid:7eb1ff10-8266-4cd9-92c6-ab19f470fcc9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.663990 kubelet[2705]: E0422 15:09:46.663942 2705 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 22 15:09:46.664047 kubelet[2705]: E0422 15:09:46.664008 2705 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:46.664047 kubelet[2705]: E0422 15:09:46.664028 2705 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sw8fk" Apr 22 15:09:46.664106 kubelet[2705]: E0422 15:09:46.664074 2705 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sw8fk_calico-system(7eb1ff10-8266-4cd9-92c6-ab19f470fcc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sw8fk_calico-system(7eb1ff10-8266-4cd9-92c6-ab19f470fcc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7f1ccc94d4118ae278c401b1e443d72eaf82ba937d4072a68d3ce4474abe48a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-sw8fk" podUID="7eb1ff10-8266-4cd9-92c6-ab19f470fcc9" Apr 22 15:09:46.713307 containerd[1459]: time="2025-04-22T15:09:46.713209165Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.2\"" Apr 22 15:09:47.120971 systemd[1]: run-netns-cni\x2daa6512a7\x2dcc79\x2dc28b\x2dbdf2\x2d8d76beda87f1.mount: Deactivated successfully. Apr 22 15:09:48.404773 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:37672.service - OpenSSH per-connection server daemon (10.0.0.1:37672). Apr 22 15:09:48.467990 sshd[3648]: Accepted publickey for core from 10.0.0.1 port 37672 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:48.469622 sshd-session[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:48.475615 systemd-logind[1444]: New session 9 of user core. Apr 22 15:09:48.485581 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 22 15:09:48.612028 sshd[3650]: Connection closed by 10.0.0.1 port 37672 Apr 22 15:09:48.612831 sshd-session[3648]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:48.616100 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:37672.service: Deactivated successfully. Apr 22 15:09:48.618247 systemd[1]: session-9.scope: Deactivated successfully. Apr 22 15:09:48.621572 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit. Apr 22 15:09:48.622809 systemd-logind[1444]: Removed session 9. Apr 22 15:09:50.656874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239205750.mount: Deactivated successfully. Apr 22 15:09:50.817758 containerd[1459]: time="2025-04-22T15:09:50.817712798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:50.818756 containerd[1459]: time="2025-04-22T15:09:50.818434312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Apr 22 15:09:50.819757 containerd[1459]: time="2025-04-22T15:09:50.819302696Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:50.821601 containerd[1459]: time="2025-04-22T15:09:50.821513876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:09:50.832789 containerd[1459]: time="2025-04-22T15:09:50.832442700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 4.119170299s" Apr 22 15:09:50.832789 containerd[1459]: time="2025-04-22T15:09:50.832492736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Apr 22 15:09:50.844687 containerd[1459]: time="2025-04-22T15:09:50.844634883Z" level=info msg="CreateContainer within sandbox \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 22 15:09:50.853834 containerd[1459]: time="2025-04-22T15:09:50.853386286Z" level=info msg="Container 5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:50.868528 containerd[1459]: 
time="2025-04-22T15:09:50.868477165Z" level=info msg="CreateContainer within sandbox \"53cb37324e80c6af5cecfe145f8c2f830d264b47afc8d2abc707a07477d568a5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\"" Apr 22 15:09:50.869231 containerd[1459]: time="2025-04-22T15:09:50.869189400Z" level=info msg="StartContainer for \"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\"" Apr 22 15:09:50.870744 containerd[1459]: time="2025-04-22T15:09:50.870707543Z" level=info msg="connecting to shim 5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d" address="unix:///run/containerd/s/8d73e4b0520627eb635fc8f055148d4b7ecb0efc7a07b5344c380e6e768c1aef" protocol=ttrpc version=3 Apr 22 15:09:50.894523 systemd[1]: Started cri-containerd-5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d.scope - libcontainer container 5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d. Apr 22 15:09:50.933630 containerd[1459]: time="2025-04-22T15:09:50.933433948Z" level=info msg="StartContainer for \"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\" returns successfully" Apr 22 15:09:51.085249 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 22 15:09:51.085524 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 22 15:09:51.749071 kubelet[2705]: I0422 15:09:51.748686 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gv54k" podStartSLOduration=2.834386342 podStartE2EDuration="15.748663955s" podCreationTimestamp="2025-04-22 15:09:36 +0000 UTC" firstStartedPulling="2025-04-22 15:09:37.920222436 +0000 UTC m=+24.384478856" lastFinishedPulling="2025-04-22 15:09:50.834500049 +0000 UTC m=+37.298756469" observedRunningTime="2025-04-22 15:09:51.746453692 +0000 UTC m=+38.210710152" watchObservedRunningTime="2025-04-22 15:09:51.748663955 +0000 UTC m=+38.212920415" Apr 22 15:09:52.545632 kernel: bpftool[3862]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 22 15:09:52.696561 systemd-networkd[1399]: vxlan.calico: Link UP Apr 22 15:09:52.696567 systemd-networkd[1399]: vxlan.calico: Gained carrier Apr 22 15:09:52.731999 kubelet[2705]: I0422 15:09:52.731964 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:09:53.628034 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:40614.service - OpenSSH per-connection server daemon (10.0.0.1:40614). Apr 22 15:09:53.691013 sshd[3934]: Accepted publickey for core from 10.0.0.1 port 40614 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:53.692697 sshd-session[3934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:53.701544 systemd-logind[1444]: New session 10 of user core. Apr 22 15:09:53.708720 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 22 15:09:53.853067 sshd[3936]: Connection closed by 10.0.0.1 port 40614 Apr 22 15:09:53.853855 sshd-session[3934]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:53.866766 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:40614.service: Deactivated successfully. Apr 22 15:09:53.868531 systemd[1]: session-10.scope: Deactivated successfully. Apr 22 15:09:53.869829 systemd-logind[1444]: Session 10 logged out. Waiting for processes to exit. 
Apr 22 15:09:53.871750 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:40620.service - OpenSSH per-connection server daemon (10.0.0.1:40620). Apr 22 15:09:53.872745 systemd-logind[1444]: Removed session 10. Apr 22 15:09:53.927107 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:53.928417 sshd-session[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:53.932762 systemd-logind[1444]: New session 11 of user core. Apr 22 15:09:53.941543 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 22 15:09:54.119398 sshd[3952]: Connection closed by 10.0.0.1 port 40620 Apr 22 15:09:54.118774 sshd-session[3949]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:54.130938 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:40620.service: Deactivated successfully. Apr 22 15:09:54.133407 systemd[1]: session-11.scope: Deactivated successfully. Apr 22 15:09:54.134496 systemd-logind[1444]: Session 11 logged out. Waiting for processes to exit. Apr 22 15:09:54.136937 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:40628.service - OpenSSH per-connection server daemon (10.0.0.1:40628). Apr 22 15:09:54.137915 systemd-logind[1444]: Removed session 11. Apr 22 15:09:54.188368 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:54.189869 sshd-session[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:54.194698 systemd-logind[1444]: New session 12 of user core. Apr 22 15:09:54.205547 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 22 15:09:54.245774 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Apr 22 15:09:54.340434 sshd[3966]: Connection closed by 10.0.0.1 port 40628 Apr 22 15:09:54.340790 sshd-session[3963]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:54.344471 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:40628.service: Deactivated successfully. Apr 22 15:09:54.347094 systemd[1]: session-12.scope: Deactivated successfully. Apr 22 15:09:54.348267 systemd-logind[1444]: Session 12 logged out. Waiting for processes to exit. Apr 22 15:09:54.349158 systemd-logind[1444]: Removed session 12. 
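
The sshd entries above repeat a fixed "Accepted publickey" format (user, source address, port, key type, fingerprint). A throwaway Python sketch for pulling those fields out of one of the lines quoted above, purely as a log-reading aid:

```python
import re

# One "Accepted publickey" line quoted from the sshd entries above.
line = ("sshd[3949]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: "
        "RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI")

m = re.search(r"Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)", line)
user, src, port, keytype, fingerprint = m.groups()
print(user, src, port, keytype, fingerprint)
# core 10.0.0.1 40620 RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI
```
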
Apr 22 15:09:55.653924 kubelet[2705]: I0422 15:09:55.653885 2705 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 22 15:09:55.743909 containerd[1459]: time="2025-04-22T15:09:55.743859245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\" id:\"9969a662fc12306087d32bc4377b126b56cb84ddd714a0df342d69c1b007f4bd\" pid:3997 exit_status:1 exited_at:{seconds:1745334595 nanos:743555942}" Apr 22 15:09:55.812241 containerd[1459]: time="2025-04-22T15:09:55.812133727Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\" id:\"1447d16c65e1d49319688ab6daed62160dc684e6ef83ccedd08be71f7fdc9810\" pid:4020 exit_status:1 exited_at:{seconds:1745334595 nanos:811891020}" Apr 22 15:09:57.609395 containerd[1459]: time="2025-04-22T15:09:57.609281363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7827,Uid:3bd5e49a-6b31-43e9-b126-0ea574379167,Namespace:kube-system,Attempt:0,}" Apr 22 15:09:57.807187 systemd-networkd[1399]: cali37369a8b196: Link UP Apr 22 15:09:57.808910 systemd-networkd[1399]: cali37369a8b196: Gained carrier Apr 22 15:09:57.823674 containerd[1459]: 2025-04-22 15:09:57.671 [INFO][4033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--n7827-eth0 coredns-7db6d8ff4d- kube-system 3bd5e49a-6b31-43e9-b126-0ea574379167 757 0 2025-04-22 15:09:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-n7827 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali37369a8b196 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-" Apr 22 15:09:57.823674 containerd[1459]: 2025-04-22 15:09:57.671 [INFO][4033] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.823674 containerd[1459]: 2025-04-22 15:09:57.757 [INFO][4046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" HandleID="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Workload="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.770 [INFO][4046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" HandleID="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Workload="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-n7827", "timestamp":"2025-04-22 15:09:57.75700225 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.770 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.771 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.771 [INFO][4046] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.772 [INFO][4046] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" host="localhost" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.780 [INFO][4046] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.785 [INFO][4046] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.787 [INFO][4046] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.789 [INFO][4046] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:57.823896 containerd[1459]: 2025-04-22 15:09:57.789 [INFO][4046] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" host="localhost" Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.791 [INFO][4046] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5 Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.794 [INFO][4046] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" host="localhost" Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.800 [INFO][4046] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" host="localhost" Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.800 [INFO][4046] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" host="localhost" Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.800 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 22 15:09:57.824106 containerd[1459]: 2025-04-22 15:09:57.800 [INFO][4046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" HandleID="k8s-pod-network.1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Workload="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.824222 containerd[1459]: 2025-04-22 15:09:57.803 [INFO][4033] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--n7827-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3bd5e49a-6b31-43e9-b126-0ea574379167", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-n7827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37369a8b196", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:57.824273 containerd[1459]: 2025-04-22 15:09:57.803 [INFO][4033] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.824273 containerd[1459]: 2025-04-22 15:09:57.803 [INFO][4033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37369a8b196 ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.824273 containerd[1459]: 2025-04-22 15:09:57.811 [INFO][4033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.824337 containerd[1459]: 2025-04-22 15:09:57.811 
[INFO][4033] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--n7827-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"3bd5e49a-6b31-43e9-b126-0ea574379167", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5", Pod:"coredns-7db6d8ff4d-n7827", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali37369a8b196", MAC:"9a:39:9d:c9:89:ee", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:57.824337 containerd[1459]: 2025-04-22 15:09:57.821 [INFO][4033] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n7827" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--n7827-eth0" Apr 22 15:09:57.902177 containerd[1459]: time="2025-04-22T15:09:57.902116555Z" level=info msg="connecting to shim 1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5" address="unix:///run/containerd/s/abbb586999676be019cb72778d24f1c68f42fdc04eadc7c0a92ddd2955dad34a" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:57.931547 systemd[1]: Started cri-containerd-1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5.scope - libcontainer container 1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5. 
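
The IPAM trace above carves a per-pod address out of the host's affine block 192.168.88.128/26, and the WorkloadEndpoint dump prints its ports in hex (0x35, 0x23c1). A small Python check of those values exactly as they appear in the log, illustrative only:

```python
import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")   # host-affine block from the IPAM trace
pod_ip = ipaddress.ip_address("192.168.88.129")     # address claimed for coredns-7db6d8ff4d-n7827

print(pod_ip in block)        # True: the claimed IP falls inside the affine block
print(block.num_addresses)    # 64 addresses in the /26 block for this node

# The WorkloadEndpoint dump lists its ports in hex: 0x35 and 0x23c1.
print(0x35, 0x23c1)           # 53 9153 -> DNS and the coredns metrics port
```
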
Apr 22 15:09:57.945661 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:09:58.002250 containerd[1459]: time="2025-04-22T15:09:58.002197117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n7827,Uid:3bd5e49a-6b31-43e9-b126-0ea574379167,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5\"" Apr 22 15:09:58.005573 containerd[1459]: time="2025-04-22T15:09:58.005522906Z" level=info msg="CreateContainer within sandbox \"1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 22 15:09:58.021930 containerd[1459]: time="2025-04-22T15:09:58.021145103Z" level=info msg="Container 77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:09:58.024276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845888898.mount: Deactivated successfully. Apr 22 15:09:58.028793 containerd[1459]: time="2025-04-22T15:09:58.028745033Z" level=info msg="CreateContainer within sandbox \"1c698a46c41a66f4710e18aa79c1cf3c9b2324e6b07e10464f2ccacce3937cd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab\"" Apr 22 15:09:58.029341 containerd[1459]: time="2025-04-22T15:09:58.029303604Z" level=info msg="StartContainer for \"77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab\"" Apr 22 15:09:58.030429 containerd[1459]: time="2025-04-22T15:09:58.030393428Z" level=info msg="connecting to shim 77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab" address="unix:///run/containerd/s/abbb586999676be019cb72778d24f1c68f42fdc04eadc7c0a92ddd2955dad34a" protocol=ttrpc version=3 Apr 22 15:09:58.051564 systemd[1]: Started cri-containerd-77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab.scope - libcontainer container 77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab. 
Apr 22 15:09:58.088862 containerd[1459]: time="2025-04-22T15:09:58.088811146Z" level=info msg="StartContainer for \"77aa4cacd83cfb158840ec27153efa3a72646ef725a4c10599c214bc8ae02bab\" returns successfully" Apr 22 15:09:58.609365 containerd[1459]: time="2025-04-22T15:09:58.609316963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-glzn9,Uid:f76d273b-83c4-488a-9a10-4a1ed8030bc8,Namespace:calico-apiserver,Attempt:0,}" Apr 22 15:09:58.725666 systemd-networkd[1399]: calice2c7f9d144: Link UP Apr 22 15:09:58.726132 systemd-networkd[1399]: calice2c7f9d144: Gained carrier Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.648 [INFO][4155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0 calico-apiserver-79cf6657d9- calico-apiserver f76d273b-83c4-488a-9a10-4a1ed8030bc8 759 0 2025-04-22 15:09:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79cf6657d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79cf6657d9-glzn9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calice2c7f9d144 [] []}} ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.649 [INFO][4155] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.679 [INFO][4169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" HandleID="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Workload="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.691 [INFO][4169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" HandleID="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Workload="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000604890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79cf6657d9-glzn9", "timestamp":"2025-04-22 15:09:58.679747784 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.691 [INFO][4169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.692 [INFO][4169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.692 [INFO][4169] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.694 [INFO][4169] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.698 [INFO][4169] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.704 [INFO][4169] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.706 [INFO][4169] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.709 [INFO][4169] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.709 [INFO][4169] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.711 [INFO][4169] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0 Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.715 [INFO][4169] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.721 [INFO][4169] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.721 [INFO][4169] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" host="localhost" Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.721 [INFO][4169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 22 15:09:58.738682 containerd[1459]: 2025-04-22 15:09:58.721 [INFO][4169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" HandleID="k8s-pod-network.34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Workload="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.724 [INFO][4155] cni-plugin/k8s.go 386: Populated endpoint ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0", GenerateName:"calico-apiserver-79cf6657d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f76d273b-83c4-488a-9a10-4a1ed8030bc8", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cf6657d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79cf6657d9-glzn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice2c7f9d144", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.724 [INFO][4155] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.724 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice2c7f9d144 ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.725 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.726 [INFO][4155] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0", GenerateName:"calico-apiserver-79cf6657d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"f76d273b-83c4-488a-9a10-4a1ed8030bc8", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cf6657d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0", Pod:"calico-apiserver-79cf6657d9-glzn9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice2c7f9d144", MAC:"6a:db:36:5d:12:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:58.739802 containerd[1459]: 2025-04-22 15:09:58.735 [INFO][4155] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-glzn9" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--glzn9-eth0" Apr 22 15:09:58.764970 containerd[1459]: time="2025-04-22T15:09:58.764904369Z" level=info msg="connecting to shim 34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0" address="unix:///run/containerd/s/e3d078a08d53c50388cf4b435b082d8245ba7ea09b6571adc620abb862878c75" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:58.777817 kubelet[2705]: I0422 15:09:58.777733 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n7827" podStartSLOduration=29.77771651 podStartE2EDuration="29.77771651s" podCreationTimestamp="2025-04-22 15:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:09:58.763210376 +0000 UTC m=+45.227466836" watchObservedRunningTime="2025-04-22 15:09:58.77771651 +0000 UTC m=+45.241972970" Apr 22 15:09:58.800611 systemd[1]: Started cri-containerd-34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0.scope - libcontainer container 34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0. 
Apr 22 15:09:58.813442 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:09:58.840758 containerd[1459]: time="2025-04-22T15:09:58.840716234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-glzn9,Uid:f76d273b-83c4-488a-9a10-4a1ed8030bc8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0\"" Apr 22 15:09:58.843110 containerd[1459]: time="2025-04-22T15:09:58.843062953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Apr 22 15:09:58.853489 systemd-networkd[1399]: cali37369a8b196: Gained IPv6LL Apr 22 15:09:59.356588 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:40642.service - OpenSSH per-connection server daemon (10.0.0.1:40642). Apr 22 15:09:59.409329 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 40642 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:09:59.411066 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:09:59.419448 systemd-logind[1444]: New session 13 of user core. Apr 22 15:09:59.430625 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 22 15:09:59.610084 containerd[1459]: time="2025-04-22T15:09:59.609952998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdb866d4d-8nv7n,Uid:3db1631a-c539-423a-8063-5260e70b95b8,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:59.610752 containerd[1459]: time="2025-04-22T15:09:59.610406815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sw8fk,Uid:7eb1ff10-8266-4cd9-92c6-ab19f470fcc9,Namespace:calico-system,Attempt:0,}" Apr 22 15:09:59.610752 containerd[1459]: time="2025-04-22T15:09:59.610413815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-9xlkg,Uid:ad328a64-c186-4d75-8652-9167b5eb2598,Namespace:calico-apiserver,Attempt:0,}" Apr 22 15:09:59.626296 sshd[4246]: Connection closed by 10.0.0.1 port 40642 Apr 22 15:09:59.626720 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Apr 22 15:09:59.631998 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:40642.service: Deactivated successfully. Apr 22 15:09:59.635992 systemd[1]: session-13.scope: Deactivated successfully. Apr 22 15:09:59.638421 systemd-logind[1444]: Session 13 logged out. Waiting for processes to exit. Apr 22 15:09:59.639493 systemd-logind[1444]: Removed session 13. 
Apr 22 15:09:59.763812 systemd-networkd[1399]: cali078f79917bd: Link UP Apr 22 15:09:59.764341 systemd-networkd[1399]: cali078f79917bd: Gained carrier Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.673 [INFO][4269] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sw8fk-eth0 csi-node-driver- calico-system 7eb1ff10-8266-4cd9-92c6-ab19f470fcc9 634 0 2025-04-22 15:09:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sw8fk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali078f79917bd [] []}} ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.673 [INFO][4269] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.707 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" HandleID="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Workload="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" HandleID="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Workload="localhost-k8s-csi--node--driver--sw8fk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000142e80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sw8fk", "timestamp":"2025-04-22 15:09:59.707841979 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4306] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.728 [INFO][4306] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.733 [INFO][4306] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.737 [INFO][4306] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.739 [INFO][4306] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.741 [INFO][4306] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.741 [INFO][4306] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.743 [INFO][4306] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7 Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.747 [INFO][4306] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.754 [INFO][4306] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.754 [INFO][4306] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" host="localhost" Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.754 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 22 15:09:59.778105 containerd[1459]: 2025-04-22 15:09:59.754 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" HandleID="k8s-pod-network.8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Workload="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.758 [INFO][4269] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sw8fk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9", ResourceVersion:"634", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sw8fk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali078f79917bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.758 [INFO][4269] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.758 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali078f79917bd ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.763 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.763 [INFO][4269] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sw8fk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7eb1ff10-8266-4cd9-92c6-ab19f470fcc9", ResourceVersion:"634", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7", Pod:"csi-node-driver-sw8fk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali078f79917bd", MAC:"a2:01:02:92:e8:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.778945 containerd[1459]: 2025-04-22 15:09:59.775 [INFO][4269] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" Namespace="calico-system" Pod="csi-node-driver-sw8fk" WorkloadEndpoint="localhost-k8s-csi--node--driver--sw8fk-eth0" Apr 22 15:09:59.809737 systemd-networkd[1399]: calibd9243479e7: Link UP Apr 22 15:09:59.810009 systemd-networkd[1399]: calibd9243479e7: Gained carrier Apr 22 15:09:59.824012 containerd[1459]: time="2025-04-22T15:09:59.823441473Z" level=info msg="connecting to shim 8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7" address="unix:///run/containerd/s/fa2e28550e8209ac62f820f3508d4824670b696f0e8995e2300343fb022aea48" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.677 [INFO][4256] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0 calico-kube-controllers-6fdb866d4d- calico-system 3db1631a-c539-423a-8063-5260e70b95b8 756 0 2025-04-22 15:09:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6fdb866d4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6fdb866d4d-8nv7n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibd9243479e7 [] []}} ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.677 [INFO][4256] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.715 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" HandleID="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Workload="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" HandleID="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Workload="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d970), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6fdb866d4d-8nv7n", "timestamp":"2025-04-22 15:09:59.715455438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.726 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.755 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.755 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.759 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.767 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.778 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.781 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.784 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.785 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.787 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.791 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" host="localhost" Apr 22 15:09:59.825402 
containerd[1459]: 2025-04-22 15:09:59.798 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.798 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" host="localhost" Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.798 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Apr 22 15:09:59.825402 containerd[1459]: 2025-04-22 15:09:59.798 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" HandleID="k8s-pod-network.d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Workload="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.804 [INFO][4256] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0", GenerateName:"calico-kube-controllers-6fdb866d4d-", Namespace:"calico-system", SelfLink:"", UID:"3db1631a-c539-423a-8063-5260e70b95b8", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdb866d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6fdb866d4d-8nv7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd9243479e7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.804 [INFO][4256] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.804 [INFO][4256] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd9243479e7 ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" 
Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.809 [INFO][4256] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.809 [INFO][4256] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0", GenerateName:"calico-kube-controllers-6fdb866d4d-", Namespace:"calico-system", SelfLink:"", UID:"3db1631a-c539-423a-8063-5260e70b95b8", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6fdb866d4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e", Pod:"calico-kube-controllers-6fdb866d4d-8nv7n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibd9243479e7", MAC:"d2:ba:f4:de:26:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.825924 containerd[1459]: 2025-04-22 15:09:59.821 [INFO][4256] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" Namespace="calico-system" Pod="calico-kube-controllers-6fdb866d4d-8nv7n" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6fdb866d4d--8nv7n-eth0" Apr 22 15:09:59.856639 systemd[1]: Started cri-containerd-8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7.scope - libcontainer container 8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7. 
Apr 22 15:09:59.862602 containerd[1459]: time="2025-04-22T15:09:59.862557995Z" level=info msg="connecting to shim d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e" address="unix:///run/containerd/s/07c30325450073b55fad41d2ddbe9277e1511970f8b1112b0f807182b03698ea" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:59.872116 systemd-networkd[1399]: calie5e441af798: Link UP Apr 22 15:09:59.873125 systemd-networkd[1399]: calie5e441af798: Gained carrier Apr 22 15:09:59.882208 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.678 [INFO][4279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0 calico-apiserver-79cf6657d9- calico-apiserver ad328a64-c186-4d75-8652-9167b5eb2598 758 0 2025-04-22 15:09:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:79cf6657d9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-79cf6657d9-9xlkg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5e441af798 [] []}} ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.678 [INFO][4279] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.716 [INFO][4318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" HandleID="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Workload="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.731 [INFO][4318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" HandleID="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Workload="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000260b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-79cf6657d9-9xlkg", "timestamp":"2025-04-22 15:09:59.716132684 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.732 [INFO][4318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.799 [INFO][4318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.800 [INFO][4318] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.804 [INFO][4318] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.820 [INFO][4318] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.833 [INFO][4318] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.835 [INFO][4318] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.843 [INFO][4318] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.843 [INFO][4318] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.846 [INFO][4318] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11 Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.854 [INFO][4318] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.864 [INFO][4318] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.864 [INFO][4318] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" host="localhost" Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.864 [INFO][4318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 22 15:09:59.888769 containerd[1459]: 2025-04-22 15:09:59.864 [INFO][4318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" HandleID="k8s-pod-network.e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Workload="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.868 [INFO][4279] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0", GenerateName:"calico-apiserver-79cf6657d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad328a64-c186-4d75-8652-9167b5eb2598", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cf6657d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-79cf6657d9-9xlkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e441af798", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.868 [INFO][4279] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.868 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5e441af798 ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.872 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.872 [INFO][4279] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0", GenerateName:"calico-apiserver-79cf6657d9-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad328a64-c186-4d75-8652-9167b5eb2598", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"79cf6657d9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11", Pod:"calico-apiserver-79cf6657d9-9xlkg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e441af798", MAC:"c2:ef:69:e5:c9:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:09:59.889575 containerd[1459]: 2025-04-22 15:09:59.884 [INFO][4279] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" Namespace="calico-apiserver" Pod="calico-apiserver-79cf6657d9-9xlkg" WorkloadEndpoint="localhost-k8s-calico--apiserver--79cf6657d9--9xlkg-eth0" Apr 22 15:09:59.897715 systemd[1]: Started cri-containerd-d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e.scope - libcontainer container d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e. Apr 22 15:09:59.908296 containerd[1459]: time="2025-04-22T15:09:59.908238629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sw8fk,Uid:7eb1ff10-8266-4cd9-92c6-ab19f470fcc9,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7\"" Apr 22 15:09:59.916185 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:09:59.921111 containerd[1459]: time="2025-04-22T15:09:59.921065427Z" level=info msg="connecting to shim e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11" address="unix:///run/containerd/s/27fbdc0de2b712c7736508a2c466a16d4299a182dcc18d08f01781be08fdeeba" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:09:59.941629 systemd[1]: Started cri-containerd-e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11.scope - libcontainer container e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11. 
Apr 22 15:09:59.942171 systemd-networkd[1399]: calice2c7f9d144: Gained IPv6LL Apr 22 15:09:59.946069 containerd[1459]: time="2025-04-22T15:09:59.945925582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6fdb866d4d-8nv7n,Uid:3db1631a-c539-423a-8063-5260e70b95b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e\"" Apr 22 15:09:59.954541 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:09:59.979243 containerd[1459]: time="2025-04-22T15:09:59.979131520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-79cf6657d9-9xlkg,Uid:ad328a64-c186-4d75-8652-9167b5eb2598,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11\"" Apr 22 15:10:00.575245 containerd[1459]: time="2025-04-22T15:10:00.575193466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:00.575676 containerd[1459]: time="2025-04-22T15:10:00.575622485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Apr 22 15:10:00.576632 containerd[1459]: time="2025-04-22T15:10:00.576573359Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:00.579399 containerd[1459]: time="2025-04-22T15:10:00.579326265Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:00.580249 containerd[1459]: time="2025-04-22T15:10:00.580174344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.737069553s" Apr 22 15:10:00.580249 containerd[1459]: time="2025-04-22T15:10:00.580206222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Apr 22 15:10:00.581965 containerd[1459]: time="2025-04-22T15:10:00.581360486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Apr 22 15:10:00.583951 containerd[1459]: time="2025-04-22T15:10:00.583918841Z" level=info msg="CreateContainer within sandbox \"34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 22 15:10:00.590199 containerd[1459]: time="2025-04-22T15:10:00.589966826Z" level=info msg="Container c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:00.604381 containerd[1459]: time="2025-04-22T15:10:00.603323415Z" level=info msg="CreateContainer within sandbox \"34834836fcb8f824adb7ae08ec2c8d3f98d8ebf788415332e940f4a97509f0e0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61\"" Apr 22 
15:10:00.605288 containerd[1459]: time="2025-04-22T15:10:00.605253801Z" level=info msg="StartContainer for \"c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61\"" Apr 22 15:10:00.606429 containerd[1459]: time="2025-04-22T15:10:00.606401465Z" level=info msg="connecting to shim c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61" address="unix:///run/containerd/s/e3d078a08d53c50388cf4b435b082d8245ba7ea09b6571adc620abb862878c75" protocol=ttrpc version=3 Apr 22 15:10:00.610175 containerd[1459]: time="2025-04-22T15:10:00.609859736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gpmj7,Uid:84927b68-56ab-4969-b1fe-dbce339a18a5,Namespace:kube-system,Attempt:0,}" Apr 22 15:10:00.640578 systemd[1]: Started cri-containerd-c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61.scope - libcontainer container c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61. Apr 22 15:10:00.730110 containerd[1459]: time="2025-04-22T15:10:00.730063635Z" level=info msg="StartContainer for \"c4a7236619bc72d450ce105fa90cf0d19bdee3f98f580ef3a06e40e86cab1b61\" returns successfully" Apr 22 15:10:00.802554 systemd-networkd[1399]: caliaf41ae4a60c: Link UP Apr 22 15:10:00.803258 systemd-networkd[1399]: caliaf41ae4a60c: Gained carrier Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.662 [INFO][4533] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0 coredns-7db6d8ff4d- kube-system 84927b68-56ab-4969-b1fe-dbce339a18a5 755 0 2025-04-22 15:09:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-gpmj7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf41ae4a60c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.662 [INFO][4533] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.739 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" HandleID="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Workload="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.753 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" HandleID="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Workload="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000429740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-gpmj7", "timestamp":"2025-04-22 15:10:00.739879356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.753 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.754 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.754 [INFO][4558] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.755 [INFO][4558] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.761 [INFO][4558] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.772 [INFO][4558] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.775 [INFO][4558] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.780 [INFO][4558] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.781 [INFO][4558] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.784 [INFO][4558] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18 Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.788 [INFO][4558] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.797 [INFO][4558] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.797 [INFO][4558] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" host="localhost" Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.797 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 22 15:10:00.818271 containerd[1459]: 2025-04-22 15:10:00.797 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" HandleID="k8s-pod-network.e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Workload="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.799 [INFO][4533] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"84927b68-56ab-4969-b1fe-dbce339a18a5", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-gpmj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf41ae4a60c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.799 [INFO][4533] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.799 [INFO][4533] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf41ae4a60c ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.803 [INFO][4533] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.805 
[INFO][4533] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"84927b68-56ab-4969-b1fe-dbce339a18a5", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.April, 22, 15, 9, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18", Pod:"coredns-7db6d8ff4d-gpmj7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf41ae4a60c", MAC:"fe:03:ec:6d:aa:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 22 15:10:00.819209 containerd[1459]: 2025-04-22 15:10:00.815 [INFO][4533] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gpmj7" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--gpmj7-eth0" Apr 22 15:10:00.819450 kubelet[2705]: I0422 15:10:00.815150 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79cf6657d9-glzn9" podStartSLOduration=23.076586646 podStartE2EDuration="24.815131726s" podCreationTimestamp="2025-04-22 15:09:36 +0000 UTC" firstStartedPulling="2025-04-22 15:09:58.842653254 +0000 UTC m=+45.306909714" lastFinishedPulling="2025-04-22 15:10:00.581198334 +0000 UTC m=+47.045454794" observedRunningTime="2025-04-22 15:10:00.777649554 +0000 UTC m=+47.241906014" watchObservedRunningTime="2025-04-22 15:10:00.815131726 +0000 UTC m=+47.279388146" Apr 22 15:10:00.842149 containerd[1459]: time="2025-04-22T15:10:00.841934939Z" level=info msg="connecting to shim e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18" address="unix:///run/containerd/s/e11b53c2cdf98346fb87a463ff706244c7a7d968109aefb5a23c5b0ecd87930d" namespace=k8s.io protocol=ttrpc version=3 Apr 22 15:10:00.875607 systemd[1]: Started cri-containerd-e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18.scope - libcontainer container 
e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18. Apr 22 15:10:00.886465 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 22 15:10:00.908021 containerd[1459]: time="2025-04-22T15:10:00.907982639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gpmj7,Uid:84927b68-56ab-4969-b1fe-dbce339a18a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18\"" Apr 22 15:10:00.911767 containerd[1459]: time="2025-04-22T15:10:00.911728776Z" level=info msg="CreateContainer within sandbox \"e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 22 15:10:00.921171 containerd[1459]: time="2025-04-22T15:10:00.920520707Z" level=info msg="Container 1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:00.927098 containerd[1459]: time="2025-04-22T15:10:00.927057109Z" level=info msg="CreateContainer within sandbox \"e05645e6d92269b7c7fd728e4e68d09e9e6528ed3c4ae31f104e2c1c0abd8b18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111\"" Apr 22 15:10:00.927669 containerd[1459]: time="2025-04-22T15:10:00.927639280Z" level=info msg="StartContainer for \"1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111\"" Apr 22 15:10:00.930063 containerd[1459]: time="2025-04-22T15:10:00.930020124Z" level=info msg="connecting to shim 1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111" address="unix:///run/containerd/s/e11b53c2cdf98346fb87a463ff706244c7a7d968109aefb5a23c5b0ecd87930d" protocol=ttrpc version=3 Apr 22 15:10:00.949554 systemd[1]: Started cri-containerd-1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111.scope - libcontainer container 1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111. 
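
The Calico CNI entries above dump the coredns WorkloadEndpoint verbatim: IPAM hands out 192.168.88.134/26, the endpoint records the address as a /32, the host-side veth is caliaf41ae4a60c with MAC fe:03:ec:6d:aa:20, and the ports are logged in hex (0x35 = 53 for DNS over UDP/TCP, 0x23c1 = 9153 for CoreDNS metrics). The following is a minimal, illustrative Go sketch, not Calico code, that decodes those fields and derives the EUI-64 based link-local address the interface would gain under classic SLAAC; the "Gained IPv6LL" entries later in the log do not print the address, and stable-privacy address generation would yield a different value.

    package main

    import (
        "fmt"
        "net"
        "net/netip"
    )

    // linkLocalEUI64 derives the EUI-64 based IPv6 link-local address for a MAC.
    func linkLocalEUI64(mac string) (netip.Addr, error) {
        hw, err := net.ParseMAC(mac)
        if err != nil {
            return netip.Addr{}, err
        }
        var a [16]byte
        a[0], a[1] = 0xfe, 0x80
        a[8] = hw[0] ^ 0x02 // flip the universal/local bit
        a[9], a[10] = hw[1], hw[2]
        a[11], a[12] = 0xff, 0xfe
        a[13], a[14], a[15] = hw[3], hw[4], hw[5]
        return netip.AddrFrom16(a), nil
    }

    func main() {
        // Values copied from the WorkloadEndpoint dump in the log above.
        block := netip.MustParsePrefix("192.168.88.134/26")
        fmt.Println("pod IP:", block.Addr(), "IPAM block:", block.Masked()) // 192.168.88.134, 192.168.88.128/26

        ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
        for name, p := range ports {
            fmt.Printf("port %-8s %d\n", name, p) // 53, 53, 9153
        }

        ll, err := linkLocalEUI64("fe:03:ec:6d:aa:20")
        if err != nil {
            panic(err)
        }
        fmt.Println("EUI-64 link-local for caliaf41ae4a60c:", ll) // fe80::fc03:ecff:fe6d:aa20
    }
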
Apr 22 15:10:00.965478 systemd-networkd[1399]: cali078f79917bd: Gained IPv6LL Apr 22 15:10:00.978608 containerd[1459]: time="2025-04-22T15:10:00.978568517Z" level=info msg="StartContainer for \"1428678e7088811db6b39bcca02d4aa52a0ac23ae7cab13b195c7cd4c66b0111\" returns successfully" Apr 22 15:10:01.030635 systemd-networkd[1399]: calie5e441af798: Gained IPv6LL Apr 22 15:10:01.478497 systemd-networkd[1399]: calibd9243479e7: Gained IPv6LL Apr 22 15:10:01.789096 kubelet[2705]: I0422 15:10:01.788937 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gpmj7" podStartSLOduration=32.788918865 podStartE2EDuration="32.788918865s" podCreationTimestamp="2025-04-22 15:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-22 15:10:01.779104131 +0000 UTC m=+48.243360591" watchObservedRunningTime="2025-04-22 15:10:01.788918865 +0000 UTC m=+48.253175325" Apr 22 15:10:02.231884 containerd[1459]: time="2025-04-22T15:10:02.231829898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:02.232851 containerd[1459]: time="2025-04-22T15:10:02.232809773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Apr 22 15:10:02.234132 containerd[1459]: time="2025-04-22T15:10:02.233841565Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:02.235874 containerd[1459]: time="2025-04-22T15:10:02.235832913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:02.236680 containerd[1459]: time="2025-04-22T15:10:02.236651435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.655254432s" Apr 22 15:10:02.236780 containerd[1459]: time="2025-04-22T15:10:02.236762590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Apr 22 15:10:02.238051 containerd[1459]: time="2025-04-22T15:10:02.238002373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Apr 22 15:10:02.239281 containerd[1459]: time="2025-04-22T15:10:02.239248675Z" level=info msg="CreateContainer within sandbox \"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 22 15:10:02.247103 containerd[1459]: time="2025-04-22T15:10:02.246883721Z" level=info msg="Container e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:02.261860 containerd[1459]: time="2025-04-22T15:10:02.261816630Z" level=info msg="CreateContainer within sandbox \"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f\"" Apr 22 15:10:02.262558 containerd[1459]: time="2025-04-22T15:10:02.262333086Z" level=info msg="StartContainer for \"e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f\"" Apr 22 15:10:02.264783 containerd[1459]: time="2025-04-22T15:10:02.264749774Z" level=info msg="connecting to shim e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f" address="unix:///run/containerd/s/fa2e28550e8209ac62f820f3508d4824670b696f0e8995e2300343fb022aea48" protocol=ttrpc version=3 Apr 22 15:10:02.288568 systemd[1]: Started cri-containerd-e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f.scope - libcontainer container e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f. Apr 22 15:10:02.309466 systemd-networkd[1399]: caliaf41ae4a60c: Gained IPv6LL Apr 22 15:10:02.345712 containerd[1459]: time="2025-04-22T15:10:02.345664987Z" level=info msg="StartContainer for \"e07f28e919a94fc2af077a511fa50cec202cc3896239f8fc4df80c4a419cfe6f\" returns successfully" Apr 22 15:10:03.917108 containerd[1459]: time="2025-04-22T15:10:03.917053055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:03.917558 containerd[1459]: time="2025-04-22T15:10:03.917453557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257" Apr 22 15:10:03.918219 containerd[1459]: time="2025-04-22T15:10:03.918193923Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:03.920304 containerd[1459]: time="2025-04-22T15:10:03.920266190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:03.920894 containerd[1459]: time="2025-04-22T15:10:03.920858763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.682816432s" Apr 22 15:10:03.920933 containerd[1459]: time="2025-04-22T15:10:03.920896121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\"" Apr 22 15:10:03.921852 containerd[1459]: time="2025-04-22T15:10:03.921824719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Apr 22 15:10:03.931004 containerd[1459]: time="2025-04-22T15:10:03.930962067Z" level=info msg="CreateContainer within sandbox \"d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 22 15:10:03.942604 containerd[1459]: time="2025-04-22T15:10:03.942548104Z" level=info msg="Container 30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:03.949985 containerd[1459]: time="2025-04-22T15:10:03.949930171Z" level=info msg="CreateContainer within sandbox 
\"d73ce8275a26cc8c1bedd2ea39bac519c0014dc3bb604de8267ad6ca6751ff7e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\"" Apr 22 15:10:03.950645 containerd[1459]: time="2025-04-22T15:10:03.950520864Z" level=info msg="StartContainer for \"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\"" Apr 22 15:10:03.952582 containerd[1459]: time="2025-04-22T15:10:03.952550972Z" level=info msg="connecting to shim 30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa" address="unix:///run/containerd/s/07c30325450073b55fad41d2ddbe9277e1511970f8b1112b0f807182b03698ea" protocol=ttrpc version=3 Apr 22 15:10:03.977683 systemd[1]: Started cri-containerd-30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa.scope - libcontainer container 30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa. Apr 22 15:10:04.012955 containerd[1459]: time="2025-04-22T15:10:04.012515159Z" level=info msg="StartContainer for \"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\" returns successfully" Apr 22 15:10:04.226696 containerd[1459]: time="2025-04-22T15:10:04.226546781Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:04.227461 containerd[1459]: time="2025-04-22T15:10:04.227417263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Apr 22 15:10:04.229266 containerd[1459]: time="2025-04-22T15:10:04.229242382Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 307.384825ms" Apr 22 15:10:04.229373 containerd[1459]: time="2025-04-22T15:10:04.229269261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Apr 22 15:10:04.230121 containerd[1459]: time="2025-04-22T15:10:04.230099225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Apr 22 15:10:04.232318 containerd[1459]: time="2025-04-22T15:10:04.231949863Z" level=info msg="CreateContainer within sandbox \"e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 22 15:10:04.239524 containerd[1459]: time="2025-04-22T15:10:04.239494131Z" level=info msg="Container 915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:04.244847 containerd[1459]: time="2025-04-22T15:10:04.244814177Z" level=info msg="CreateContainer within sandbox \"e6569eb208c98bb876c7cb2f52850869a8b0ed8775b20ca63c399038722e0a11\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991\"" Apr 22 15:10:04.245264 containerd[1459]: time="2025-04-22T15:10:04.245238959Z" level=info msg="StartContainer for \"915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991\"" Apr 22 15:10:04.246743 containerd[1459]: time="2025-04-22T15:10:04.246704414Z" level=info msg="connecting to shim 
915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991" address="unix:///run/containerd/s/27fbdc0de2b712c7736508a2c466a16d4299a182dcc18d08f01781be08fdeeba" protocol=ttrpc version=3 Apr 22 15:10:04.272560 systemd[1]: Started cri-containerd-915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991.scope - libcontainer container 915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991. Apr 22 15:10:04.377864 containerd[1459]: time="2025-04-22T15:10:04.377782446Z" level=info msg="StartContainer for \"915611ec2cab101729deeb6dc5d394e3b3ef513f124edddb1264f8d6be076991\" returns successfully" Apr 22 15:10:04.637883 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:41582.service - OpenSSH per-connection server daemon (10.0.0.1:41582). Apr 22 15:10:04.703082 sshd[4798]: Accepted publickey for core from 10.0.0.1 port 41582 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:04.703727 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:04.710269 systemd-logind[1444]: New session 14 of user core. Apr 22 15:10:04.723409 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 22 15:10:04.801292 kubelet[2705]: I0422 15:10:04.801172 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-79cf6657d9-9xlkg" podStartSLOduration=24.552777065 podStartE2EDuration="28.801152578s" podCreationTimestamp="2025-04-22 15:09:36 +0000 UTC" firstStartedPulling="2025-04-22 15:09:59.981598757 +0000 UTC m=+46.445855217" lastFinishedPulling="2025-04-22 15:10:04.22997427 +0000 UTC m=+50.694230730" observedRunningTime="2025-04-22 15:10:04.796613418 +0000 UTC m=+51.260869878" watchObservedRunningTime="2025-04-22 15:10:04.801152578 +0000 UTC m=+51.265409038" Apr 22 15:10:04.885297 containerd[1459]: time="2025-04-22T15:10:04.883778182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\" id:\"1c00eb78883b176823a546cca50d26cf9b55aaa5ec8f171a2dd5940014f15ce5\" pid:4824 exited_at:{seconds:1745334604 nanos:874402555}" Apr 22 15:10:04.896196 kubelet[2705]: I0422 15:10:04.896060 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6fdb866d4d-8nv7n" podStartSLOduration=23.922454791 podStartE2EDuration="27.896043323s" podCreationTimestamp="2025-04-22 15:09:37 +0000 UTC" firstStartedPulling="2025-04-22 15:09:59.948109273 +0000 UTC m=+46.412365733" lastFinishedPulling="2025-04-22 15:10:03.921697725 +0000 UTC m=+50.385954265" observedRunningTime="2025-04-22 15:10:04.814242762 +0000 UTC m=+51.278499222" watchObservedRunningTime="2025-04-22 15:10:04.896043323 +0000 UTC m=+51.360299783" Apr 22 15:10:04.986516 sshd[4800]: Connection closed by 10.0.0.1 port 41582 Apr 22 15:10:04.987166 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:04.991137 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:41582.service: Deactivated successfully. Apr 22 15:10:04.992979 systemd[1]: session-14.scope: Deactivated successfully. Apr 22 15:10:04.993667 systemd-logind[1444]: Session 14 logged out. Waiting for processes to exit. Apr 22 15:10:04.994387 systemd-logind[1444]: Removed session 14. 
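
The pod_startup_latency_tracker entry above for calico-apiserver-79cf6657d9-9xlkg is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The short Go sketch below reproduces both printed values purely from the timestamps in that entry; it is arithmetic on the logged data, not kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        // Layout matches Go's default time.Time formatting used in the kubelet
        // entries above (the trailing "m=+..." monotonic reading is dropped).
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps from the calico-apiserver-79cf6657d9-9xlkg entry above.
        created := mustParse("2025-04-22 15:09:36 +0000 UTC")
        firstPull := mustParse("2025-04-22 15:09:59.981598757 +0000 UTC")
        lastPull := mustParse("2025-04-22 15:10:04.22997427 +0000 UTC")
        observed := mustParse("2025-04-22 15:10:04.801152578 +0000 UTC")

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // end-to-end minus image pull time
        fmt.Println("podStartE2EDuration:", e2e) // 28.801152578s
        fmt.Println("podStartSLOduration:", slo) // 24.552777065s
    }
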
Apr 22 15:10:05.726950 containerd[1459]: time="2025-04-22T15:10:05.726896682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:05.728556 containerd[1459]: time="2025-04-22T15:10:05.728419376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Apr 22 15:10:05.729538 containerd[1459]: time="2025-04-22T15:10:05.729484731Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:05.733332 containerd[1459]: time="2025-04-22T15:10:05.733131334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 22 15:10:05.733879 containerd[1459]: time="2025-04-22T15:10:05.733841264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.50371356s" Apr 22 15:10:05.733879 containerd[1459]: time="2025-04-22T15:10:05.733876382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Apr 22 15:10:05.735956 containerd[1459]: time="2025-04-22T15:10:05.735910055Z" level=info msg="CreateContainer within sandbox \"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 22 15:10:05.744007 containerd[1459]: time="2025-04-22T15:10:05.742722403Z" level=info msg="Container aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09: CDI devices from CRI Config.CDIDevices: []" Apr 22 15:10:05.751712 containerd[1459]: time="2025-04-22T15:10:05.751663739Z" level=info msg="CreateContainer within sandbox \"8d8078b2fe43be12b76420fe2791e088c0995e5ac15c2a58f314e2ff273f8ec7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09\"" Apr 22 15:10:05.752359 containerd[1459]: time="2025-04-22T15:10:05.752314551Z" level=info msg="StartContainer for \"aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09\"" Apr 22 15:10:05.753683 containerd[1459]: time="2025-04-22T15:10:05.753656534Z" level=info msg="connecting to shim aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09" address="unix:///run/containerd/s/fa2e28550e8209ac62f820f3508d4824670b696f0e8995e2300343fb022aea48" protocol=ttrpc version=3 Apr 22 15:10:05.775538 systemd[1]: Started cri-containerd-aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09.scope - libcontainer container aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09. 
Apr 22 15:10:05.817540 containerd[1459]: time="2025-04-22T15:10:05.816902420Z" level=info msg="StartContainer for \"aaeaaea0f20727d63cf371f2d4bfe3a8546fc8b222820be7d38dc55f29aa9b09\" returns successfully" Apr 22 15:10:06.705169 kubelet[2705]: I0422 15:10:06.705035 2705 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 22 15:10:06.705169 kubelet[2705]: I0422 15:10:06.705085 2705 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 22 15:10:06.815019 kubelet[2705]: I0422 15:10:06.814647 2705 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sw8fk" podStartSLOduration=23.99462716 podStartE2EDuration="29.814629562s" podCreationTimestamp="2025-04-22 15:09:37 +0000 UTC" firstStartedPulling="2025-04-22 15:09:59.914657027 +0000 UTC m=+46.378913447" lastFinishedPulling="2025-04-22 15:10:05.734659389 +0000 UTC m=+52.198915849" observedRunningTime="2025-04-22 15:10:06.814054026 +0000 UTC m=+53.278310486" watchObservedRunningTime="2025-04-22 15:10:06.814629562 +0000 UTC m=+53.278885982" Apr 22 15:10:10.003921 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:41586.service - OpenSSH per-connection server daemon (10.0.0.1:41586). Apr 22 15:10:10.065085 sshd[4880]: Accepted publickey for core from 10.0.0.1 port 41586 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:10.066459 sshd-session[4880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:10.070681 systemd-logind[1444]: New session 15 of user core. Apr 22 15:10:10.081651 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 22 15:10:10.226997 sshd[4882]: Connection closed by 10.0.0.1 port 41586 Apr 22 15:10:10.227365 sshd-session[4880]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:10.231317 systemd-logind[1444]: Session 15 logged out. Waiting for processes to exit. Apr 22 15:10:10.231862 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:41586.service: Deactivated successfully. Apr 22 15:10:10.233563 systemd[1]: session-15.scope: Deactivated successfully. Apr 22 15:10:10.234448 systemd-logind[1444]: Removed session 15. Apr 22 15:10:15.238969 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:50284.service - OpenSSH per-connection server daemon (10.0.0.1:50284). Apr 22 15:10:15.297716 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 50284 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:15.299154 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:15.303193 systemd-logind[1444]: New session 16 of user core. Apr 22 15:10:15.317548 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 22 15:10:15.445840 sshd[4910]: Connection closed by 10.0.0.1 port 50284 Apr 22 15:10:15.446313 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:15.449804 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:50284.service: Deactivated successfully. Apr 22 15:10:15.452523 systemd[1]: session-16.scope: Deactivated successfully. Apr 22 15:10:15.453318 systemd-logind[1444]: Session 16 logged out. Waiting for processes to exit. Apr 22 15:10:15.454479 systemd-logind[1444]: Removed session 16. 
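
The two kubelet csi_plugin messages above record the Tigera CSI driver registering over the kubelet plugin socket at /var/lib/kubelet/plugins/csi.tigera.io/csi.sock. As a purely illustrative check, not how kubelet itself registers plugins (kubelet speaks gRPC over this socket), one might confirm something is listening by dialing it:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path taken from the kubelet csi_plugin messages above; this
        // sketch only checks that the socket accepts connections.
        const sock = "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("plugin socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("csi.tigera.io plugin socket is accepting connections")
    }
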
Apr 22 15:10:16.177053 containerd[1459]: time="2025-04-22T15:10:16.177001069Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\" id:\"11cc6027038d2bfcfa5681b9db77af7491d77de851217246be0e90d709d1cf97\" pid:4934 exited_at:{seconds:1745334616 nanos:176733918}" Apr 22 15:10:20.456763 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:50296.service - OpenSSH per-connection server daemon (10.0.0.1:50296). Apr 22 15:10:20.508009 sshd[4945]: Accepted publickey for core from 10.0.0.1 port 50296 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:20.509173 sshd-session[4945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:20.513254 systemd-logind[1444]: New session 17 of user core. Apr 22 15:10:20.524478 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 22 15:10:20.683883 sshd[4947]: Connection closed by 10.0.0.1 port 50296 Apr 22 15:10:20.684411 sshd-session[4945]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:20.699236 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:50296.service: Deactivated successfully. Apr 22 15:10:20.701153 systemd[1]: session-17.scope: Deactivated successfully. Apr 22 15:10:20.703226 systemd-logind[1444]: Session 17 logged out. Waiting for processes to exit. Apr 22 15:10:20.704598 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:50312.service - OpenSSH per-connection server daemon (10.0.0.1:50312). Apr 22 15:10:20.706088 systemd-logind[1444]: Removed session 17. Apr 22 15:10:20.760948 sshd[4960]: Accepted publickey for core from 10.0.0.1 port 50312 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:20.762112 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:20.766213 systemd-logind[1444]: New session 18 of user core. Apr 22 15:10:20.776515 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 22 15:10:21.008280 sshd[4963]: Connection closed by 10.0.0.1 port 50312 Apr 22 15:10:21.008962 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:21.020638 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:50312.service: Deactivated successfully. Apr 22 15:10:21.022122 systemd[1]: session-18.scope: Deactivated successfully. Apr 22 15:10:21.023561 systemd-logind[1444]: Session 18 logged out. Waiting for processes to exit. Apr 22 15:10:21.025004 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:50318.service - OpenSSH per-connection server daemon (10.0.0.1:50318). Apr 22 15:10:21.026564 systemd-logind[1444]: Removed session 18. Apr 22 15:10:21.081745 sshd[4975]: Accepted publickey for core from 10.0.0.1 port 50318 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:21.082546 sshd-session[4975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:21.087139 systemd-logind[1444]: New session 19 of user core. Apr 22 15:10:21.095557 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 22 15:10:22.625400 sshd[4978]: Connection closed by 10.0.0.1 port 50318 Apr 22 15:10:22.626246 sshd-session[4975]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:22.642255 systemd[1]: Started sshd@19-10.0.0.54:22-10.0.0.1:53202.service - OpenSSH per-connection server daemon (10.0.0.1:53202). Apr 22 15:10:22.643003 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:50318.service: Deactivated successfully. 
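
Each "Accepted publickey" line above identifies the client key only by its SHA256 fingerprint. The hedged Go sketch below shows how such a fingerprint is computed from an authorized_keys entry using golang.org/x/crypto/ssh; the file path is an assumption for illustration (on Flatcar the core user's keys conventionally live under /home/core/.ssh), and the output format matches the "SHA256:..." form sshd logs.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Hypothetical path used for illustration; any authorized_keys-format
        // public key line works as input.
        raw, err := os.ReadFile("/home/core/.ssh/authorized_keys")
        if err != nil {
            panic(err)
        }
        key, _, _, _, err := ssh.ParseAuthorizedKey(raw)
        if err != nil {
            panic(err)
        }
        // Prints e.g. "SHA256:..." in the same form as the sshd entries above.
        fmt.Println(ssh.FingerprintSHA256(key))
    }
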
Apr 22 15:10:22.645102 systemd[1]: session-19.scope: Deactivated successfully. Apr 22 15:10:22.645332 systemd[1]: session-19.scope: Consumed 508ms CPU time, 70.3M memory peak. Apr 22 15:10:22.647544 systemd-logind[1444]: Session 19 logged out. Waiting for processes to exit. Apr 22 15:10:22.650476 systemd-logind[1444]: Removed session 19. Apr 22 15:10:22.700243 sshd[5002]: Accepted publickey for core from 10.0.0.1 port 53202 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:22.701569 sshd-session[5002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:22.706417 systemd-logind[1444]: New session 20 of user core. Apr 22 15:10:22.716544 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 22 15:10:23.023565 sshd[5008]: Connection closed by 10.0.0.1 port 53202 Apr 22 15:10:23.024617 sshd-session[5002]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:23.039334 systemd[1]: sshd@19-10.0.0.54:22-10.0.0.1:53202.service: Deactivated successfully. Apr 22 15:10:23.041987 systemd[1]: session-20.scope: Deactivated successfully. Apr 22 15:10:23.043468 systemd-logind[1444]: Session 20 logged out. Waiting for processes to exit. Apr 22 15:10:23.045069 systemd[1]: Started sshd@20-10.0.0.54:22-10.0.0.1:53214.service - OpenSSH per-connection server daemon (10.0.0.1:53214). Apr 22 15:10:23.047922 systemd-logind[1444]: Removed session 20. Apr 22 15:10:23.109445 sshd[5019]: Accepted publickey for core from 10.0.0.1 port 53214 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:23.110896 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:23.116873 systemd-logind[1444]: New session 21 of user core. Apr 22 15:10:23.123542 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 22 15:10:23.241167 sshd[5022]: Connection closed by 10.0.0.1 port 53214 Apr 22 15:10:23.241510 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:23.244749 systemd[1]: sshd@20-10.0.0.54:22-10.0.0.1:53214.service: Deactivated successfully. Apr 22 15:10:23.247951 systemd[1]: session-21.scope: Deactivated successfully. Apr 22 15:10:23.248867 systemd-logind[1444]: Session 21 logged out. Waiting for processes to exit. Apr 22 15:10:23.249829 systemd-logind[1444]: Removed session 21. Apr 22 15:10:25.727252 containerd[1459]: time="2025-04-22T15:10:25.727212048Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5d70b893bc529bba58e1dd2fce4cef07144a8113dd5e241395fe2fee761a856d\" id:\"1ceb4cf96014de9c0e773e5e1f534020f65632e47efe8e82bdbbc18356e72ccb\" pid:5046 exited_at:{seconds:1745334625 nanos:726921216}" Apr 22 15:10:28.257029 systemd[1]: Started sshd@21-10.0.0.54:22-10.0.0.1:53228.service - OpenSSH per-connection server daemon (10.0.0.1:53228). Apr 22 15:10:28.309749 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 53228 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:28.310832 sshd-session[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:28.315169 systemd-logind[1444]: New session 22 of user core. Apr 22 15:10:28.319558 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 22 15:10:28.451413 sshd[5064]: Connection closed by 10.0.0.1 port 53228 Apr 22 15:10:28.451686 sshd-session[5062]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:28.454977 systemd[1]: sshd@21-10.0.0.54:22-10.0.0.1:53228.service: Deactivated successfully. Apr 22 15:10:28.456806 systemd[1]: session-22.scope: Deactivated successfully. Apr 22 15:10:28.457412 systemd-logind[1444]: Session 22 logged out. Waiting for processes to exit. Apr 22 15:10:28.458425 systemd-logind[1444]: Removed session 22. Apr 22 15:10:33.466067 systemd[1]: Started sshd@22-10.0.0.54:22-10.0.0.1:47518.service - OpenSSH per-connection server daemon (10.0.0.1:47518). Apr 22 15:10:33.517188 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 47518 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:33.517704 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:33.521800 systemd-logind[1444]: New session 23 of user core. Apr 22 15:10:33.527513 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 22 15:10:33.674702 sshd[5089]: Connection closed by 10.0.0.1 port 47518 Apr 22 15:10:33.675416 sshd-session[5087]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:33.678683 systemd[1]: sshd@22-10.0.0.54:22-10.0.0.1:47518.service: Deactivated successfully. Apr 22 15:10:33.680449 systemd[1]: session-23.scope: Deactivated successfully. Apr 22 15:10:33.681153 systemd-logind[1444]: Session 23 logged out. Waiting for processes to exit. Apr 22 15:10:33.681982 systemd-logind[1444]: Removed session 23. Apr 22 15:10:36.139452 containerd[1459]: time="2025-04-22T15:10:36.139387457Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30b3304bd36c3a429f64ddab1e92957e363245bfa97bfb8bdfaf0207e922eafa\" id:\"422e27cf1da4b87cf65e762db2209ea279b4415e2ce9b2b374e919b02560bd50\" pid:5113 exited_at:{seconds:1745334636 nanos:139150222}" Apr 22 15:10:38.688240 systemd[1]: Started sshd@23-10.0.0.54:22-10.0.0.1:47534.service - OpenSSH per-connection server daemon (10.0.0.1:47534). Apr 22 15:10:38.755547 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 47534 ssh2: RSA SHA256:vSMEaMy/bsMRI0wkzsr2vqgekxsKtnIZxYOZanmPdeI Apr 22 15:10:38.755989 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 22 15:10:38.760254 systemd-logind[1444]: New session 24 of user core. Apr 22 15:10:38.769502 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 22 15:10:38.909678 sshd[5126]: Connection closed by 10.0.0.1 port 47534 Apr 22 15:10:38.910123 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Apr 22 15:10:38.913753 systemd[1]: sshd@23-10.0.0.54:22-10.0.0.1:47534.service: Deactivated successfully. Apr 22 15:10:38.916505 systemd[1]: session-24.scope: Deactivated successfully. Apr 22 15:10:38.917374 systemd-logind[1444]: Session 24 logged out. Waiting for processes to exit. Apr 22 15:10:38.918094 systemd-logind[1444]: Removed session 24.
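
The containerd TaskExit events above carry exited_at as raw Unix seconds and nanoseconds (e.g. seconds:1745334636 nanos:139150222 in the final entry), while the journal prefixes wall-clock times. A small Go check, using only the values printed in the log, confirms the two agree:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // exited_at from the final TaskExit event above.
        exited := time.Unix(1745334636, 139150222).UTC()
        fmt.Println(exited) // 2025-04-22 15:10:36.139150222 +0000 UTC
        // The surrounding containerd log line is stamped 15:10:36.139387457Z,
        // i.e. a few hundred microseconds after the exit was recorded.
    }
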