Mar 21 12:42:16.900879 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 21 12:42:16.900901 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Mar 21 10:53:54 -00 2025
Mar 21 12:42:16.900911 kernel: KASLR enabled
Mar 21 12:42:16.900917 kernel: efi: EFI v2.7 by EDK II
Mar 21 12:42:16.900923 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Mar 21 12:42:16.900929 kernel: random: crng init done
Mar 21 12:42:16.900935 kernel: secureboot: Secure boot disabled
Mar 21 12:42:16.900941 kernel: ACPI: Early table checksum verification disabled
Mar 21 12:42:16.900947 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Mar 21 12:42:16.900957 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 21 12:42:16.900975 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.900981 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.900988 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.900994 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901000 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901008 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901014 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901020 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901034 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:42:16.901041 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 21 12:42:16.901047 kernel: NUMA: Failed to initialise from firmware
Mar 21 12:42:16.901053 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:42:16.901059 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Mar 21 12:42:16.901065 kernel: Zone ranges:
Mar 21 12:42:16.901071 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:42:16.901078 kernel: DMA32 empty
Mar 21 12:42:16.901084 kernel: Normal empty
Mar 21 12:42:16.901090 kernel: Movable zone start for each node
Mar 21 12:42:16.901096 kernel: Early memory node ranges
Mar 21 12:42:16.901103 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Mar 21 12:42:16.901108 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Mar 21 12:42:16.901114 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Mar 21 12:42:16.901120 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Mar 21 12:42:16.901126 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Mar 21 12:42:16.901133 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Mar 21 12:42:16.901138 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Mar 21 12:42:16.901144 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Mar 21 12:42:16.901151 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Mar 21 12:42:16.901157 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 21 12:42:16.901163 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 21 12:42:16.901172 kernel: psci: probing for conduit method from ACPI.
Mar 21 12:42:16.901178 kernel: psci: PSCIv1.1 detected in firmware.
Mar 21 12:42:16.901185 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 21 12:42:16.901192 kernel: psci: Trusted OS migration not required
Mar 21 12:42:16.901198 kernel: psci: SMC Calling Convention v1.1
Mar 21 12:42:16.901205 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 21 12:42:16.901211 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Mar 21 12:42:16.901217 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Mar 21 12:42:16.901224 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 21 12:42:16.901230 kernel: Detected PIPT I-cache on CPU0
Mar 21 12:42:16.901236 kernel: CPU features: detected: GIC system register CPU interface
Mar 21 12:42:16.901243 kernel: CPU features: detected: Hardware dirty bit management
Mar 21 12:42:16.901249 kernel: CPU features: detected: Spectre-v4
Mar 21 12:42:16.901256 kernel: CPU features: detected: Spectre-BHB
Mar 21 12:42:16.901263 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 21 12:42:16.901269 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 21 12:42:16.901275 kernel: CPU features: detected: ARM erratum 1418040
Mar 21 12:42:16.901282 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 21 12:42:16.901288 kernel: alternatives: applying boot alternatives
Mar 21 12:42:16.901295 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=93cb17f03b776356c0810b716fff0c7c2012572bbe395c702f6873d17674684f
Mar 21 12:42:16.901302 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 21 12:42:16.901308 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 21 12:42:16.901315 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 21 12:42:16.901321 kernel: Fallback order for Node 0: 0
Mar 21 12:42:16.901329 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 21 12:42:16.901335 kernel: Policy zone: DMA
Mar 21 12:42:16.901341 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 21 12:42:16.901347 kernel: software IO TLB: area num 4.
Mar 21 12:42:16.901354 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Mar 21 12:42:16.901361 kernel: Memory: 2387408K/2572288K available (10304K kernel code, 2186K rwdata, 8096K rodata, 38464K init, 897K bss, 184880K reserved, 0K cma-reserved)
Mar 21 12:42:16.901367 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 21 12:42:16.901382 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 21 12:42:16.901390 kernel: rcu: RCU event tracing is enabled.
Mar 21 12:42:16.901396 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 21 12:42:16.901403 kernel: Trampoline variant of Tasks RCU enabled.
Mar 21 12:42:16.901409 kernel: Tracing variant of Tasks RCU enabled.
Mar 21 12:42:16.901418 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 21 12:42:16.901425 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 21 12:42:16.901431 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 21 12:42:16.901437 kernel: GICv3: 256 SPIs implemented
Mar 21 12:42:16.901443 kernel: GICv3: 0 Extended SPIs implemented
Mar 21 12:42:16.901449 kernel: Root IRQ handler: gic_handle_irq
Mar 21 12:42:16.901456 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Mar 21 12:42:16.901462 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 21 12:42:16.901468 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 21 12:42:16.901475 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Mar 21 12:42:16.901481 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Mar 21 12:42:16.901489 kernel: GICv3: using LPI property table @0x00000000400f0000
Mar 21 12:42:16.901495 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Mar 21 12:42:16.901502 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 21 12:42:16.901508 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:42:16.901514 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 21 12:42:16.901521 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 21 12:42:16.901527 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 21 12:42:16.901534 kernel: arm-pv: using stolen time PV
Mar 21 12:42:16.901540 kernel: Console: colour dummy device 80x25
Mar 21 12:42:16.901547 kernel: ACPI: Core revision 20230628
Mar 21 12:42:16.901553 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 21 12:42:16.901561 kernel: pid_max: default: 32768 minimum: 301
Mar 21 12:42:16.901568 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 21 12:42:16.901574 kernel: landlock: Up and running.
Mar 21 12:42:16.901581 kernel: SELinux: Initializing.
Mar 21 12:42:16.901587 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:42:16.901593 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:42:16.901600 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:42:16.901607 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:42:16.901613 kernel: rcu: Hierarchical SRCU implementation.
Mar 21 12:42:16.901621 kernel: rcu: Max phase no-delay instances is 400.
Mar 21 12:42:16.901632 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 21 12:42:16.901645 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 21 12:42:16.901651 kernel: Remapping and enabling EFI services.
Mar 21 12:42:16.901658 kernel: smp: Bringing up secondary CPUs ...
Mar 21 12:42:16.901664 kernel: Detected PIPT I-cache on CPU1
Mar 21 12:42:16.901671 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 21 12:42:16.901677 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Mar 21 12:42:16.901684 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:42:16.901691 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 21 12:42:16.901698 kernel: Detected PIPT I-cache on CPU2
Mar 21 12:42:16.901709 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 21 12:42:16.901718 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Mar 21 12:42:16.901725 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:42:16.901732 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 21 12:42:16.901739 kernel: Detected PIPT I-cache on CPU3
Mar 21 12:42:16.901746 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 21 12:42:16.901753 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Mar 21 12:42:16.901761 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 21 12:42:16.901768 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 21 12:42:16.901775 kernel: smp: Brought up 1 node, 4 CPUs
Mar 21 12:42:16.901782 kernel: SMP: Total of 4 processors activated.
Mar 21 12:42:16.901789 kernel: CPU features: detected: 32-bit EL0 Support
Mar 21 12:42:16.901795 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 21 12:42:16.901802 kernel: CPU features: detected: Common not Private translations
Mar 21 12:42:16.901809 kernel: CPU features: detected: CRC32 instructions
Mar 21 12:42:16.901817 kernel: CPU features: detected: Enhanced Virtualization Traps
Mar 21 12:42:16.901824 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 21 12:42:16.901831 kernel: CPU features: detected: LSE atomic instructions
Mar 21 12:42:16.901838 kernel: CPU features: detected: Privileged Access Never
Mar 21 12:42:16.901844 kernel: CPU features: detected: RAS Extension Support
Mar 21 12:42:16.901851 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 21 12:42:16.901858 kernel: CPU: All CPU(s) started at EL1
Mar 21 12:42:16.901865 kernel: alternatives: applying system-wide alternatives
Mar 21 12:42:16.901871 kernel: devtmpfs: initialized
Mar 21 12:42:16.901878 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 21 12:42:16.901887 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 21 12:42:16.901894 kernel: pinctrl core: initialized pinctrl subsystem
Mar 21 12:42:16.901901 kernel: SMBIOS 3.0.0 present.
Mar 21 12:42:16.901907 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Mar 21 12:42:16.901914 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 21 12:42:16.901921 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 21 12:42:16.901928 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 21 12:42:16.901935 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 21 12:42:16.901943 kernel: audit: initializing netlink subsys (disabled)
Mar 21 12:42:16.901950 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Mar 21 12:42:16.901957 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 21 12:42:16.901964 kernel: cpuidle: using governor menu
Mar 21 12:42:16.901971 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 21 12:42:16.901978 kernel: ASID allocator initialised with 32768 entries
Mar 21 12:42:16.901984 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 21 12:42:16.901991 kernel: Serial: AMBA PL011 UART driver
Mar 21 12:42:16.901998 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Mar 21 12:42:16.902006 kernel: Modules: 0 pages in range for non-PLT usage
Mar 21 12:42:16.902013 kernel: Modules: 509248 pages in range for PLT usage
Mar 21 12:42:16.902020 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 21 12:42:16.902030 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Mar 21 12:42:16.902039 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Mar 21 12:42:16.902046 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Mar 21 12:42:16.902052 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 21 12:42:16.902059 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Mar 21 12:42:16.902066 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Mar 21 12:42:16.902073 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Mar 21 12:42:16.902081 kernel: ACPI: Added _OSI(Module Device)
Mar 21 12:42:16.902088 kernel: ACPI: Added _OSI(Processor Device)
Mar 21 12:42:16.902101 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 21 12:42:16.902116 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 21 12:42:16.902123 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 21 12:42:16.902130 kernel: ACPI: Interpreter enabled
Mar 21 12:42:16.902136 kernel: ACPI: Using GIC for interrupt routing
Mar 21 12:42:16.902143 kernel: ACPI: MCFG table detected, 1 entries
Mar 21 12:42:16.902150 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 21 12:42:16.902158 kernel: printk: console [ttyAMA0] enabled
Mar 21 12:42:16.902165 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 21 12:42:16.902294 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 21 12:42:16.902370 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 21 12:42:16.902451 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 21 12:42:16.902518 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 21 12:42:16.902586 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 21 12:42:16.902598 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 21 12:42:16.902605 kernel: PCI host bridge to bus 0000:00
Mar 21 12:42:16.902677 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 21 12:42:16.902738 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 21 12:42:16.902796 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 21 12:42:16.902854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 21 12:42:16.902936 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 21 12:42:16.903012 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 21 12:42:16.903089 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 21 12:42:16.903157 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 21 12:42:16.903223 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 21 12:42:16.903288 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 21 12:42:16.903353 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 21 12:42:16.903456 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 21 12:42:16.903524 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 21 12:42:16.903583 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 21 12:42:16.903641 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 21 12:42:16.903650 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 21 12:42:16.903657 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 21 12:42:16.903664 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 21 12:42:16.903671 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 21 12:42:16.903680 kernel: iommu: Default domain type: Translated
Mar 21 12:42:16.903687 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 21 12:42:16.903694 kernel: efivars: Registered efivars operations
Mar 21 12:42:16.903701 kernel: vgaarb: loaded
Mar 21 12:42:16.903708 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 21 12:42:16.903714 kernel: VFS: Disk quotas dquot_6.6.0
Mar 21 12:42:16.903721 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 21 12:42:16.903728 kernel: pnp: PnP ACPI init
Mar 21 12:42:16.903799 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 21 12:42:16.903811 kernel: pnp: PnP ACPI: found 1 devices
Mar 21 12:42:16.903818 kernel: NET: Registered PF_INET protocol family
Mar 21 12:42:16.903825 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 12:42:16.903832 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 12:42:16.903839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 12:42:16.903846 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 12:42:16.903853 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 12:42:16.903860 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 12:42:16.903867 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:42:16.903876 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:42:16.903882 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 12:42:16.903889 kernel: PCI: CLS 0 bytes, default 64
Mar 21 12:42:16.903896 kernel: kvm [1]: HYP mode not available
Mar 21 12:42:16.903903 kernel: Initialise system trusted keyrings
Mar 21 12:42:16.903910 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 12:42:16.903917 kernel: Key type asymmetric registered
Mar 21 12:42:16.903923 kernel: Asymmetric key parser 'x509' registered
Mar 21 12:42:16.903930 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 21 12:42:16.903938 kernel: io scheduler mq-deadline registered
Mar 21 12:42:16.903945 kernel: io scheduler kyber registered
Mar 21 12:42:16.903952 kernel: io scheduler bfq registered
Mar 21 12:42:16.903959 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 21 12:42:16.903966 kernel: ACPI: button: Power Button [PWRB]
Mar 21 12:42:16.903973 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 21 12:42:16.904047 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 21 12:42:16.904057 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 12:42:16.904064 kernel: thunder_xcv, ver 1.0
Mar 21 12:42:16.904073 kernel: thunder_bgx, ver 1.0
Mar 21 12:42:16.904080 kernel: nicpf, ver 1.0
Mar 21 12:42:16.904086 kernel: nicvf, ver 1.0
Mar 21 12:42:16.904160 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 21 12:42:16.904225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-21T12:42:16 UTC (1742560936)
Mar 21 12:42:16.904234 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 21 12:42:16.904241 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 21 12:42:16.904248 kernel: watchdog: Delayed init of the lockup detector failed: -19
Mar 21 12:42:16.904257 kernel: watchdog: Hard watchdog permanently disabled
Mar 21 12:42:16.904264 kernel: NET: Registered PF_INET6 protocol family
Mar 21 12:42:16.904271 kernel: Segment Routing with IPv6
Mar 21 12:42:16.904277 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 12:42:16.904284 kernel: NET: Registered PF_PACKET protocol family
Mar 21 12:42:16.904291 kernel: Key type dns_resolver registered
Mar 21 12:42:16.904298 kernel: registered taskstats version 1
Mar 21 12:42:16.904304 kernel: Loading compiled-in X.509 certificates
Mar 21 12:42:16.904311 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 5eb113f0b3321dedaccf2566eff1e4f54032526e'
Mar 21 12:42:16.904319 kernel: Key type .fscrypt registered
Mar 21 12:42:16.904326 kernel: Key type fscrypt-provisioning registered
Mar 21 12:42:16.904333 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 12:42:16.904340 kernel: ima: Allocated hash algorithm: sha1
Mar 21 12:42:16.904346 kernel: ima: No architecture policies found
Mar 21 12:42:16.904353 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 21 12:42:16.904360 kernel: clk: Disabling unused clocks
Mar 21 12:42:16.904367 kernel: Freeing unused kernel memory: 38464K
Mar 21 12:42:16.904383 kernel: Run /init as init process
Mar 21 12:42:16.904392 kernel: with arguments:
Mar 21 12:42:16.904399 kernel: /init
Mar 21 12:42:16.904405 kernel: with environment:
Mar 21 12:42:16.904412 kernel: HOME=/
Mar 21 12:42:16.904419 kernel: TERM=linux
Mar 21 12:42:16.904425 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 21 12:42:16.904433 systemd[1]: Successfully made /usr/ read-only.
Mar 21 12:42:16.904442 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:42:16.904452 systemd[1]: Detected virtualization kvm.
Mar 21 12:42:16.904459 systemd[1]: Detected architecture arm64.
Mar 21 12:42:16.904466 systemd[1]: Running in initrd.
Mar 21 12:42:16.904473 systemd[1]: No hostname configured, using default hostname.
Mar 21 12:42:16.904481 systemd[1]: Hostname set to .
Mar 21 12:42:16.904488 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:42:16.904495 systemd[1]: Queued start job for default target initrd.target.
Mar 21 12:42:16.904502 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:42:16.904511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:42:16.904519 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 21 12:42:16.904527 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:42:16.904534 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 21 12:42:16.904543 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 21 12:42:16.904551 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 21 12:42:16.904560 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 21 12:42:16.904567 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:42:16.904575 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:42:16.904582 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:42:16.904589 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:42:16.904597 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:42:16.904604 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:42:16.904611 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:42:16.904619 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:42:16.904627 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 21 12:42:16.904635 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 21 12:42:16.904642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:42:16.904650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:42:16.904657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:42:16.904664 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:42:16.904672 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 21 12:42:16.904679 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:42:16.904688 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 21 12:42:16.904695 systemd[1]: Starting systemd-fsck-usr.service...
Mar 21 12:42:16.904702 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:42:16.904710 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:42:16.904717 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:42:16.904725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:42:16.904732 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 21 12:42:16.904741 systemd[1]: Finished systemd-fsck-usr.service.
Mar 21 12:42:16.904749 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:42:16.904756 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:42:16.904779 systemd-journald[237]: Collecting audit messages is disabled.
Mar 21 12:42:16.904799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 21 12:42:16.904806 kernel: Bridge firewalling registered
Mar 21 12:42:16.904814 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:42:16.904821 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:42:16.904829 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:42:16.904837 systemd-journald[237]: Journal started
Mar 21 12:42:16.904855 systemd-journald[237]: Runtime Journal (/run/log/journal/6aff1540c9d5433597866044f032b1f5) is 5.9M, max 47.3M, 41.4M free.
Mar 21 12:42:16.885127 systemd-modules-load[238]: Inserted module 'overlay'
Mar 21 12:42:16.900205 systemd-modules-load[238]: Inserted module 'br_netfilter'
Mar 21 12:42:16.908897 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:42:16.911478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:42:16.912716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:42:16.915512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:42:16.922771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:42:16.927477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 21 12:42:16.929510 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:42:16.931340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:42:16.934090 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:42:16.937471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:42:16.940456 dracut-cmdline[272]: dracut-dracut-053
Mar 21 12:42:16.942920 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=93cb17f03b776356c0810b716fff0c7c2012572bbe395c702f6873d17674684f
Mar 21 12:42:16.978202 systemd-resolved[281]: Positive Trust Anchors:
Mar 21 12:42:16.978217 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:42:16.978247 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:42:16.983069 systemd-resolved[281]: Defaulting to hostname 'linux'.
Mar 21 12:42:16.984149 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:42:16.985763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:42:17.017401 kernel: SCSI subsystem initialized
Mar 21 12:42:17.021387 kernel: Loading iSCSI transport class v2.0-870.
Mar 21 12:42:17.028393 kernel: iscsi: registered transport (tcp)
Mar 21 12:42:17.041402 kernel: iscsi: registered transport (qla4xxx)
Mar 21 12:42:17.041417 kernel: QLogic iSCSI HBA Driver
Mar 21 12:42:17.082429 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:42:17.084432 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 21 12:42:17.112637 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 21 12:42:17.112665 kernel: device-mapper: uevent: version 1.0.3
Mar 21 12:42:17.115393 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 21 12:42:17.160400 kernel: raid6: neonx8 gen() 15766 MB/s
Mar 21 12:42:17.177387 kernel: raid6: neonx4 gen() 15795 MB/s
Mar 21 12:42:17.194388 kernel: raid6: neonx2 gen() 13182 MB/s
Mar 21 12:42:17.211395 kernel: raid6: neonx1 gen() 10438 MB/s
Mar 21 12:42:17.228404 kernel: raid6: int64x8 gen() 6786 MB/s
Mar 21 12:42:17.245400 kernel: raid6: int64x4 gen() 7344 MB/s
Mar 21 12:42:17.262399 kernel: raid6: int64x2 gen() 6108 MB/s
Mar 21 12:42:17.279397 kernel: raid6: int64x1 gen() 5058 MB/s
Mar 21 12:42:17.279422 kernel: raid6: using algorithm neonx4 gen() 15795 MB/s
Mar 21 12:42:17.296404 kernel: raid6: .... xor() 12381 MB/s, rmw enabled
Mar 21 12:42:17.296429 kernel: raid6: using neon recovery algorithm
Mar 21 12:42:17.301697 kernel: xor: measuring software checksum speed
Mar 21 12:42:17.301724 kernel: 8regs : 21636 MB/sec
Mar 21 12:42:17.301742 kernel: 32regs : 21704 MB/sec
Mar 21 12:42:17.302610 kernel: arm64_neon : 27917 MB/sec
Mar 21 12:42:17.302628 kernel: xor: using function: arm64_neon (27917 MB/sec)
Mar 21 12:42:17.351396 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 21 12:42:17.364433 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:42:17.367074 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:42:17.394129 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Mar 21 12:42:17.397803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:42:17.401399 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 21 12:42:17.426493 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Mar 21 12:42:17.451198 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 21 12:42:17.452770 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 21 12:42:17.519016 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:42:17.521721 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 21 12:42:17.542417 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 21 12:42:17.543767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 21 12:42:17.544842 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 21 12:42:17.546650 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 21 12:42:17.550558 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 21 12:42:17.566401 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Mar 21 12:42:17.572916 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 21 12:42:17.573019 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 21 12:42:17.573040 kernel: GPT:9289727 != 19775487 Mar 21 12:42:17.573050 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 21 12:42:17.573059 kernel: GPT:9289727 != 19775487 Mar 21 12:42:17.573067 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 21 12:42:17.573076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:42:17.566509 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 21 12:42:17.572704 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 21 12:42:17.572812 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:42:17.575165 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:42:17.576545 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 21 12:42:17.576677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:42:17.579604 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:42:17.584090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:42:17.599388 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (511) Mar 21 12:42:17.600307 kernel: BTRFS: device fsid bdcda679-e2cc-43ec-88ed-d0a5c8807e76 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (509) Mar 21 12:42:17.607867 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 21 12:42:17.608962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:42:17.621646 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 21 12:42:17.633206 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 21 12:42:17.639111 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 21 12:42:17.640010 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 21 12:42:17.642549 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 21 12:42:17.644867 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 21 12:42:17.657960 disk-uuid[549]: Primary Header is updated. 
Mar 21 12:42:17.657960 disk-uuid[549]: Secondary Entries is updated. Mar 21 12:42:17.657960 disk-uuid[549]: Secondary Header is updated. Mar 21 12:42:17.662195 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:42:17.669513 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 21 12:42:18.677410 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 21 12:42:18.677696 disk-uuid[554]: The operation has completed successfully. Mar 21 12:42:18.702698 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 21 12:42:18.702799 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 21 12:42:18.726982 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 21 12:42:18.741341 sh[570]: Success Mar 21 12:42:18.754390 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 21 12:42:18.782385 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 21 12:42:18.784852 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 21 12:42:18.800798 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 21 12:42:18.805863 kernel: BTRFS info (device dm-0): first mount of filesystem bdcda679-e2cc-43ec-88ed-d0a5c8807e76 Mar 21 12:42:18.805899 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:42:18.805909 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 21 12:42:18.806631 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 21 12:42:18.807615 kernel: BTRFS info (device dm-0): using free space tree Mar 21 12:42:18.810800 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 21 12:42:18.811878 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. 
Mar 21 12:42:18.812596 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 21 12:42:18.815298 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 21 12:42:18.835931 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:42:18.835965 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:42:18.835976 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:42:18.839723 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:42:18.843445 kernel: BTRFS info (device vda6): last unmount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:42:18.845899 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 21 12:42:18.847530 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 21 12:42:18.919103 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 21 12:42:18.922174 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 21 12:42:18.953900 ignition[658]: Ignition 2.20.0 Mar 21 12:42:18.953909 ignition[658]: Stage: fetch-offline Mar 21 12:42:18.953942 ignition[658]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:18.953950 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:18.954110 ignition[658]: parsed url from cmdline: "" Mar 21 12:42:18.954113 ignition[658]: no config URL provided Mar 21 12:42:18.954118 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Mar 21 12:42:18.954126 ignition[658]: no config at "/usr/lib/ignition/user.ign" Mar 21 12:42:18.954149 ignition[658]: op(1): [started] loading QEMU firmware config module Mar 21 12:42:18.954153 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 21 12:42:18.960515 ignition[658]: op(1): [finished] loading QEMU firmware config module Mar 21 12:42:18.970060 systemd-networkd[761]: lo: Link UP Mar 21 12:42:18.970068 systemd-networkd[761]: lo: Gained carrier Mar 21 12:42:18.970874 systemd-networkd[761]: Enumeration completed Mar 21 12:42:18.970977 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 21 12:42:18.971993 systemd[1]: Reached target network.target - Network. Mar 21 12:42:18.973759 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:42:18.973763 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 21 12:42:18.974473 systemd-networkd[761]: eth0: Link UP Mar 21 12:42:18.974476 systemd-networkd[761]: eth0: Gained carrier Mar 21 12:42:18.974482 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 21 12:42:18.997411 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 21 12:42:19.013784 ignition[658]: parsing config with SHA512: f60ee4441a836278628091d4622002e362beb924ccd15a7936d6ad4546d7d1be3c91abbb4980852ced960140c8c372207362a00645a577399e5a37e0ed609c9b Mar 21 12:42:19.018230 unknown[658]: fetched base config from "system" Mar 21 12:42:19.018246 unknown[658]: fetched user config from "qemu" Mar 21 12:42:19.018636 ignition[658]: fetch-offline: fetch-offline passed Mar 21 12:42:19.018712 ignition[658]: Ignition finished successfully Mar 21 12:42:19.021435 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 21 12:42:19.023273 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 21 12:42:19.025994 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 21 12:42:19.049326 ignition[769]: Ignition 2.20.0 Mar 21 12:42:19.049337 ignition[769]: Stage: kargs Mar 21 12:42:19.049507 ignition[769]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:19.049517 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:19.052337 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 21 12:42:19.050330 ignition[769]: kargs: kargs passed Mar 21 12:42:19.050399 ignition[769]: Ignition finished successfully Mar 21 12:42:19.054657 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 21 12:42:19.077229 ignition[778]: Ignition 2.20.0 Mar 21 12:42:19.077239 ignition[778]: Stage: disks Mar 21 12:42:19.077416 ignition[778]: no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:19.077427 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:19.078205 ignition[778]: disks: disks passed Mar 21 12:42:19.079562 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Mar 21 12:42:19.078248 ignition[778]: Ignition finished successfully Mar 21 12:42:19.081828 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 21 12:42:19.083116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 21 12:42:19.084912 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 21 12:42:19.086528 systemd[1]: Reached target sysinit.target - System Initialization. Mar 21 12:42:19.088145 systemd[1]: Reached target basic.target - Basic System. Mar 21 12:42:19.090408 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 21 12:42:19.114524 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 21 12:42:19.118576 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 21 12:42:19.121395 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 21 12:42:19.176397 kernel: EXT4-fs (vda9): mounted filesystem 3004295c-1fab-4723-a953-2dc6fc131037 r/w with ordered data mode. Quota mode: none. Mar 21 12:42:19.176535 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 21 12:42:19.177533 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 21 12:42:19.179532 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:42:19.180931 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 21 12:42:19.181723 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 21 12:42:19.181762 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 21 12:42:19.181785 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 21 12:42:19.189128 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 21 12:42:19.191299 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 21 12:42:19.193660 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (797) Mar 21 12:42:19.195793 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:42:19.195821 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:42:19.195831 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:42:19.198413 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:42:19.199180 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 21 12:42:19.232599 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Mar 21 12:42:19.236691 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Mar 21 12:42:19.239931 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Mar 21 12:42:19.243356 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Mar 21 12:42:19.311127 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 21 12:42:19.313138 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 21 12:42:19.314588 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 21 12:42:19.331402 kernel: BTRFS info (device vda6): last unmount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:42:19.352616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Mar 21 12:42:19.361893 ignition[913]: INFO : Ignition 2.20.0 Mar 21 12:42:19.361893 ignition[913]: INFO : Stage: mount Mar 21 12:42:19.363110 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:19.363110 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:19.363110 ignition[913]: INFO : mount: mount passed Mar 21 12:42:19.363110 ignition[913]: INFO : Ignition finished successfully Mar 21 12:42:19.365739 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 21 12:42:19.367480 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 21 12:42:19.945327 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 21 12:42:19.946978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 21 12:42:19.965710 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (927) Mar 21 12:42:19.965744 kernel: BTRFS info (device vda6): first mount of filesystem fea78075-4b56-496a-88c9-8f4cfa7493bf Mar 21 12:42:19.965754 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 21 12:42:19.966835 kernel: BTRFS info (device vda6): using free space tree Mar 21 12:42:19.969397 kernel: BTRFS info (device vda6): auto enabling async discard Mar 21 12:42:19.970078 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 21 12:42:20.000387 ignition[945]: INFO : Ignition 2.20.0 Mar 21 12:42:20.001269 ignition[945]: INFO : Stage: files Mar 21 12:42:20.001269 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:20.001269 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:20.004273 ignition[945]: DEBUG : files: compiled without relabeling support, skipping Mar 21 12:42:20.004273 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 21 12:42:20.004273 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 21 12:42:20.006493 systemd-networkd[761]: eth0: Gained IPv6LL Mar 21 12:42:20.009086 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 21 12:42:20.009086 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 21 12:42:20.011811 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 21 12:42:20.011811 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 21 12:42:20.011811 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 21 12:42:20.009179 unknown[945]: wrote ssh authorized keys file for user: core Mar 21 12:42:20.132502 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 21 12:42:20.475002 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 21 12:42:20.476573 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 21 12:42:20.491057 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 21 12:42:20.491057 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:42:20.491057 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:42:20.491057 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:42:20.491057 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: 
op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 21 12:42:20.811187 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 21 12:42:21.023656 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 21 12:42:21.023656 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 21 12:42:21.027420 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 21 12:42:21.042465 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 21 12:42:21.045412 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 21 12:42:21.046964 ignition[945]: INFO : files: op(f): 
[finished] setting preset to disabled for "coreos-metadata.service" Mar 21 12:42:21.046964 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 21 12:42:21.046964 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 21 12:42:21.046964 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 21 12:42:21.046964 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 21 12:42:21.046964 ignition[945]: INFO : files: files passed Mar 21 12:42:21.046964 ignition[945]: INFO : Ignition finished successfully Mar 21 12:42:21.048344 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 21 12:42:21.050085 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 21 12:42:21.052523 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 21 12:42:21.066444 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 21 12:42:21.066523 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 21 12:42:21.070081 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Mar 21 12:42:21.072272 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:42:21.072272 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:42:21.075213 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 21 12:42:21.075352 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 21 12:42:21.077767 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 21 12:42:21.079948 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 21 12:42:21.119952 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 21 12:42:21.120060 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 21 12:42:21.121626 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 21 12:42:21.122937 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 21 12:42:21.124264 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 21 12:42:21.124943 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 21 12:42:21.149537 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 21 12:42:21.151514 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 21 12:42:21.169765 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 21 12:42:21.170680 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 21 12:42:21.172170 systemd[1]: Stopped target timers.target - Timer Units. Mar 21 12:42:21.173472 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 21 12:42:21.173581 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 21 12:42:21.175369 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 21 12:42:21.176951 systemd[1]: Stopped target basic.target - Basic System. Mar 21 12:42:21.178109 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 21 12:42:21.179498 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 21 12:42:21.180897 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 21 12:42:21.182291 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Mar 21 12:42:21.183738 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 21 12:42:21.185174 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 21 12:42:21.186752 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 21 12:42:21.188017 systemd[1]: Stopped target swap.target - Swaps. Mar 21 12:42:21.189106 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 21 12:42:21.189214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 21 12:42:21.190914 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 21 12:42:21.192277 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 21 12:42:21.193856 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 21 12:42:21.193946 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 21 12:42:21.195353 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 21 12:42:21.195479 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 21 12:42:21.197506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 21 12:42:21.197623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 21 12:42:21.198990 systemd[1]: Stopped target paths.target - Path Units. Mar 21 12:42:21.200221 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 21 12:42:21.200955 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 21 12:42:21.202585 systemd[1]: Stopped target slices.target - Slice Units. Mar 21 12:42:21.203718 systemd[1]: Stopped target sockets.target - Socket Units. Mar 21 12:42:21.204984 systemd[1]: iscsid.socket: Deactivated successfully. Mar 21 12:42:21.205071 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Mar 21 12:42:21.206537 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 21 12:42:21.206611 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 21 12:42:21.207737 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 21 12:42:21.207841 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 21 12:42:21.209152 systemd[1]: ignition-files.service: Deactivated successfully. Mar 21 12:42:21.209253 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 21 12:42:21.210928 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 21 12:42:21.212041 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 21 12:42:21.212161 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 21 12:42:21.214141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 21 12:42:21.215557 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 21 12:42:21.215674 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:42:21.217000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 21 12:42:21.217108 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 21 12:42:21.221569 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 21 12:42:21.221643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 21 12:42:21.230248 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 21 12:42:21.231789 ignition[1000]: INFO : Ignition 2.20.0 Mar 21 12:42:21.231789 ignition[1000]: INFO : Stage: umount Mar 21 12:42:21.233153 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 21 12:42:21.233153 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 21 12:42:21.233153 ignition[1000]: INFO : umount: umount passed Mar 21 12:42:21.233153 ignition[1000]: INFO : Ignition finished successfully Mar 21 12:42:21.234518 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 21 12:42:21.234613 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 21 12:42:21.235539 systemd[1]: Stopped target network.target - Network. Mar 21 12:42:21.236628 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 21 12:42:21.236684 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 21 12:42:21.237909 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 21 12:42:21.237947 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 21 12:42:21.239164 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 21 12:42:21.239209 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 21 12:42:21.240471 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 21 12:42:21.240511 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 21 12:42:21.241760 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 21 12:42:21.243014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 21 12:42:21.248338 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 21 12:42:21.248477 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 21 12:42:21.252035 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 21 12:42:21.252264 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Mar 21 12:42:21.252353 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 21 12:42:21.254913 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 21 12:42:21.255528 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 21 12:42:21.255582 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 21 12:42:21.257451 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 21 12:42:21.258729 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 21 12:42:21.258785 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 21 12:42:21.260189 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 21 12:42:21.260228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 21 12:42:21.261651 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 21 12:42:21.261693 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 21 12:42:21.263146 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 21 12:42:21.263184 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 21 12:42:21.265185 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 21 12:42:21.267721 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 21 12:42:21.267781 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 21 12:42:21.274136 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 21 12:42:21.274294 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 21 12:42:21.276335 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 21 12:42:21.276448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Mar 21 12:42:21.277584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 12:42:21.277617 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:42:21.278945 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 12:42:21.278989 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:42:21.283147 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 12:42:21.283205 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:42:21.285271 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:42:21.285320 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:42:21.288291 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 12:42:21.289201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 12:42:21.289273 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:42:21.291140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:42:21.291185 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:42:21.294422 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 21 12:42:21.294477 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:42:21.295626 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 21 12:42:21.295710 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 21 12:42:21.296833 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 12:42:21.296899 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 12:42:21.298771 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 21 12:42:21.298826 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 21 12:42:21.299942 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 12:42:21.301408 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 12:42:21.302985 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 12:42:21.304576 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 12:42:21.318669 systemd[1]: Switching root.
Mar 21 12:42:21.341141 systemd-journald[237]: Journal stopped
Mar 21 12:42:22.048876 systemd-journald[237]: Received SIGTERM from PID 1 (systemd).
Mar 21 12:42:22.048935 kernel: SELinux: policy capability network_peer_controls=1
Mar 21 12:42:22.048947 kernel: SELinux: policy capability open_perms=1
Mar 21 12:42:22.048956 kernel: SELinux: policy capability extended_socket_class=1
Mar 21 12:42:22.048966 kernel: SELinux: policy capability always_check_network=0
Mar 21 12:42:22.048975 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 21 12:42:22.048984 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 21 12:42:22.048994 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 21 12:42:22.049016 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 21 12:42:22.049027 kernel: audit: type=1403 audit(1742560941.476:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 12:42:22.049037 systemd[1]: Successfully loaded SELinux policy in 31.389ms.
Mar 21 12:42:22.049056 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.369ms.
Mar 21 12:42:22.049067 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:42:22.049079 systemd[1]: Detected virtualization kvm.
Mar 21 12:42:22.049089 systemd[1]: Detected architecture arm64.
Mar 21 12:42:22.049099 systemd[1]: Detected first boot.
Mar 21 12:42:22.049111 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:42:22.049121 zram_generator::config[1047]: No configuration found.
Mar 21 12:42:22.049132 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 12:42:22.049141 systemd[1]: Populated /etc with preset unit settings.
Mar 21 12:42:22.049154 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 12:42:22.049165 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 12:42:22.049175 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 12:42:22.049185 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 12:42:22.049195 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 12:42:22.049207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 12:42:22.049217 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 12:42:22.049227 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 12:42:22.049237 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 12:42:22.049247 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 12:42:22.049258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 12:42:22.049268 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 12:42:22.049278 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:42:22.049290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:42:22.049301 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 12:42:22.049312 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 12:42:22.049322 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 12:42:22.049332 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:42:22.049342 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Mar 21 12:42:22.049352 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:42:22.049362 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 12:42:22.049400 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 12:42:22.049411 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:42:22.049422 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 12:42:22.049432 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:42:22.049442 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:42:22.049452 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:42:22.049462 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:42:22.049472 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 12:42:22.049482 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 12:42:22.049499 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 12:42:22.049510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:42:22.049520 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:42:22.049530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:42:22.049540 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 12:42:22.049552 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 12:42:22.049563 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 12:42:22.049573 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 12:42:22.049583 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 12:42:22.049594 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 12:42:22.049604 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 12:42:22.049615 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 12:42:22.049625 systemd[1]: Reached target machines.target - Containers.
Mar 21 12:42:22.049635 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 12:42:22.049645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:42:22.049656 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:42:22.049666 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 12:42:22.049678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:42:22.049688 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:42:22.049698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:42:22.049708 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 12:42:22.049718 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:42:22.049729 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 12:42:22.049739 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 12:42:22.049748 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 12:42:22.049758 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 12:42:22.049771 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 12:42:22.049782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:42:22.049792 kernel: fuse: init (API version 7.39)
Mar 21 12:42:22.049801 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:42:22.049811 kernel: loop: module loaded
Mar 21 12:42:22.049820 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:42:22.049830 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 12:42:22.049840 kernel: ACPI: bus type drm_connector registered
Mar 21 12:42:22.049850 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 12:42:22.049862 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 12:42:22.049877 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:42:22.049887 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 12:42:22.049897 systemd[1]: Stopped verity-setup.service.
Mar 21 12:42:22.049926 systemd-journald[1115]: Collecting audit messages is disabled.
Mar 21 12:42:22.049949 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 12:42:22.049959 systemd-journald[1115]: Journal started
Mar 21 12:42:22.049979 systemd-journald[1115]: Runtime Journal (/run/log/journal/6aff1540c9d5433597866044f032b1f5) is 5.9M, max 47.3M, 41.4M free.
Mar 21 12:42:21.862767 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 12:42:21.875222 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 12:42:21.875587 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 12:42:22.052010 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:42:22.053232 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 12:42:22.054229 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 12:42:22.055118 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 12:42:22.056080 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 12:42:22.056991 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 12:42:22.059400 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 12:42:22.060519 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:42:22.061696 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 12:42:22.061862 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 12:42:22.062995 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:42:22.063174 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:42:22.064272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:42:22.064445 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:42:22.065631 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:42:22.065799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:42:22.066986 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 12:42:22.067162 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 12:42:22.068233 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:42:22.068405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:42:22.069659 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:42:22.070735 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 12:42:22.071898 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 12:42:22.073178 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 12:42:22.085482 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 12:42:22.087559 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 12:42:22.089292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 12:42:22.090215 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 12:42:22.090244 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:42:22.091863 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 12:42:22.097533 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 12:42:22.099289 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 12:42:22.100244 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:42:22.101439 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 12:42:22.103089 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 12:42:22.104070 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:42:22.105212 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 12:42:22.106654 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:42:22.107573 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:42:22.112298 systemd-journald[1115]: Time spent on flushing to /var/log/journal/6aff1540c9d5433597866044f032b1f5 is 18.658ms for 866 entries.
Mar 21 12:42:22.112298 systemd-journald[1115]: System Journal (/var/log/journal/6aff1540c9d5433597866044f032b1f5) is 8M, max 195.6M, 187.6M free.
Mar 21 12:42:22.138679 systemd-journald[1115]: Received client request to flush runtime journal.
Mar 21 12:42:22.138720 kernel: loop0: detected capacity change from 0 to 194096
Mar 21 12:42:22.112356 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 21 12:42:22.115489 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 21 12:42:22.119404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:42:22.120469 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 21 12:42:22.121492 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 21 12:42:22.122614 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 21 12:42:22.123851 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 21 12:42:22.126506 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 21 12:42:22.132890 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 21 12:42:22.137509 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 21 12:42:22.145403 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 21 12:42:22.146406 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 21 12:42:22.162335 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 21 12:42:22.163728 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:42:22.165270 udevadm[1175]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 21 12:42:22.171013 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 21 12:42:22.173304 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:42:22.185403 kernel: loop1: detected capacity change from 0 to 103832
Mar 21 12:42:22.207211 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 21 12:42:22.207511 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Mar 21 12:42:22.211745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:42:22.225849 kernel: loop2: detected capacity change from 0 to 126448
Mar 21 12:42:22.270643 kernel: loop3: detected capacity change from 0 to 194096
Mar 21 12:42:22.276567 kernel: loop4: detected capacity change from 0 to 103832
Mar 21 12:42:22.281394 kernel: loop5: detected capacity change from 0 to 126448
Mar 21 12:42:22.287183 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 21 12:42:22.287586 (sd-merge)[1191]: Merged extensions into '/usr'.
Mar 21 12:42:22.290685 systemd[1]: Reload requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 21 12:42:22.290704 systemd[1]: Reloading...
Mar 21 12:42:22.350404 zram_generator::config[1218]: No configuration found.
Mar 21 12:42:22.404958 ldconfig[1160]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 21 12:42:22.440740 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:42:22.489653 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 21 12:42:22.490101 systemd[1]: Reloading finished in 199 ms.
Mar 21 12:42:22.509986 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 21 12:42:22.511535 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 21 12:42:22.528722 systemd[1]: Starting ensure-sysext.service...
Mar 21 12:42:22.530486 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:42:22.541019 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)...
Mar 21 12:42:22.541034 systemd[1]: Reloading...
Mar 21 12:42:22.551163 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 21 12:42:22.551361 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 21 12:42:22.551985 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 21 12:42:22.552199 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 21 12:42:22.552250 systemd-tmpfiles[1254]: ACLs are not supported, ignoring.
Mar 21 12:42:22.555178 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:42:22.555286 systemd-tmpfiles[1254]: Skipping /boot
Mar 21 12:42:22.564113 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:42:22.564212 systemd-tmpfiles[1254]: Skipping /boot
Mar 21 12:42:22.583407 zram_generator::config[1280]: No configuration found.
Mar 21 12:42:22.670239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:42:22.719414 systemd[1]: Reloading finished in 178 ms.
Mar 21 12:42:22.731835 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 21 12:42:22.753485 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:42:22.762841 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:42:22.765247 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 21 12:42:22.766501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:42:22.776135 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:42:22.778212 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:42:22.782606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:42:22.783869 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:42:22.783983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:42:22.785303 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 21 12:42:22.788240 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:42:22.791760 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:42:22.795191 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 21 12:42:22.797913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:42:22.799489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:42:22.803612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:42:22.803771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:42:22.805727 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:42:22.807380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:42:22.809048 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 21 12:42:22.817901 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:42:22.819465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:42:22.825638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:42:22.828186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:42:22.829403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:42:22.829561 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:42:22.830657 systemd-udevd[1332]: Using default interface naming scheme 'v255'.
Mar 21 12:42:22.834637 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 21 12:42:22.838917 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 21 12:42:22.841943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:42:22.842111 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:42:22.843785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:42:22.843950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:42:22.852661 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:42:22.854439 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:42:22.856036 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:42:22.859343 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 21 12:42:22.860870 augenrules[1357]: No rules
Mar 21 12:42:22.862250 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 21 12:42:22.863987 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:42:22.864175 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:42:22.868336 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 21 12:42:22.885399 systemd[1]: Finished ensure-sysext.service.
Mar 21 12:42:22.896797 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:42:22.898549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:42:22.900908 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:42:22.913479 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1364)
Mar 21 12:42:22.917634 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:42:22.922513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:42:22.925534 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:42:22.927195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:42:22.927241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:42:22.929726 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:42:22.932315 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 21 12:42:22.933938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 21 12:42:22.935258 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 21 12:42:22.937980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:42:22.939412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:42:22.940484 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:42:22.940629 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:42:22.941653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:42:22.942035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:42:22.944018 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:42:22.944219 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:42:22.947538 augenrules[1388]: /sbin/augenrules: No change
Mar 21 12:42:22.950148 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Mar 21 12:42:22.963194 augenrules[1427]: No rules
Mar 21 12:42:22.967493 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:42:22.967709 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:42:22.972470 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:42:22.974721 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 21 12:42:22.975545 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:42:22.975606 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:42:22.998419 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 21 12:42:23.033357 systemd-resolved[1331]: Positive Trust Anchors:
Mar 21 12:42:23.035271 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:42:23.035310 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:42:23.036397 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 21 12:42:23.037631 systemd[1]: Reached target time-set.target - System Time Set.
Mar 21 12:42:23.041619 systemd-networkd[1406]: lo: Link UP
Mar 21 12:42:23.041863 systemd-networkd[1406]: lo: Gained carrier
Mar 21 12:42:23.042901 systemd-networkd[1406]: Enumeration completed
Mar 21 12:42:23.043142 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:42:23.043619 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:42:23.043688 systemd-networkd[1406]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:42:23.044273 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Mar 21 12:42:23.045715 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 21 12:42:23.050283 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 21 12:42:23.051099 systemd-networkd[1406]: eth0: Link UP Mar 21 12:42:23.051102 systemd-networkd[1406]: eth0: Gained carrier Mar 21 12:42:23.051116 systemd-networkd[1406]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:42:23.061982 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 21 12:42:23.064922 systemd[1]: Reached target network.target - Network. Mar 21 12:42:23.065777 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 21 12:42:23.068124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:42:23.069445 systemd-networkd[1406]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 21 12:42:23.070580 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Mar 21 12:42:23.072238 systemd-timesyncd[1407]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 21 12:42:23.072294 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-03-21 12:42:22.820659 UTC. Mar 21 12:42:23.072466 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 21 12:42:23.077805 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 21 12:42:23.080484 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 21 12:42:23.105944 lvm[1446]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 12:42:23.107715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:42:23.137684 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 21 12:42:23.138792 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 21 12:42:23.141472 systemd[1]: Reached target sysinit.target - System Initialization. 
Mar 21 12:42:23.142268 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 21 12:42:23.143178 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 21 12:42:23.144260 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 21 12:42:23.145152 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 21 12:42:23.146175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 21 12:42:23.147079 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 21 12:42:23.147112 systemd[1]: Reached target paths.target - Path Units. Mar 21 12:42:23.147762 systemd[1]: Reached target timers.target - Timer Units. Mar 21 12:42:23.149238 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 21 12:42:23.151267 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 21 12:42:23.154191 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 21 12:42:23.155302 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 21 12:42:23.156266 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 21 12:42:23.165243 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 21 12:42:23.166416 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 21 12:42:23.168215 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 21 12:42:23.169552 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 21 12:42:23.170413 systemd[1]: Reached target sockets.target - Socket Units. Mar 21 12:42:23.171104 systemd[1]: Reached target basic.target - Basic System. 
Mar 21 12:42:23.171792 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 21 12:42:23.171822 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 21 12:42:23.172710 systemd[1]: Starting containerd.service - containerd container runtime... Mar 21 12:42:23.174445 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 21 12:42:23.177517 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 12:42:23.178476 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 21 12:42:23.180076 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 21 12:42:23.180855 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 21 12:42:23.182653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 21 12:42:23.184285 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 21 12:42:23.186358 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 21 12:42:23.188615 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 21 12:42:23.198503 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 21 12:42:23.200827 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 21 12:42:23.201239 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Mar 21 12:42:23.201954 jq[1457]: false Mar 21 12:42:23.202185 extend-filesystems[1458]: Found loop3 Mar 21 12:42:23.202185 extend-filesystems[1458]: Found loop4 Mar 21 12:42:23.202185 extend-filesystems[1458]: Found loop5 Mar 21 12:42:23.202185 extend-filesystems[1458]: Found vda Mar 21 12:42:23.202185 extend-filesystems[1458]: Found vda1 Mar 21 12:42:23.202185 extend-filesystems[1458]: Found vda2 Mar 21 12:42:23.202185 extend-filesystems[1458]: Found vda3 Mar 21 12:42:23.213478 extend-filesystems[1458]: Found usr Mar 21 12:42:23.213478 extend-filesystems[1458]: Found vda4 Mar 21 12:42:23.213478 extend-filesystems[1458]: Found vda6 Mar 21 12:42:23.213478 extend-filesystems[1458]: Found vda7 Mar 21 12:42:23.213478 extend-filesystems[1458]: Found vda9 Mar 21 12:42:23.213478 extend-filesystems[1458]: Checking size of /dev/vda9 Mar 21 12:42:23.252106 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 21 12:42:23.261814 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 21 12:42:23.261855 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1368) Mar 21 12:42:23.203541 systemd[1]: Starting update-engine.service - Update Engine... Mar 21 12:42:23.210473 dbus-daemon[1456]: [system] SELinux support is enabled Mar 21 12:42:23.265784 extend-filesystems[1458]: Resized partition /dev/vda9 Mar 21 12:42:23.212747 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 21 12:42:23.268642 extend-filesystems[1479]: resize2fs 1.47.2 (1-Jan-2025) Mar 21 12:42:23.268642 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 21 12:42:23.268642 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 21 12:42:23.268642 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 21 12:42:23.214402 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 21 12:42:23.283652 update_engine[1467]: I20250321 12:42:23.246084 1467 main.cc:92] Flatcar Update Engine starting Mar 21 12:42:23.283652 update_engine[1467]: I20250321 12:42:23.251411 1467 update_check_scheduler.cc:74] Next update check in 11m1s Mar 21 12:42:23.293527 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Mar 21 12:42:23.217647 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 21 12:42:23.299237 tar[1481]: linux-arm64/helm Mar 21 12:42:23.220265 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 21 12:42:23.220482 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 21 12:42:23.306636 jq[1476]: true Mar 21 12:42:23.221608 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 21 12:42:23.221780 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 21 12:42:23.306917 jq[1492]: true Mar 21 12:42:23.228148 systemd[1]: motdgen.service: Deactivated successfully. Mar 21 12:42:23.228365 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 21 12:42:23.237446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 21 12:42:23.237494 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 21 12:42:23.244578 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 21 12:42:23.244599 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 21 12:42:23.260773 systemd[1]: Started update-engine.service - Update Engine. 
Mar 21 12:42:23.262732 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 21 12:42:23.264417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 21 12:42:23.274667 (ntainerd)[1491]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 21 12:42:23.290634 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 21 12:42:23.292705 systemd-logind[1464]: Watching system buttons on /dev/input/event0 (Power Button) Mar 21 12:42:23.293808 systemd-logind[1464]: New seat seat0. Mar 21 12:42:23.301102 systemd[1]: Started systemd-logind.service - User Login Management. Mar 21 12:42:23.379389 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Mar 21 12:42:23.381554 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 21 12:42:23.383078 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Mar 21 12:42:23.385444 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 21 12:42:23.499485 containerd[1491]: time="2025-03-21T12:42:23Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 21 12:42:23.501746 containerd[1491]: time="2025-03-21T12:42:23.500721160Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 21 12:42:23.514357 containerd[1491]: time="2025-03-21T12:42:23.514319840Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.96µs" Mar 21 12:42:23.514562 containerd[1491]: time="2025-03-21T12:42:23.514537160Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 21 12:42:23.514687 containerd[1491]: time="2025-03-21T12:42:23.514667680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 21 12:42:23.515279 containerd[1491]: time="2025-03-21T12:42:23.515107200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 21 12:42:23.515279 containerd[1491]: time="2025-03-21T12:42:23.515187600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 21 12:42:23.515279 containerd[1491]: time="2025-03-21T12:42:23.515221920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:42:23.515762 containerd[1491]: time="2025-03-21T12:42:23.515614200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:42:23.515762 containerd[1491]: time="2025-03-21T12:42:23.515640760Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516763 containerd[1491]: time="2025-03-21T12:42:23.516361600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516763 containerd[1491]: time="2025-03-21T12:42:23.516410960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516763 containerd[1491]: time="2025-03-21T12:42:23.516480920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516763 containerd[1491]: time="2025-03-21T12:42:23.516495600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516763 containerd[1491]: time="2025-03-21T12:42:23.516600480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516910 containerd[1491]: time="2025-03-21T12:42:23.516789680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516910 containerd[1491]: time="2025-03-21T12:42:23.516822240Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:42:23.516910 containerd[1491]: time="2025-03-21T12:42:23.516832240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 21 12:42:23.519478 containerd[1491]: time="2025-03-21T12:42:23.518482200Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 21 12:42:23.519478 containerd[1491]: time="2025-03-21T12:42:23.518892240Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 21 12:42:23.519478 containerd[1491]: time="2025-03-21T12:42:23.519084480Z" level=info msg="metadata content store policy set" policy=shared Mar 21 12:42:23.523180 containerd[1491]: time="2025-03-21T12:42:23.523129320Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 21 12:42:23.523227 containerd[1491]: time="2025-03-21T12:42:23.523188560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 21 12:42:23.523227 containerd[1491]: time="2025-03-21T12:42:23.523204120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 21 12:42:23.523227 containerd[1491]: time="2025-03-21T12:42:23.523217080Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523229600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523241080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523252840Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523265200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523279040Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523289440Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 21 12:42:23.523297 containerd[1491]: time="2025-03-21T12:42:23.523298240Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 21 12:42:23.523425 containerd[1491]: time="2025-03-21T12:42:23.523310360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 21 12:42:23.523493 containerd[1491]: time="2025-03-21T12:42:23.523439360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 21 12:42:23.523493 containerd[1491]: time="2025-03-21T12:42:23.523469360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 21 12:42:23.523658 containerd[1491]: time="2025-03-21T12:42:23.523634320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 21 12:42:23.523738 containerd[1491]: time="2025-03-21T12:42:23.523659200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 21 12:42:23.523766 containerd[1491]: time="2025-03-21T12:42:23.523735440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 21 12:42:23.523766 containerd[1491]: time="2025-03-21T12:42:23.523750640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 21 12:42:23.523766 containerd[1491]: time="2025-03-21T12:42:23.523763200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 21 12:42:23.523868 containerd[1491]: time="2025-03-21T12:42:23.523835640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 21 
12:42:23.523941 containerd[1491]: time="2025-03-21T12:42:23.523870240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 21 12:42:23.523963 containerd[1491]: time="2025-03-21T12:42:23.523951920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 21 12:42:23.523986 containerd[1491]: time="2025-03-21T12:42:23.523965840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 21 12:42:23.524430 containerd[1491]: time="2025-03-21T12:42:23.524403840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 21 12:42:23.524467 containerd[1491]: time="2025-03-21T12:42:23.524434480Z" level=info msg="Start snapshots syncer" Mar 21 12:42:23.524607 containerd[1491]: time="2025-03-21T12:42:23.524552720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 21 12:42:23.525085 containerd[1491]: time="2025-03-21T12:42:23.525026680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 21 12:42:23.525186 containerd[1491]: time="2025-03-21T12:42:23.525111680Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 21 12:42:23.525318 containerd[1491]: time="2025-03-21T12:42:23.525277440Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 21 12:42:23.525544 containerd[1491]: time="2025-03-21T12:42:23.525520400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 21 12:42:23.525570 containerd[1491]: time="2025-03-21T12:42:23.525562480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 21 12:42:23.525830 containerd[1491]: time="2025-03-21T12:42:23.525575360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 21 12:42:23.525858 containerd[1491]: time="2025-03-21T12:42:23.525825160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 21 12:42:23.525858 containerd[1491]: time="2025-03-21T12:42:23.525843440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 21 12:42:23.525858 containerd[1491]: time="2025-03-21T12:42:23.525854720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 21 12:42:23.525906 containerd[1491]: time="2025-03-21T12:42:23.525887800Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 21 12:42:23.525923 containerd[1491]: time="2025-03-21T12:42:23.525916640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 21 12:42:23.525948 containerd[1491]: time="2025-03-21T12:42:23.525936320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 21 12:42:23.525966 containerd[1491]: time="2025-03-21T12:42:23.525957440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 21 12:42:23.526857 containerd[1491]: time="2025-03-21T12:42:23.526823160Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:42:23.526891 containerd[1491]: time="2025-03-21T12:42:23.526875280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:42:23.526891 containerd[1491]: time="2025-03-21T12:42:23.526886840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:42:23.526928 containerd[1491]: time="2025-03-21T12:42:23.526897960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:42:23.526928 containerd[1491]: time="2025-03-21T12:42:23.526907480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 21 12:42:23.527028 containerd[1491]: time="2025-03-21T12:42:23.526923880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 21 12:42:23.527028 containerd[1491]: time="2025-03-21T12:42:23.526991920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 21 12:42:23.527163 containerd[1491]: time="2025-03-21T12:42:23.527133880Z" level=info msg="runtime interface created" Mar 21 12:42:23.527163 containerd[1491]: time="2025-03-21T12:42:23.527150720Z" level=info msg="created NRI interface" Mar 21 12:42:23.527163 containerd[1491]: time="2025-03-21T12:42:23.527162320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 21 12:42:23.527229 containerd[1491]: time="2025-03-21T12:42:23.527181200Z" level=info msg="Connect containerd service" Mar 21 12:42:23.527297 containerd[1491]: time="2025-03-21T12:42:23.527270400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 21 12:42:23.530002 
containerd[1491]: time="2025-03-21T12:42:23.529522720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 21 12:42:23.615790 tar[1481]: linux-arm64/LICENSE Mar 21 12:42:23.615983 tar[1481]: linux-arm64/README.md Mar 21 12:42:23.634431 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 21 12:42:23.638000 containerd[1491]: time="2025-03-21T12:42:23.637953440Z" level=info msg="Start subscribing containerd event" Mar 21 12:42:23.638213 containerd[1491]: time="2025-03-21T12:42:23.638104680Z" level=info msg="Start recovering state" Mar 21 12:42:23.638341 containerd[1491]: time="2025-03-21T12:42:23.638323480Z" level=info msg="Start event monitor" Mar 21 12:42:23.638475 containerd[1491]: time="2025-03-21T12:42:23.638415760Z" level=info msg="Start cni network conf syncer for default" Mar 21 12:42:23.638475 containerd[1491]: time="2025-03-21T12:42:23.638429920Z" level=info msg="Start streaming server" Mar 21 12:42:23.638475 containerd[1491]: time="2025-03-21T12:42:23.638439640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 21 12:42:23.638475 containerd[1491]: time="2025-03-21T12:42:23.638447680Z" level=info msg="runtime interface starting up..." Mar 21 12:42:23.638475 containerd[1491]: time="2025-03-21T12:42:23.638454040Z" level=info msg="starting plugins..." Mar 21 12:42:23.638616 containerd[1491]: time="2025-03-21T12:42:23.638328120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 21 12:42:23.638616 containerd[1491]: time="2025-03-21T12:42:23.638533320Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 21 12:42:23.638824 containerd[1491]: time="2025-03-21T12:42:23.638702080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 21 12:42:23.639115 containerd[1491]: time="2025-03-21T12:42:23.639099360Z" level=info msg="containerd successfully booted in 0.140061s" Mar 21 12:42:23.641485 systemd[1]: Started containerd.service - containerd container runtime. Mar 21 12:42:24.229513 systemd-networkd[1406]: eth0: Gained IPv6LL Mar 21 12:42:24.231704 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 21 12:42:24.232979 systemd[1]: Reached target network-online.target - Network is Online. Mar 21 12:42:24.235079 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 21 12:42:24.237057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:24.251583 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 21 12:42:24.273489 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 21 12:42:24.275017 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 21 12:42:24.275215 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 21 12:42:24.277356 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 21 12:42:24.412178 sshd_keygen[1474]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 21 12:42:24.430260 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 21 12:42:24.433116 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 21 12:42:24.451012 systemd[1]: issuegen.service: Deactivated successfully. Mar 21 12:42:24.451191 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 21 12:42:24.454142 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Mar 21 12:42:24.474187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 21 12:42:24.476801 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 21 12:42:24.478769 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Mar 21 12:42:24.479736 systemd[1]: Reached target getty.target - Login Prompts. Mar 21 12:42:24.726073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:24.727322 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 21 12:42:24.729326 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:42:24.730702 systemd[1]: Startup finished in 526ms (kernel) + 4.784s (initrd) + 3.285s (userspace) = 8.596s. Mar 21 12:42:25.169746 kubelet[1585]: E0321 12:42:25.169624 1585 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:42:25.172087 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:42:25.172227 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:42:25.172581 systemd[1]: kubelet.service: Consumed 815ms CPU time, 240.4M memory peak. Mar 21 12:42:29.583850 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 21 12:42:29.584951 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:34846.service - OpenSSH per-connection server daemon (10.0.0.1:34846). 
Mar 21 12:42:29.663399 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34846 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:29.665100 sshd-session[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:29.670789 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 21 12:42:29.671617 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 21 12:42:29.676761 systemd-logind[1464]: New session 1 of user core. Mar 21 12:42:29.696899 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 21 12:42:29.700603 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 21 12:42:29.720155 (systemd)[1604]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 21 12:42:29.722065 systemd-logind[1464]: New session c1 of user core. Mar 21 12:42:29.826159 systemd[1604]: Queued start job for default target default.target. Mar 21 12:42:29.837310 systemd[1604]: Created slice app.slice - User Application Slice. Mar 21 12:42:29.837338 systemd[1604]: Reached target paths.target - Paths. Mar 21 12:42:29.837404 systemd[1604]: Reached target timers.target - Timers. Mar 21 12:42:29.838526 systemd[1604]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 21 12:42:29.846662 systemd[1604]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 21 12:42:29.846722 systemd[1604]: Reached target sockets.target - Sockets. Mar 21 12:42:29.846758 systemd[1604]: Reached target basic.target - Basic System. Mar 21 12:42:29.846785 systemd[1604]: Reached target default.target - Main User Target. Mar 21 12:42:29.846808 systemd[1604]: Startup finished in 119ms. Mar 21 12:42:29.846970 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 21 12:42:29.848299 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 21 12:42:29.902393 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:34848.service - OpenSSH per-connection server daemon (10.0.0.1:34848). Mar 21 12:42:29.953436 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 34848 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:29.954522 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:29.958438 systemd-logind[1464]: New session 2 of user core. Mar 21 12:42:29.964502 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 21 12:42:30.012916 sshd[1618]: Connection closed by 10.0.0.1 port 34848 Mar 21 12:42:30.013301 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Mar 21 12:42:30.026277 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:34848.service: Deactivated successfully. Mar 21 12:42:30.027657 systemd[1]: session-2.scope: Deactivated successfully. Mar 21 12:42:30.028277 systemd-logind[1464]: Session 2 logged out. Waiting for processes to exit. Mar 21 12:42:30.029830 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Mar 21 12:42:30.031702 systemd-logind[1464]: Removed session 2. Mar 21 12:42:30.079968 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:30.080990 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:30.084302 systemd-logind[1464]: New session 3 of user core. Mar 21 12:42:30.093501 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 21 12:42:30.139919 sshd[1626]: Connection closed by 10.0.0.1 port 34856 Mar 21 12:42:30.140331 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Mar 21 12:42:30.149156 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:34856.service: Deactivated successfully. 
Mar 21 12:42:30.150294 systemd[1]: session-3.scope: Deactivated successfully. Mar 21 12:42:30.150998 systemd-logind[1464]: Session 3 logged out. Waiting for processes to exit. Mar 21 12:42:30.152579 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:34862.service - OpenSSH per-connection server daemon (10.0.0.1:34862). Mar 21 12:42:30.154564 systemd-logind[1464]: Removed session 3. Mar 21 12:42:30.207690 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 34862 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:30.208707 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:30.212409 systemd-logind[1464]: New session 4 of user core. Mar 21 12:42:30.226523 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 21 12:42:30.275543 sshd[1634]: Connection closed by 10.0.0.1 port 34862 Mar 21 12:42:30.275790 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Mar 21 12:42:30.285470 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:34862.service: Deactivated successfully. Mar 21 12:42:30.286788 systemd[1]: session-4.scope: Deactivated successfully. Mar 21 12:42:30.287404 systemd-logind[1464]: Session 4 logged out. Waiting for processes to exit. Mar 21 12:42:30.288963 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:34876.service - OpenSSH per-connection server daemon (10.0.0.1:34876). Mar 21 12:42:30.289777 systemd-logind[1464]: Removed session 4. Mar 21 12:42:30.339184 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 34876 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:30.340250 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:30.344098 systemd-logind[1464]: New session 5 of user core. Mar 21 12:42:30.360557 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 21 12:42:30.419775 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 21 12:42:30.421894 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:42:30.437184 sudo[1643]: pam_unix(sudo:session): session closed for user root Mar 21 12:42:30.440254 sshd[1642]: Connection closed by 10.0.0.1 port 34876 Mar 21 12:42:30.440737 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Mar 21 12:42:30.458289 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:34876.service: Deactivated successfully. Mar 21 12:42:30.460502 systemd[1]: session-5.scope: Deactivated successfully. Mar 21 12:42:30.461232 systemd-logind[1464]: Session 5 logged out. Waiting for processes to exit. Mar 21 12:42:30.463002 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:34888.service - OpenSSH per-connection server daemon (10.0.0.1:34888). Mar 21 12:42:30.463660 systemd-logind[1464]: Removed session 5. Mar 21 12:42:30.512790 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 34888 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:30.513876 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:30.517581 systemd-logind[1464]: New session 6 of user core. Mar 21 12:42:30.529518 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 21 12:42:30.578293 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 21 12:42:30.578818 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:42:30.581562 sudo[1653]: pam_unix(sudo:session): session closed for user root Mar 21 12:42:30.585613 sudo[1652]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 21 12:42:30.585863 sudo[1652]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:42:30.593425 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:42:30.625414 augenrules[1675]: No rules Mar 21 12:42:30.626478 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:42:30.626667 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:42:30.627469 sudo[1652]: pam_unix(sudo:session): session closed for user root Mar 21 12:42:30.631301 sshd[1651]: Connection closed by 10.0.0.1 port 34888 Mar 21 12:42:30.631352 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Mar 21 12:42:30.643494 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:34888.service: Deactivated successfully. Mar 21 12:42:30.644843 systemd[1]: session-6.scope: Deactivated successfully. Mar 21 12:42:30.645495 systemd-logind[1464]: Session 6 logged out. Waiting for processes to exit. Mar 21 12:42:30.648586 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:34902.service - OpenSSH per-connection server daemon (10.0.0.1:34902). Mar 21 12:42:30.649352 systemd-logind[1464]: Removed session 6. Mar 21 12:42:30.689221 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 34902 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:42:30.690330 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:42:30.694127 systemd-logind[1464]: New session 7 of user core. 
Mar 21 12:42:30.707576 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 21 12:42:30.756461 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 21 12:42:30.756701 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:42:31.090694 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 21 12:42:31.102751 (dockerd)[1708]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 21 12:42:31.334238 dockerd[1708]: time="2025-03-21T12:42:31.334177559Z" level=info msg="Starting up" Mar 21 12:42:31.336124 dockerd[1708]: time="2025-03-21T12:42:31.336087841Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 21 12:42:31.434733 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2875482900-merged.mount: Deactivated successfully. Mar 21 12:42:31.451435 dockerd[1708]: time="2025-03-21T12:42:31.451397172Z" level=info msg="Loading containers: start." Mar 21 12:42:31.587395 kernel: Initializing XFRM netlink socket Mar 21 12:42:31.651274 systemd-networkd[1406]: docker0: Link UP Mar 21 12:42:31.723588 dockerd[1708]: time="2025-03-21T12:42:31.723501009Z" level=info msg="Loading containers: done." 
Mar 21 12:42:31.740164 dockerd[1708]: time="2025-03-21T12:42:31.740110715Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 21 12:42:31.740283 dockerd[1708]: time="2025-03-21T12:42:31.740196483Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 21 12:42:31.740419 dockerd[1708]: time="2025-03-21T12:42:31.740395674Z" level=info msg="Daemon has completed initialization" Mar 21 12:42:31.766175 dockerd[1708]: time="2025-03-21T12:42:31.766131034Z" level=info msg="API listen on /run/docker.sock" Mar 21 12:42:31.766266 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 21 12:42:32.422118 containerd[1491]: time="2025-03-21T12:42:32.422062731Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 21 12:42:33.167819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024668243.mount: Deactivated successfully. 
Mar 21 12:42:34.508396 containerd[1491]: time="2025-03-21T12:42:34.508265796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:34.508876 containerd[1491]: time="2025-03-21T12:42:34.508818626Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=29793526" Mar 21 12:42:34.509595 containerd[1491]: time="2025-03-21T12:42:34.509571389Z" level=info msg="ImageCreate event name:\"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:34.512483 containerd[1491]: time="2025-03-21T12:42:34.512439987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:34.513970 containerd[1491]: time="2025-03-21T12:42:34.513928414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"29790324\" in 2.091823285s" Mar 21 12:42:34.513970 containerd[1491]: time="2025-03-21T12:42:34.513969472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 21 12:42:34.528383 containerd[1491]: time="2025-03-21T12:42:34.528344713Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 21 12:42:35.422724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 21 12:42:35.424117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:35.523550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:35.526810 (kubelet)[1991]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:42:35.563740 kubelet[1991]: E0321 12:42:35.563695 1991 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:42:35.566784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:42:35.566934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:42:35.567335 systemd[1]: kubelet.service: Consumed 135ms CPU time, 96.9M memory peak. 
Mar 21 12:42:36.216188 containerd[1491]: time="2025-03-21T12:42:36.216127866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:36.216968 containerd[1491]: time="2025-03-21T12:42:36.216925702Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=26861169" Mar 21 12:42:36.217631 containerd[1491]: time="2025-03-21T12:42:36.217598534Z" level=info msg="ImageCreate event name:\"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:36.222621 containerd[1491]: time="2025-03-21T12:42:36.222548114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:36.223268 containerd[1491]: time="2025-03-21T12:42:36.223136683Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"28301963\" in 1.69473864s" Mar 21 12:42:36.223268 containerd[1491]: time="2025-03-21T12:42:36.223170945Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 21 12:42:36.238436 containerd[1491]: time="2025-03-21T12:42:36.238367468Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 21 12:42:37.335212 containerd[1491]: time="2025-03-21T12:42:37.335035463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:37.336020 containerd[1491]: time="2025-03-21T12:42:37.335761896Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=16264638" Mar 21 12:42:37.336710 containerd[1491]: time="2025-03-21T12:42:37.336674969Z" level=info msg="ImageCreate event name:\"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:37.339099 containerd[1491]: time="2025-03-21T12:42:37.339070821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:37.340164 containerd[1491]: time="2025-03-21T12:42:37.340134773Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"17705450\" in 1.101719284s" Mar 21 12:42:37.340164 containerd[1491]: time="2025-03-21T12:42:37.340161823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 21 12:42:37.354780 containerd[1491]: time="2025-03-21T12:42:37.354753726Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 21 12:42:38.330931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899305723.mount: Deactivated successfully. 
Mar 21 12:42:38.644059 containerd[1491]: time="2025-03-21T12:42:38.643900619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:38.644531 containerd[1491]: time="2025-03-21T12:42:38.644477807Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=25771850" Mar 21 12:42:38.645206 containerd[1491]: time="2025-03-21T12:42:38.645166492Z" level=info msg="ImageCreate event name:\"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:38.647110 containerd[1491]: time="2025-03-21T12:42:38.647085583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:38.647725 containerd[1491]: time="2025-03-21T12:42:38.647689601Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"25770867\" in 1.292900067s" Mar 21 12:42:38.647792 containerd[1491]: time="2025-03-21T12:42:38.647732273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 21 12:42:38.662549 containerd[1491]: time="2025-03-21T12:42:38.662518720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 21 12:42:39.271849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258546.mount: Deactivated successfully. 
Mar 21 12:42:39.941051 containerd[1491]: time="2025-03-21T12:42:39.940994850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:39.941465 containerd[1491]: time="2025-03-21T12:42:39.941402713Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" Mar 21 12:42:39.942362 containerd[1491]: time="2025-03-21T12:42:39.942330401Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:39.945499 containerd[1491]: time="2025-03-21T12:42:39.945463056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:39.946124 containerd[1491]: time="2025-03-21T12:42:39.946034023Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.28347776s" Mar 21 12:42:39.946124 containerd[1491]: time="2025-03-21T12:42:39.946070548Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 21 12:42:39.961731 containerd[1491]: time="2025-03-21T12:42:39.961680370Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 21 12:42:40.367473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1578468826.mount: Deactivated successfully. 
Mar 21 12:42:40.371509 containerd[1491]: time="2025-03-21T12:42:40.371457063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:40.372273 containerd[1491]: time="2025-03-21T12:42:40.372213844Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268823" Mar 21 12:42:40.373382 containerd[1491]: time="2025-03-21T12:42:40.373333353Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:40.375015 containerd[1491]: time="2025-03-21T12:42:40.374986913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:40.376262 containerd[1491]: time="2025-03-21T12:42:40.376227810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 414.509875ms" Mar 21 12:42:40.376319 containerd[1491]: time="2025-03-21T12:42:40.376267581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 21 12:42:40.391322 containerd[1491]: time="2025-03-21T12:42:40.391291290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 21 12:42:40.853255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192669052.mount: Deactivated successfully. 
Mar 21 12:42:43.228893 containerd[1491]: time="2025-03-21T12:42:43.228833067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:43.229735 containerd[1491]: time="2025-03-21T12:42:43.229647796Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191474" Mar 21 12:42:43.230407 containerd[1491]: time="2025-03-21T12:42:43.230363213Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:43.234189 containerd[1491]: time="2025-03-21T12:42:43.234145624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:42:43.234886 containerd[1491]: time="2025-03-21T12:42:43.234844682Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 2.843513172s" Mar 21 12:42:43.234886 containerd[1491]: time="2025-03-21T12:42:43.234879236Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 21 12:42:45.817397 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 21 12:42:45.818825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:45.939559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
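Each containerd "Pulled image" message above reports its duration in Go's duration syntax (`2.091823285s`, `414.509875ms`, ...). A hedged Python sketch of totalling them (the `pulled` list abbreviates the real log messages with `...`, and `duration_seconds` is an illustrative helper, not part of containerd):

```python
import re

# Durations exactly as reported in the "Pulled image" messages above.
pulled = [
    'Pulled image "registry.k8s.io/kube-apiserver:v1.30.11" ... in 2.091823285s',
    'Pulled image "registry.k8s.io/kube-controller-manager:v1.30.11" ... in 1.69473864s',
    'Pulled image "registry.k8s.io/kube-scheduler:v1.30.11" ... in 1.101719284s',
    'Pulled image "registry.k8s.io/kube-proxy:v1.30.11" ... in 1.292900067s',
    'Pulled image "registry.k8s.io/coredns/coredns:v1.11.1" ... in 1.28347776s',
    'Pulled image "registry.k8s.io/pause:3.9" ... in 414.509875ms',
    'Pulled image "registry.k8s.io/etcd:3.5.12-0" ... in 2.843513172s',
]

def duration_seconds(msg: str) -> float:
    """Parse the trailing Go-style duration ("2.091823285s" or "414.509875ms")."""
    m = re.search(r"in ([\d.]+)(ms|s)$", msg)
    value, unit = float(m.group(1)), m.group(2)
    return value / 1000 if unit == "ms" else value

# Cumulative wall-clock time spent pulling the control-plane images.
total = sum(duration_seconds(m) for m in pulled)
```

Summed, the seven pulls account for roughly 10.7 s of wall-clock time, with etcd (the largest image at ~66 MB) the slowest single pull.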
Mar 21 12:42:45.948739 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:42:45.986051 kubelet[2251]: E0321 12:42:45.985994 2251 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:42:45.988219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:42:45.988396 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:42:45.988730 systemd[1]: kubelet.service: Consumed 131ms CPU time, 99.1M memory peak. Mar 21 12:42:47.833227 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:47.833389 systemd[1]: kubelet.service: Consumed 131ms CPU time, 99.1M memory peak. Mar 21 12:42:47.835589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:47.857630 systemd[1]: Reload requested from client PID 2267 ('systemctl') (unit session-7.scope)... Mar 21 12:42:47.857644 systemd[1]: Reloading... Mar 21 12:42:47.923400 zram_generator::config[2312]: No configuration found. Mar 21 12:42:48.095902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:42:48.167080 systemd[1]: Reloading finished in 308 ms. Mar 21 12:42:48.206925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:48.209303 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:48.210506 systemd[1]: kubelet.service: Deactivated successfully. 
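systemd logs `Scheduled restart job, restart counter is at 1` at 12:42:35.42 and `... at 2` at 12:42:45.82, each roughly 10 s after the preceding kubelet failure, which suggests a `RestartSec=` on the order of 10 s in the unit file (an assumption; the unit file itself is not shown in this log). The spacing can be checked from the journal timestamps with a short Python sketch (the year is taken from the containerd `2025-03-21` timestamps elsewhere in the log, since journal lines omit it):

```python
from datetime import datetime

# Timestamps copied from the journal excerpt above.
failure = "Mar 21 12:42:25.172087"    # first kubelet.service failure
restarts = ["Mar 21 12:42:35.422724", # "restart counter is at 1"
            "Mar 21 12:42:45.817397"] # "restart counter is at 2"

def ts(s: str) -> datetime:
    # Journal lines carry no year; 2025 assumed from the containerd timestamps.
    return datetime.strptime("2025 " + s, "%Y %b %d %H:%M:%S.%f")

events = [ts(failure)] + [ts(r) for r in restarts]
gaps = [(b - a).total_seconds() for a, b in zip(events, events[1:])]
# Gaps of ~10.25 s and ~10.39 s are consistent with a fixed ~10 s
# restart delay plus a short startup-and-fail cycle each time.
```

Both gaps fall just above 10 s, matching a fixed restart delay rather than an exponential backoff.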
Mar 21 12:42:48.210709 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:48.210745 systemd[1]: kubelet.service: Consumed 85ms CPU time, 82.4M memory peak. Mar 21 12:42:48.212073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:42:48.331789 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:42:48.341792 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 12:42:48.382552 kubelet[2359]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:42:48.382552 kubelet[2359]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 12:42:48.382552 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 21 12:42:48.383462 kubelet[2359]: I0321 12:42:48.383306 2359 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 12:42:49.114079 kubelet[2359]: I0321 12:42:49.114045 2359 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 21 12:42:49.114079 kubelet[2359]: I0321 12:42:49.114072 2359 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 12:42:49.114245 kubelet[2359]: I0321 12:42:49.114230 2359 server.go:927] "Client rotation is on, will bootstrap in background" Mar 21 12:42:49.142519 kubelet[2359]: E0321 12:42:49.142489 2359 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.142519 kubelet[2359]: I0321 12:42:49.142492 2359 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:42:49.154019 kubelet[2359]: I0321 12:42:49.153989 2359 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:42:49.155060 kubelet[2359]: I0321 12:42:49.155010 2359 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 21 12:42:49.155218 kubelet[2359]: I0321 12:42:49.155056 2359 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 21 12:42:49.155305 kubelet[2359]: I0321 12:42:49.155222 2359 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:42:49.155305 kubelet[2359]: I0321 12:42:49.155231 2359 container_manager_linux.go:301] "Creating device plugin manager" Mar 21 12:42:49.155505 kubelet[2359]: I0321 12:42:49.155492 2359 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:42:49.156860 kubelet[2359]: I0321 12:42:49.156830 2359 kubelet.go:400] "Attempting to sync node with API server" Mar 21 12:42:49.156860 kubelet[2359]: I0321 12:42:49.156859 2359 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 21 12:42:49.157863 kubelet[2359]: I0321 12:42:49.157016 2359 kubelet.go:312] "Adding apiserver pod source" Mar 21 12:42:49.157863 kubelet[2359]: I0321 12:42:49.157033 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 21 12:42:49.157863 kubelet[2359]: W0321 12:42:49.157032 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.157863 kubelet[2359]: E0321 12:42:49.157086 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.157863 kubelet[2359]: W0321 12:42:49.157691 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.157863 kubelet[2359]: E0321 12:42:49.157734 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused
Mar 21 12:42:49.158085 kubelet[2359]: I0321 12:42:49.158033 2359 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 21 12:42:49.158413 kubelet[2359]: I0321 12:42:49.158398 2359 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 21 12:42:49.158525 kubelet[2359]: W0321 12:42:49.158503 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 21 12:42:49.159288 kubelet[2359]: I0321 12:42:49.159262 2359 server.go:1264] "Started kubelet" Mar 21 12:42:49.160048 kubelet[2359]: I0321 12:42:49.159537 2359 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 21 12:42:49.160048 kubelet[2359]: I0321 12:42:49.159982 2359 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 21 12:42:49.161072 kubelet[2359]: I0321 12:42:49.160257 2359 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 21 12:42:49.161072 kubelet[2359]: I0321 12:42:49.160735 2359 server.go:455] "Adding debug handlers to kubelet server" Mar 21 12:42:49.161349 kubelet[2359]: E0321 12:42:49.161048 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed1f80e325de9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,LastTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:42:49.162997 kubelet[2359]: I0321 12:42:49.162974 2359 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 21 12:42:49.165471 kubelet[2359]: E0321 12:42:49.165451 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:49.165976 kubelet[2359]: I0321 12:42:49.165963 2359 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 21 12:42:49.166158 kubelet[2359]: I0321 12:42:49.166143 2359 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 21 12:42:49.167286 kubelet[2359]: W0321 12:42:49.167233 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.167286 kubelet[2359]: E0321 12:42:49.167279 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.168000 kubelet[2359]: E0321 12:42:49.167966 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Mar 21 12:42:49.168079 kubelet[2359]: I0321 12:42:49.168066 2359 reconciler.go:26] "Reconciler: start to sync state" Mar 21 12:42:49.168232 kubelet[2359]: I0321 12:42:49.168205 2359 factory.go:221] Registration of the systemd container factory successfully Mar 21 12:42:49.168307 kubelet[2359]: I0321 
12:42:49.168286 2359 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 21 12:42:49.168776 kubelet[2359]: E0321 12:42:49.168759 2359 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 21 12:42:49.169765 kubelet[2359]: I0321 12:42:49.169731 2359 factory.go:221] Registration of the containerd container factory successfully Mar 21 12:42:49.181662 kubelet[2359]: I0321 12:42:49.181599 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 21 12:42:49.182674 kubelet[2359]: I0321 12:42:49.182648 2359 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 21 12:42:49.183492 kubelet[2359]: I0321 12:42:49.182888 2359 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 12:42:49.183492 kubelet[2359]: I0321 12:42:49.182892 2359 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 21 12:42:49.183492 kubelet[2359]: I0321 12:42:49.182904 2359 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 21 12:42:49.183492 kubelet[2359]: I0321 12:42:49.182908 2359 kubelet.go:2337] "Starting kubelet main sync loop" Mar 21 12:42:49.183492 kubelet[2359]: I0321 12:42:49.182919 2359 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:42:49.183492 kubelet[2359]: E0321 12:42:49.182942 2359 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 12:42:49.183492 kubelet[2359]: W0321 12:42:49.183280 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused 
Mar 21 12:42:49.183492 kubelet[2359]: E0321 12:42:49.183318 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:49.188194 kubelet[2359]: I0321 12:42:49.188173 2359 policy_none.go:49] "None policy: Start" Mar 21 12:42:49.188890 kubelet[2359]: I0321 12:42:49.188875 2359 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 21 12:42:49.189004 kubelet[2359]: I0321 12:42:49.188994 2359 state_mem.go:35] "Initializing new in-memory state store" Mar 21 12:42:49.195198 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 21 12:42:49.211481 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 21 12:42:49.214320 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 21 12:42:49.230321 kubelet[2359]: I0321 12:42:49.230305 2359 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 12:42:49.230724 kubelet[2359]: I0321 12:42:49.230636 2359 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 12:42:49.231018 kubelet[2359]: I0321 12:42:49.230959 2359 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 21 12:42:49.231773 kubelet[2359]: E0321 12:42:49.231749 2359 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 21 12:42:49.267158 kubelet[2359]: I0321 12:42:49.267124 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:42:49.269247 kubelet[2359]: E0321 12:42:49.269224 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 21 12:42:49.283379 kubelet[2359]: I0321 12:42:49.283352 2359 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 21 12:42:49.286010 kubelet[2359]: I0321 12:42:49.285978 2359 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 21 12:42:49.286776 kubelet[2359]: I0321 12:42:49.286714 2359 topology_manager.go:215] "Topology Admit Handler" podUID="e08226593e152f4e66ea9b1b7542b405" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 21 12:42:49.291198 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice. 
Mar 21 12:42:49.326801 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice. Mar 21 12:42:49.330312 systemd[1]: Created slice kubepods-burstable-pode08226593e152f4e66ea9b1b7542b405.slice - libcontainer container kubepods-burstable-pode08226593e152f4e66ea9b1b7542b405.slice. Mar 21 12:42:49.369033 kubelet[2359]: I0321 12:42:49.368797 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:42:49.369033 kubelet[2359]: I0321 12:42:49.368831 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:42:49.369033 kubelet[2359]: E0321 12:42:49.368831 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Mar 21 12:42:49.369033 kubelet[2359]: I0321 12:42:49.368852 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:42:49.369033 kubelet[2359]: I0321 12:42:49.368871 2359 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:42:49.370325 kubelet[2359]: I0321 12:42:49.368893 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:42:49.370325 kubelet[2359]: I0321 12:42:49.368929 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 21 12:42:49.370325 kubelet[2359]: I0321 12:42:49.368945 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:42:49.370325 kubelet[2359]: I0321 12:42:49.368969 2359 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost" Mar 21 12:42:49.370325 kubelet[2359]: I0321 12:42:49.368984 2359 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 21 12:42:49.470508 kubelet[2359]: I0321 12:42:49.470461 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:42:49.470907 kubelet[2359]: E0321 12:42:49.470781 2359 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 21 12:42:49.626298 containerd[1491]: time="2025-03-21T12:42:49.626185426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 21 12:42:49.629792 containerd[1491]: time="2025-03-21T12:42:49.629697942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 21 12:42:49.632323 containerd[1491]: time="2025-03-21T12:42:49.632281016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e08226593e152f4e66ea9b1b7542b405,Namespace:kube-system,Attempt:0,}" Mar 21 12:42:49.769857 kubelet[2359]: E0321 12:42:49.769812 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Mar 21 12:42:49.872207 kubelet[2359]: I0321 12:42:49.872176 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:42:49.872473 kubelet[2359]: E0321 12:42:49.872451 2359 kubelet_node_status.go:96] "Unable to register node 
with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Mar 21 12:42:50.092318 kubelet[2359]: W0321 12:42:50.092222 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.092318 kubelet[2359]: E0321 12:42:50.092274 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.278957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2849091323.mount: Deactivated successfully. Mar 21 12:42:50.283775 containerd[1491]: time="2025-03-21T12:42:50.283737328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:42:50.285836 containerd[1491]: time="2025-03-21T12:42:50.285792079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Mar 21 12:42:50.287660 containerd[1491]: time="2025-03-21T12:42:50.287612100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:42:50.288423 containerd[1491]: time="2025-03-21T12:42:50.288323885Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:42:50.289137 containerd[1491]: time="2025-03-21T12:42:50.289079786Z" level=info msg="ImageUpdate 
event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:42:50.289841 containerd[1491]: time="2025-03-21T12:42:50.289796365Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 21 12:42:50.290580 containerd[1491]: time="2025-03-21T12:42:50.290538360Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 21 12:42:50.292142 containerd[1491]: time="2025-03-21T12:42:50.292088685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:42:50.293184 containerd[1491]: time="2025-03-21T12:42:50.293151406Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 664.4392ms" Mar 21 12:42:50.294458 containerd[1491]: time="2025-03-21T12:42:50.294426720Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 662.913445ms" Mar 21 12:42:50.296723 containerd[1491]: time="2025-03-21T12:42:50.296477156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 662.241483ms" Mar 21 12:42:50.315579 containerd[1491]: time="2025-03-21T12:42:50.315545598Z" level=info msg="connecting to shim 07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c" address="unix:///run/containerd/s/7fc4bc3c0d9bc113c0a945da9f78f5a00847c3b974391cd37692d7bdae63e055" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:42:50.317468 containerd[1491]: time="2025-03-21T12:42:50.317433553Z" level=info msg="connecting to shim 8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7" address="unix:///run/containerd/s/b62b189c34c81155c51f4eefa592d548a41eaccfbdcc1d0cd5ccdfc32eaa7156" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:42:50.319266 containerd[1491]: time="2025-03-21T12:42:50.319221605Z" level=info msg="connecting to shim 4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722" address="unix:///run/containerd/s/69fd006166f137b398132451a1e89f57701a5462c65bf6b1aef1c6b367782642" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:42:50.322273 kubelet[2359]: E0321 12:42:50.322176 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed1f80e325de9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,LastTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 21 
12:42:50.342543 systemd[1]: Started cri-containerd-07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c.scope - libcontainer container 07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c. Mar 21 12:42:50.346632 systemd[1]: Started cri-containerd-4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722.scope - libcontainer container 4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722. Mar 21 12:42:50.348105 systemd[1]: Started cri-containerd-8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7.scope - libcontainer container 8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7. Mar 21 12:42:50.379481 containerd[1491]: time="2025-03-21T12:42:50.379439468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c\"" Mar 21 12:42:50.383333 containerd[1491]: time="2025-03-21T12:42:50.383270643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e08226593e152f4e66ea9b1b7542b405,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722\"" Mar 21 12:42:50.384313 containerd[1491]: time="2025-03-21T12:42:50.384265471Z" level=info msg="CreateContainer within sandbox \"07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 21 12:42:50.386806 containerd[1491]: time="2025-03-21T12:42:50.386764308Z" level=info msg="CreateContainer within sandbox \"4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 21 12:42:50.391949 containerd[1491]: time="2025-03-21T12:42:50.391918231Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7\"" Mar 21 12:42:50.394675 containerd[1491]: time="2025-03-21T12:42:50.394647164Z" level=info msg="CreateContainer within sandbox \"8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 21 12:42:50.396155 containerd[1491]: time="2025-03-21T12:42:50.396003478Z" level=info msg="Container 05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:42:50.399653 containerd[1491]: time="2025-03-21T12:42:50.399624579Z" level=info msg="Container 0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:42:50.404936 containerd[1491]: time="2025-03-21T12:42:50.404776623Z" level=info msg="CreateContainer within sandbox \"07a73af53abb6022817a422006c54575df8d41e12b0de21069c4b0a9ad9c3b5c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016\"" Mar 21 12:42:50.405455 containerd[1491]: time="2025-03-21T12:42:50.405429185Z" level=info msg="StartContainer for \"05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016\"" Mar 21 12:42:50.406766 containerd[1491]: time="2025-03-21T12:42:50.406740224Z" level=info msg="connecting to shim 05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016" address="unix:///run/containerd/s/7fc4bc3c0d9bc113c0a945da9f78f5a00847c3b974391cd37692d7bdae63e055" protocol=ttrpc version=3 Mar 21 12:42:50.407092 containerd[1491]: time="2025-03-21T12:42:50.407065226Z" level=info msg="Container b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:42:50.409851 containerd[1491]: 
time="2025-03-21T12:42:50.409790842Z" level=info msg="CreateContainer within sandbox \"4bc2732953fff7fc64beec815daa27c6d955932b4847c7e1bd633b1813cc5722\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945\"" Mar 21 12:42:50.411173 containerd[1491]: time="2025-03-21T12:42:50.410134226Z" level=info msg="StartContainer for \"0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945\"" Mar 21 12:42:50.411173 containerd[1491]: time="2025-03-21T12:42:50.411072829Z" level=info msg="connecting to shim 0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945" address="unix:///run/containerd/s/69fd006166f137b398132451a1e89f57701a5462c65bf6b1aef1c6b367782642" protocol=ttrpc version=3 Mar 21 12:42:50.414131 containerd[1491]: time="2025-03-21T12:42:50.414082288Z" level=info msg="CreateContainer within sandbox \"8004876981dd2a7036144aac728631c0110a5dcf3b14d0542676fce41f16f5c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e\"" Mar 21 12:42:50.417451 containerd[1491]: time="2025-03-21T12:42:50.417404281Z" level=info msg="StartContainer for \"b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e\"" Mar 21 12:42:50.419013 containerd[1491]: time="2025-03-21T12:42:50.418977863Z" level=info msg="connecting to shim b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e" address="unix:///run/containerd/s/b62b189c34c81155c51f4eefa592d548a41eaccfbdcc1d0cd5ccdfc32eaa7156" protocol=ttrpc version=3 Mar 21 12:42:50.428527 systemd[1]: Started cri-containerd-05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016.scope - libcontainer container 05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016. 
Mar 21 12:42:50.431869 systemd[1]: Started cri-containerd-0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945.scope - libcontainer container 0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945. Mar 21 12:42:50.434801 systemd[1]: Started cri-containerd-b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e.scope - libcontainer container b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e. Mar 21 12:42:50.452011 kubelet[2359]: W0321 12:42:50.451975 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.452011 kubelet[2359]: E0321 12:42:50.452015 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.473022 kubelet[2359]: W0321 12:42:50.472926 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.473022 kubelet[2359]: E0321 12:42:50.472987 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.478207 containerd[1491]: time="2025-03-21T12:42:50.478007007Z" level=info msg="StartContainer for \"0d52bfea5cdb1629814a382471a8b4fef2fb742e1f4ec9613dc16d58b5303945\" returns successfully" Mar 21 12:42:50.484718 containerd[1491]: 
time="2025-03-21T12:42:50.484602800Z" level=info msg="StartContainer for \"05ac548d802c7009da2c5299a2f43bf9d3334fae41ee47f8d226fb6e37faa016\" returns successfully" Mar 21 12:42:50.496825 containerd[1491]: time="2025-03-21T12:42:50.496736940Z" level=info msg="StartContainer for \"b61114631acf92339430a6dacb846ebb39f43a2f58327bb80026ccd71fe9c81e\" returns successfully" Mar 21 12:42:50.513736 kubelet[2359]: W0321 12:42:50.513650 2359 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.513736 kubelet[2359]: E0321 12:42:50.513705 2359 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Mar 21 12:42:50.571119 kubelet[2359]: E0321 12:42:50.571046 2359 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="1.6s" Mar 21 12:42:50.673911 kubelet[2359]: I0321 12:42:50.673815 2359 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:42:51.956408 kubelet[2359]: I0321 12:42:51.956301 2359 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 21 12:42:51.966587 kubelet[2359]: E0321 12:42:51.966548 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.066890 kubelet[2359]: E0321 12:42:52.066832 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.167595 
kubelet[2359]: E0321 12:42:52.167560 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.268186 kubelet[2359]: E0321 12:42:52.268068 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.368628 kubelet[2359]: E0321 12:42:52.368590 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.469150 kubelet[2359]: E0321 12:42:52.469098 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:52.569722 kubelet[2359]: E0321 12:42:52.569628 2359 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:42:53.161288 kubelet[2359]: I0321 12:42:53.161257 2359 apiserver.go:52] "Watching apiserver" Mar 21 12:42:53.166685 kubelet[2359]: I0321 12:42:53.166662 2359 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 21 12:42:53.791886 systemd[1]: Reload requested from client PID 2641 ('systemctl') (unit session-7.scope)... Mar 21 12:42:53.791904 systemd[1]: Reloading... Mar 21 12:42:53.854470 zram_generator::config[2691]: No configuration found. Mar 21 12:42:53.933596 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:42:54.018300 systemd[1]: Reloading finished in 226 ms. 
Mar 21 12:42:54.039566 kubelet[2359]: E0321 12:42:54.039301 2359 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182ed1f80e325de9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,LastTimestamp:2025-03-21 12:42:49.159237097 +0000 UTC m=+0.814298773,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:42:54.039713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:42:54.049814 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 12:42:54.050154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:42:54.050199 systemd[1]: kubelet.service: Consumed 1.149s CPU time, 116.5M memory peak.
Mar 21 12:42:54.052293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:42:54.169477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:42:54.177715 (kubelet)[2727]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 12:42:54.215791 kubelet[2727]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:42:54.215791 kubelet[2727]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 21 12:42:54.215791 kubelet[2727]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:42:54.216101 kubelet[2727]: I0321 12:42:54.215832 2727 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 12:42:54.220184 kubelet[2727]: I0321 12:42:54.220152 2727 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 21 12:42:54.220184 kubelet[2727]: I0321 12:42:54.220179 2727 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 12:42:54.220402 kubelet[2727]: I0321 12:42:54.220368 2727 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 21 12:42:54.221651 kubelet[2727]: I0321 12:42:54.221629 2727 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 21 12:42:54.222780 kubelet[2727]: I0321 12:42:54.222753 2727 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:42:54.227995 kubelet[2727]: I0321 12:42:54.227978 2727 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:42:54.228186 kubelet[2727]: I0321 12:42:54.228166 2727 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:42:54.228350 kubelet[2727]: I0321 12:42:54.228193 2727 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 21 12:42:54.228444 kubelet[2727]: I0321 12:42:54.228355 2727 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:42:54.228444 kubelet[2727]: I0321 12:42:54.228366 2727 container_manager_linux.go:301] "Creating device plugin manager"
Mar 21 12:42:54.228495 kubelet[2727]: I0321 12:42:54.228467 2727 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:42:54.228591 kubelet[2727]: I0321 12:42:54.228579 2727 kubelet.go:400] "Attempting to sync node with API server"
Mar 21 12:42:54.228631 kubelet[2727]: I0321 12:42:54.228593 2727 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:42:54.228631 kubelet[2727]: I0321 12:42:54.228619 2727 kubelet.go:312] "Adding apiserver pod source"
Mar 21 12:42:54.228679 kubelet[2727]: I0321 12:42:54.228634 2727 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:42:54.232404 kubelet[2727]: I0321 12:42:54.229687 2727 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:42:54.232404 kubelet[2727]: I0321 12:42:54.229844 2727 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:42:54.232404 kubelet[2727]: I0321 12:42:54.230168 2727 server.go:1264] "Started kubelet"
Mar 21 12:42:54.232404 kubelet[2727]: I0321 12:42:54.231862 2727 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:42:54.244777 kubelet[2727]: I0321 12:42:54.243361 2727 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:42:54.244777 kubelet[2727]: I0321 12:42:54.244491 2727 server.go:455] "Adding debug handlers to kubelet server"
Mar 21 12:42:54.246009 kubelet[2727]: E0321 12:42:54.245989 2727 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:42:54.248003 kubelet[2727]: I0321 12:42:54.246488 2727 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:42:54.248003 kubelet[2727]: I0321 12:42:54.246754 2727 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:42:54.248003 kubelet[2727]: I0321 12:42:54.247692 2727 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 21 12:42:54.248117 kubelet[2727]: I0321 12:42:54.248018 2727 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:42:54.248200 kubelet[2727]: I0321 12:42:54.248155 2727 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:42:54.248899 kubelet[2727]: I0321 12:42:54.248603 2727 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:42:54.248899 kubelet[2727]: I0321 12:42:54.248694 2727 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:42:54.252268 kubelet[2727]: I0321 12:42:54.252236 2727 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:42:54.256047 kubelet[2727]: I0321 12:42:54.256012 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:42:54.257719 kubelet[2727]: I0321 12:42:54.257640 2727 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:42:54.257719 kubelet[2727]: I0321 12:42:54.257679 2727 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 21 12:42:54.257719 kubelet[2727]: I0321 12:42:54.257694 2727 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 21 12:42:54.258181 kubelet[2727]: E0321 12:42:54.257736 2727 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:42:54.281183 kubelet[2727]: I0321 12:42:54.281160 2727 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 21 12:42:54.281183 kubelet[2727]: I0321 12:42:54.281176 2727 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 21 12:42:54.281324 kubelet[2727]: I0321 12:42:54.281195 2727 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:42:54.281349 kubelet[2727]: I0321 12:42:54.281337 2727 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 21 12:42:54.281371 kubelet[2727]: I0321 12:42:54.281348 2727 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 21 12:42:54.281371 kubelet[2727]: I0321 12:42:54.281363 2727 policy_none.go:49] "None policy: Start"
Mar 21 12:42:54.281969 kubelet[2727]: I0321 12:42:54.281942 2727 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 21 12:42:54.281969 kubelet[2727]: I0321 12:42:54.281967 2727 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:42:54.282130 kubelet[2727]: I0321 12:42:54.282113 2727 state_mem.go:75] "Updated machine memory state"
Mar 21 12:42:54.285822 kubelet[2727]: I0321 12:42:54.285805 2727 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:42:54.286477 kubelet[2727]: I0321 12:42:54.286205 2727 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:42:54.286477 kubelet[2727]: I0321 12:42:54.286332 2727 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:42:54.349915 kubelet[2727]: I0321 12:42:54.348941 2727 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:42:54.355788 kubelet[2727]: I0321 12:42:54.355754 2727 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 21 12:42:54.355875 kubelet[2727]: I0321 12:42:54.355819 2727 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 21 12:42:54.357853 kubelet[2727]: I0321 12:42:54.357801 2727 topology_manager.go:215] "Topology Admit Handler" podUID="e08226593e152f4e66ea9b1b7542b405" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 21 12:42:54.358097 kubelet[2727]: I0321 12:42:54.358026 2727 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 21 12:42:54.358244 kubelet[2727]: I0321 12:42:54.358184 2727 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 21 12:42:54.548829 kubelet[2727]: I0321 12:42:54.548786 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:42:54.548829 kubelet[2727]: I0321 12:42:54.548822 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:42:54.548949 kubelet[2727]: I0321 12:42:54.548845 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:42:54.548949 kubelet[2727]: I0321 12:42:54.548912 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:54.548949 kubelet[2727]: I0321 12:42:54.548931 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:54.548949 kubelet[2727]: I0321 12:42:54.548948 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:54.549081 kubelet[2727]: I0321 12:42:54.548964 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:54.549081 kubelet[2727]: I0321 12:42:54.549002 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:54.549081 kubelet[2727]: I0321 12:42:54.549036 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e08226593e152f4e66ea9b1b7542b405-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e08226593e152f4e66ea9b1b7542b405\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:42:55.229603 kubelet[2727]: I0321 12:42:55.229562 2727 apiserver.go:52] "Watching apiserver"
Mar 21 12:42:55.248277 kubelet[2727]: I0321 12:42:55.248238 2727 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:42:55.260610 kubelet[2727]: I0321 12:42:55.260499 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.260486359 podStartE2EDuration="1.260486359s" podCreationTimestamp="2025-03-21 12:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:42:55.253708075 +0000 UTC m=+1.072391629" watchObservedRunningTime="2025-03-21 12:42:55.260486359 +0000 UTC m=+1.079169913"
Mar 21 12:42:55.266934 kubelet[2727]: I0321 12:42:55.266770 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.266758136 podStartE2EDuration="1.266758136s" podCreationTimestamp="2025-03-21 12:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:42:55.260620052 +0000 UTC m=+1.079303606" watchObservedRunningTime="2025-03-21 12:42:55.266758136 +0000 UTC m=+1.085441690"
Mar 21 12:42:55.266934 kubelet[2727]: I0321 12:42:55.266925 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.2669193349999999 podStartE2EDuration="1.266919335s" podCreationTimestamp="2025-03-21 12:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:42:55.266852449 +0000 UTC m=+1.085536003" watchObservedRunningTime="2025-03-21 12:42:55.266919335 +0000 UTC m=+1.085602849"
Mar 21 12:42:55.272471 kubelet[2727]: E0321 12:42:55.272026 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 21 12:42:55.272471 kubelet[2727]: E0321 12:42:55.272084 2727 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:42:59.135510 sudo[1687]: pam_unix(sudo:session): session closed for user root
Mar 21 12:42:59.136691 sshd[1686]: Connection closed by 10.0.0.1 port 34902
Mar 21 12:42:59.137177 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Mar 21 12:42:59.140926 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:34902.service: Deactivated successfully.
Mar 21 12:42:59.143531 systemd[1]: session-7.scope: Deactivated successfully.
Mar 21 12:42:59.144425 systemd[1]: session-7.scope: Consumed 6.574s CPU time, 240.6M memory peak.
Mar 21 12:42:59.145329 systemd-logind[1464]: Session 7 logged out. Waiting for processes to exit.
Mar 21 12:42:59.146133 systemd-logind[1464]: Removed session 7.
Mar 21 12:43:08.143113 update_engine[1467]: I20250321 12:43:08.143049 1467 update_attempter.cc:509] Updating boot flags...
Mar 21 12:43:08.170402 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2824)
Mar 21 12:43:08.207403 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2827)
Mar 21 12:43:08.237911 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2827)
Mar 21 12:43:08.776452 kubelet[2727]: I0321 12:43:08.776405 2727 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 21 12:43:08.780251 containerd[1491]: time="2025-03-21T12:43:08.780187649Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 21 12:43:08.781244 kubelet[2727]: I0321 12:43:08.780495 2727 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 21 12:43:09.822101 kubelet[2727]: I0321 12:43:09.822047 2727 topology_manager.go:215] "Topology Admit Handler" podUID="3b89be8e-1132-436f-8137-ee36919dac4a" podNamespace="kube-system" podName="kube-proxy-sjr9h"
Mar 21 12:43:09.841249 systemd[1]: Created slice kubepods-besteffort-pod3b89be8e_1132_436f_8137_ee36919dac4a.slice - libcontainer container kubepods-besteffort-pod3b89be8e_1132_436f_8137_ee36919dac4a.slice.
Mar 21 12:43:09.893163 kubelet[2727]: I0321 12:43:09.892307 2727 topology_manager.go:215] "Topology Admit Handler" podUID="6068888b-2a65-4a11-977c-5623000e1ee9" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-bg8g8"
Mar 21 12:43:09.899418 systemd[1]: Created slice kubepods-besteffort-pod6068888b_2a65_4a11_977c_5623000e1ee9.slice - libcontainer container kubepods-besteffort-pod6068888b_2a65_4a11_977c_5623000e1ee9.slice.
Mar 21 12:43:09.938872 kubelet[2727]: I0321 12:43:09.938841 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b89be8e-1132-436f-8137-ee36919dac4a-kube-proxy\") pod \"kube-proxy-sjr9h\" (UID: \"3b89be8e-1132-436f-8137-ee36919dac4a\") " pod="kube-system/kube-proxy-sjr9h"
Mar 21 12:43:09.938962 kubelet[2727]: I0321 12:43:09.938908 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww6zd\" (UniqueName: \"kubernetes.io/projected/3b89be8e-1132-436f-8137-ee36919dac4a-kube-api-access-ww6zd\") pod \"kube-proxy-sjr9h\" (UID: \"3b89be8e-1132-436f-8137-ee36919dac4a\") " pod="kube-system/kube-proxy-sjr9h"
Mar 21 12:43:09.938991 kubelet[2727]: I0321 12:43:09.938962 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b89be8e-1132-436f-8137-ee36919dac4a-xtables-lock\") pod \"kube-proxy-sjr9h\" (UID: \"3b89be8e-1132-436f-8137-ee36919dac4a\") " pod="kube-system/kube-proxy-sjr9h"
Mar 21 12:43:09.938991 kubelet[2727]: I0321 12:43:09.938986 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b89be8e-1132-436f-8137-ee36919dac4a-lib-modules\") pod \"kube-proxy-sjr9h\" (UID: \"3b89be8e-1132-436f-8137-ee36919dac4a\") " pod="kube-system/kube-proxy-sjr9h"
Mar 21 12:43:10.039960 kubelet[2727]: I0321 12:43:10.039791 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6068888b-2a65-4a11-977c-5623000e1ee9-var-lib-calico\") pod \"tigera-operator-6479d6dc54-bg8g8\" (UID: \"6068888b-2a65-4a11-977c-5623000e1ee9\") " pod="tigera-operator/tigera-operator-6479d6dc54-bg8g8"
Mar 21 12:43:10.039960 kubelet[2727]: I0321 12:43:10.039860 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf785\" (UniqueName: \"kubernetes.io/projected/6068888b-2a65-4a11-977c-5623000e1ee9-kube-api-access-pf785\") pod \"tigera-operator-6479d6dc54-bg8g8\" (UID: \"6068888b-2a65-4a11-977c-5623000e1ee9\") " pod="tigera-operator/tigera-operator-6479d6dc54-bg8g8"
Mar 21 12:43:10.161419 containerd[1491]: time="2025-03-21T12:43:10.161296578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjr9h,Uid:3b89be8e-1132-436f-8137-ee36919dac4a,Namespace:kube-system,Attempt:0,}"
Mar 21 12:43:10.191253 containerd[1491]: time="2025-03-21T12:43:10.191207177Z" level=info msg="connecting to shim ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f" address="unix:///run/containerd/s/32b739fad3ba64128fd9ad6e7cf226150f415523a03125cd0b2351b29078eb61" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:43:10.206147 containerd[1491]: time="2025-03-21T12:43:10.206087175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-bg8g8,Uid:6068888b-2a65-4a11-977c-5623000e1ee9,Namespace:tigera-operator,Attempt:0,}"
Mar 21 12:43:10.218565 systemd[1]: Started cri-containerd-ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f.scope - libcontainer container ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f.
Mar 21 12:43:10.224801 containerd[1491]: time="2025-03-21T12:43:10.224753234Z" level=info msg="connecting to shim 2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2" address="unix:///run/containerd/s/f3aa26de76edfd497d1451a31b1fd2d0e138a04e883f6e5f4791f2ea2af9f589" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:43:10.245512 containerd[1491]: time="2025-03-21T12:43:10.245460867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sjr9h,Uid:3b89be8e-1132-436f-8137-ee36919dac4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f\""
Mar 21 12:43:10.251683 systemd[1]: Started cri-containerd-2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2.scope - libcontainer container 2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2.
Mar 21 12:43:10.253907 containerd[1491]: time="2025-03-21T12:43:10.253872572Z" level=info msg="CreateContainer within sandbox \"ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 21 12:43:10.266964 containerd[1491]: time="2025-03-21T12:43:10.266926881Z" level=info msg="Container 25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:43:10.286604 containerd[1491]: time="2025-03-21T12:43:10.286522484Z" level=info msg="CreateContainer within sandbox \"ba9fe33f3cc3bfbe1fdfcfd5190a9664737f6d3fcf34da6a7863c5b00ebb566f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a\""
Mar 21 12:43:10.289246 containerd[1491]: time="2025-03-21T12:43:10.289218036Z" level=info msg="StartContainer for \"25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a\""
Mar 21 12:43:10.290963 containerd[1491]: time="2025-03-21T12:43:10.290830119Z" level=info msg="connecting to shim 25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a" address="unix:///run/containerd/s/32b739fad3ba64128fd9ad6e7cf226150f415523a03125cd0b2351b29078eb61" protocol=ttrpc version=3
Mar 21 12:43:10.291755 containerd[1491]: time="2025-03-21T12:43:10.291722543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-bg8g8,Uid:6068888b-2a65-4a11-977c-5623000e1ee9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2\""
Mar 21 12:43:10.293365 containerd[1491]: time="2025-03-21T12:43:10.293336946Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 21 12:43:10.325618 systemd[1]: Started cri-containerd-25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a.scope - libcontainer container 25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a.
Mar 21 12:43:10.358202 containerd[1491]: time="2025-03-21T12:43:10.358153079Z" level=info msg="StartContainer for \"25661464155180d092fc83bda1dfc9415d22738196185687afa2a19b3236a28a\" returns successfully"
Mar 21 12:43:13.330177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903035019.mount: Deactivated successfully.
Mar 21 12:43:13.844310 containerd[1491]: time="2025-03-21T12:43:13.844263379Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:13.845137 containerd[1491]: time="2025-03-21T12:43:13.845075997Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=19271115"
Mar 21 12:43:13.846033 containerd[1491]: time="2025-03-21T12:43:13.845996139Z" level=info msg="ImageCreate event name:\"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:13.848122 containerd[1491]: time="2025-03-21T12:43:13.848073107Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:13.848814 containerd[1491]: time="2025-03-21T12:43:13.848784843Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"19267110\" in 3.555410896s"
Mar 21 12:43:13.848893 containerd[1491]: time="2025-03-21T12:43:13.848816604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:a709184cc04589116e7266cb3575491ae8f2ac1c959975fea966447025f66eaa\""
Mar 21 12:43:13.857406 containerd[1491]: time="2025-03-21T12:43:13.857104116Z" level=info msg="CreateContainer within sandbox \"2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 21 12:43:13.863011 containerd[1491]: time="2025-03-21T12:43:13.862972291Z" level=info msg="Container aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:43:13.875930 containerd[1491]: time="2025-03-21T12:43:13.875884390Z" level=info msg="CreateContainer within sandbox \"2a91ab355f9b7d7f94a0a2e197860e6ef0bc63e36cd75fdb7d59620cfd0586b2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484\""
Mar 21 12:43:13.876385 containerd[1491]: time="2025-03-21T12:43:13.876343641Z" level=info msg="StartContainer for \"aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484\""
Mar 21 12:43:13.877155 containerd[1491]: time="2025-03-21T12:43:13.877122179Z" level=info msg="connecting to shim aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484" address="unix:///run/containerd/s/f3aa26de76edfd497d1451a31b1fd2d0e138a04e883f6e5f4791f2ea2af9f589" protocol=ttrpc version=3
Mar 21 12:43:13.915551 systemd[1]: Started cri-containerd-aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484.scope - libcontainer container aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484.
Mar 21 12:43:13.995986 containerd[1491]: time="2025-03-21T12:43:13.995937287Z" level=info msg="StartContainer for \"aac69bd862bd755dcc7c7982f02ac88b1449eef38df9d0df7bce1c2f7d119484\" returns successfully"
Mar 21 12:43:14.315178 kubelet[2727]: I0321 12:43:14.314550 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-bg8g8" podStartSLOduration=1.752505514 podStartE2EDuration="5.314534645s" podCreationTimestamp="2025-03-21 12:43:09 +0000 UTC" firstStartedPulling="2025-03-21 12:43:10.292773691 +0000 UTC m=+16.111457245" lastFinishedPulling="2025-03-21 12:43:13.854802822 +0000 UTC m=+19.673486376" observedRunningTime="2025-03-21 12:43:14.314509285 +0000 UTC m=+20.133192839" watchObservedRunningTime="2025-03-21 12:43:14.314534645 +0000 UTC m=+20.133218159"
Mar 21 12:43:14.315178 kubelet[2727]: I0321 12:43:14.314960 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sjr9h" podStartSLOduration=5.314952214 podStartE2EDuration="5.314952214s" podCreationTimestamp="2025-03-21 12:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:43:11.304585183 +0000 UTC m=+17.123268817" watchObservedRunningTime="2025-03-21 12:43:14.314952214 +0000 UTC m=+20.133635768"
Mar 21 12:43:18.354406 kubelet[2727]: I0321 12:43:18.354131 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c1d32a72-2785-40ee-a486-84fda909eb5b" podNamespace="calico-system" podName="calico-typha-5985bb5d89-mt49l"
Mar 21 12:43:18.366529 systemd[1]: Created slice kubepods-besteffort-podc1d32a72_2785_40ee_a486_84fda909eb5b.slice - libcontainer container kubepods-besteffort-podc1d32a72_2785_40ee_a486_84fda909eb5b.slice.
Mar 21 12:43:18.406016 kubelet[2727]: I0321 12:43:18.405953 2727 topology_manager.go:215] "Topology Admit Handler" podUID="d0e70105-d5d6-47d2-b1ab-ff1160ea517d" podNamespace="calico-system" podName="calico-node-m9bmr"
Mar 21 12:43:18.412058 systemd[1]: Created slice kubepods-besteffort-podd0e70105_d5d6_47d2_b1ab_ff1160ea517d.slice - libcontainer container kubepods-besteffort-podd0e70105_d5d6_47d2_b1ab_ff1160ea517d.slice.
Mar 21 12:43:18.493989 kubelet[2727]: I0321 12:43:18.493899 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c1d32a72-2785-40ee-a486-84fda909eb5b-typha-certs\") pod \"calico-typha-5985bb5d89-mt49l\" (UID: \"c1d32a72-2785-40ee-a486-84fda909eb5b\") " pod="calico-system/calico-typha-5985bb5d89-mt49l"
Mar 21 12:43:18.493989 kubelet[2727]: I0321 12:43:18.493941 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1d32a72-2785-40ee-a486-84fda909eb5b-tigera-ca-bundle\") pod \"calico-typha-5985bb5d89-mt49l\" (UID: \"c1d32a72-2785-40ee-a486-84fda909eb5b\") " pod="calico-system/calico-typha-5985bb5d89-mt49l"
Mar 21 12:43:18.493989 kubelet[2727]: I0321 12:43:18.493989 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6st57\" (UniqueName: \"kubernetes.io/projected/c1d32a72-2785-40ee-a486-84fda909eb5b-kube-api-access-6st57\") pod \"calico-typha-5985bb5d89-mt49l\" (UID: \"c1d32a72-2785-40ee-a486-84fda909eb5b\") " pod="calico-system/calico-typha-5985bb5d89-mt49l"
Mar 21 12:43:18.518475 kubelet[2727]: I0321 12:43:18.518423 2727 topology_manager.go:215] "Topology Admit Handler" podUID="30c87969-e276-45d4-9080-209e74211884" podNamespace="calico-system" podName="csi-node-driver-cb58g"
Mar 21 12:43:18.518822 kubelet[2727]: E0321 12:43:18.518690 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb58g" podUID="30c87969-e276-45d4-9080-209e74211884"
Mar 21 12:43:18.594397 kubelet[2727]: I0321 12:43:18.594312 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-policysync\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594397 kubelet[2727]: I0321 12:43:18.594355 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-flexvol-driver-host\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594602 kubelet[2727]: I0321 12:43:18.594419 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-lib-modules\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594602 kubelet[2727]: I0321 12:43:18.594436 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-cni-bin-dir\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594602 kubelet[2727]: I0321 12:43:18.594454 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\"
(UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-cni-log-dir\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594602 kubelet[2727]: I0321 12:43:18.594588 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-node-certs\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594830 kubelet[2727]: I0321 12:43:18.594622 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-var-run-calico\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594830 kubelet[2727]: I0321 12:43:18.594663 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-var-lib-calico\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594830 kubelet[2727]: I0321 12:43:18.594696 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-tigera-ca-bundle\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.594830 kubelet[2727]: I0321 12:43:18.594732 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq4rg\" (UniqueName: \"kubernetes.io/projected/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-kube-api-access-dq4rg\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.595360 kubelet[2727]: I0321 12:43:18.595036 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-cni-net-dir\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.595360 kubelet[2727]: I0321 12:43:18.595078 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0e70105-d5d6-47d2-b1ab-ff1160ea517d-xtables-lock\") pod \"calico-node-m9bmr\" (UID: \"d0e70105-d5d6-47d2-b1ab-ff1160ea517d\") " pod="calico-system/calico-node-m9bmr"
Mar 21 12:43:18.678523 containerd[1491]: time="2025-03-21T12:43:18.678338360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5985bb5d89-mt49l,Uid:c1d32a72-2785-40ee-a486-84fda909eb5b,Namespace:calico-system,Attempt:0,}"
Mar 21 12:43:18.696930 kubelet[2727]: I0321 12:43:18.696412 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30c87969-e276-45d4-9080-209e74211884-varrun\") pod \"csi-node-driver-cb58g\" (UID: \"30c87969-e276-45d4-9080-209e74211884\") " pod="calico-system/csi-node-driver-cb58g"
Mar 21 12:43:18.696930 kubelet[2727]: I0321 12:43:18.696485 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c87969-e276-45d4-9080-209e74211884-kubelet-dir\") pod \"csi-node-driver-cb58g\" (UID: \"30c87969-e276-45d4-9080-209e74211884\") " pod="calico-system/csi-node-driver-cb58g"
Mar 21 12:43:18.696930 kubelet[2727]: I0321 12:43:18.696510 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsx2f\" (UniqueName: \"kubernetes.io/projected/30c87969-e276-45d4-9080-209e74211884-kube-api-access-lsx2f\") pod \"csi-node-driver-cb58g\" (UID: \"30c87969-e276-45d4-9080-209e74211884\") " pod="calico-system/csi-node-driver-cb58g"
Mar 21 12:43:18.696930 kubelet[2727]: I0321 12:43:18.696538 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c87969-e276-45d4-9080-209e74211884-registration-dir\") pod \"csi-node-driver-cb58g\" (UID: \"30c87969-e276-45d4-9080-209e74211884\") " pod="calico-system/csi-node-driver-cb58g"
Mar 21 12:43:18.696930 kubelet[2727]: I0321 12:43:18.696565 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c87969-e276-45d4-9080-209e74211884-socket-dir\") pod \"csi-node-driver-cb58g\" (UID: \"30c87969-e276-45d4-9080-209e74211884\") " pod="calico-system/csi-node-driver-cb58g"
Mar 21 12:43:18.703963 kubelet[2727]: E0321 12:43:18.703917 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:18.704255 kubelet[2727]: W0321 12:43:18.704214 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:18.704255 kubelet[2727]: E0321 12:43:18.704247 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 21 12:43:18.709370 kubelet[2727]: E0321 12:43:18.709351 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:18.709537 kubelet[2727]: W0321 12:43:18.709486 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:18.709537 kubelet[2727]: E0321 12:43:18.709508 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:43:18.715045 containerd[1491]: time="2025-03-21T12:43:18.715009158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m9bmr,Uid:d0e70105-d5d6-47d2-b1ab-ff1160ea517d,Namespace:calico-system,Attempt:0,}"
Mar 21 12:43:18.719691 containerd[1491]: time="2025-03-21T12:43:18.719656244Z" level=info msg="connecting to shim 87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8" address="unix:///run/containerd/s/6001ae7807a256415ff6eea192685e0b6cb3e6d583be36ffa026b5eef8383461" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:43:18.732316 containerd[1491]: time="2025-03-21T12:43:18.732279237Z" level=info msg="connecting to shim a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc" address="unix:///run/containerd/s/6850d2983ab6bdd8a323a74185e2ff3efc8fb15d234f46b1756b34865c548dd9" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:43:18.741527 systemd[1]: Started cri-containerd-87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8.scope - libcontainer container 87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8.
Mar 21 12:43:18.764522 systemd[1]: Started cri-containerd-a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc.scope - libcontainer container a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc.
Mar 21 12:43:18.787466 containerd[1491]: time="2025-03-21T12:43:18.787363375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5985bb5d89-mt49l,Uid:c1d32a72-2785-40ee-a486-84fda909eb5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8\""
Mar 21 12:43:18.788885 containerd[1491]: time="2025-03-21T12:43:18.788860443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\""
Mar 21 12:43:18.797691 kubelet[2727]: E0321 12:43:18.797666 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:18.797691 kubelet[2727]: W0321 12:43:18.797688 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:18.797847 kubelet[2727]: E0321 12:43:18.797826 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:43:18.798152 kubelet[2727]: E0321 12:43:18.798138 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:18.798152 kubelet[2727]: W0321 12:43:18.798152 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:18.798217 kubelet[2727]: E0321 12:43:18.798168 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:43:18.798417 containerd[1491]: time="2025-03-21T12:43:18.798388659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m9bmr,Uid:d0e70105-d5d6-47d2-b1ab-ff1160ea517d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\""
Mar 21 12:43:18.798552 kubelet[2727]: E0321 12:43:18.798530 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:18.798552 kubelet[2727]: W0321 12:43:18.798547 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:18.798658 kubelet[2727]: E0321 12:43:18.798565 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Mar 21 12:43:20.083018 containerd[1491]: time="2025-03-21T12:43:20.082954206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:20.083480 containerd[1491]: time="2025-03-21T12:43:20.083427974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=28363957"
Mar 21 12:43:20.084332 containerd[1491]: time="2025-03-21T12:43:20.084295909Z" level=info msg="ImageCreate event name:\"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:20.086086 containerd[1491]: time="2025-03-21T12:43:20.086053419Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:20.086643 containerd[1491]: time="2025-03-21T12:43:20.086612789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"29733706\" in 1.297719224s"
Mar 21 12:43:20.086680 containerd[1491]: time="2025-03-21T12:43:20.086642189Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:38a4e8457549414848315eae0d5ab8ecd6c51f4baaea849fe5edce714d81a999\""
Mar 21 12:43:20.087636 containerd[1491]: time="2025-03-21T12:43:20.087612366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 21 12:43:20.097822 containerd[1491]: time="2025-03-21T12:43:20.097789699Z" level=info msg="CreateContainer within sandbox \"87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 21 12:43:20.103773 containerd[1491]: time="2025-03-21T12:43:20.103721760Z" level=info msg="Container 95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:43:20.110358 containerd[1491]: time="2025-03-21T12:43:20.110301552Z" level=info msg="CreateContainer within sandbox \"87131c26bcab14c2514c861d80d3a94375413333e5a7cbbe2ce3f25781ec69b8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49\""
Mar 21 12:43:20.111461 containerd[1491]: time="2025-03-21T12:43:20.111430131Z" level=info msg="StartContainer for \"95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49\""
Mar 21 12:43:20.112472 containerd[1491]: time="2025-03-21T12:43:20.112441868Z" level=info msg="connecting to shim 95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49" address="unix:///run/containerd/s/6001ae7807a256415ff6eea192685e0b6cb3e6d583be36ffa026b5eef8383461" protocol=ttrpc version=3
Mar 21 12:43:20.132530 systemd[1]: Started cri-containerd-95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49.scope - libcontainer container 95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49.
Mar 21 12:43:20.164318 containerd[1491]: time="2025-03-21T12:43:20.163362734Z" level=info msg="StartContainer for \"95ac15466beac95f9ff68f0eefdf447a4cbb28ad26b61dd11d270bc2653b9d49\" returns successfully"
Mar 21 12:43:20.259241 kubelet[2727]: E0321 12:43:20.258970 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb58g" podUID="30c87969-e276-45d4-9080-209e74211884"
Mar 21 12:43:20.330978 kubelet[2727]: I0321 12:43:20.330747 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5985bb5d89-mt49l" podStartSLOduration=1.031859737 podStartE2EDuration="2.330730182s" podCreationTimestamp="2025-03-21 12:43:18 +0000 UTC" firstStartedPulling="2025-03-21 12:43:18.788655839 +0000 UTC m=+24.607339393" lastFinishedPulling="2025-03-21 12:43:20.087526284 +0000 UTC m=+25.906209838" observedRunningTime="2025-03-21 12:43:20.330408576 +0000 UTC m=+26.149092170" watchObservedRunningTime="2025-03-21 12:43:20.330730182 +0000 UTC m=+26.149413514"
Mar 21 12:43:20.411094 kubelet[2727]: E0321 12:43:20.411060 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:43:20.411094 kubelet[2727]: W0321 12:43:20.411084 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:43:20.411255 kubelet[2727]: E0321 12:43:20.411104 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 21 12:43:20.411421 kubelet[2727]: E0321 12:43:20.411395 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.411421 kubelet[2727]: W0321 12:43:20.411408 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.411421 kubelet[2727]: E0321 12:43:20.411418 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.411634 kubelet[2727]: E0321 12:43:20.411596 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.411634 kubelet[2727]: W0321 12:43:20.411608 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.411634 kubelet[2727]: E0321 12:43:20.411617 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.411799 kubelet[2727]: E0321 12:43:20.411776 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.411799 kubelet[2727]: W0321 12:43:20.411789 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.411848 kubelet[2727]: E0321 12:43:20.411797 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.412030 kubelet[2727]: E0321 12:43:20.412014 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.412030 kubelet[2727]: W0321 12:43:20.412026 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.412096 kubelet[2727]: E0321 12:43:20.412036 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.412226 kubelet[2727]: E0321 12:43:20.412198 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.412226 kubelet[2727]: W0321 12:43:20.412211 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.412226 kubelet[2727]: E0321 12:43:20.412226 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.412416 kubelet[2727]: E0321 12:43:20.412405 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.412448 kubelet[2727]: W0321 12:43:20.412416 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.412448 kubelet[2727]: E0321 12:43:20.412425 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.412615 kubelet[2727]: E0321 12:43:20.412592 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.412615 kubelet[2727]: W0321 12:43:20.412605 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.412667 kubelet[2727]: E0321 12:43:20.412613 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.412830 kubelet[2727]: E0321 12:43:20.412815 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.412830 kubelet[2727]: W0321 12:43:20.412826 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.412958 kubelet[2727]: E0321 12:43:20.412835 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.413034 kubelet[2727]: E0321 12:43:20.413014 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413034 kubelet[2727]: W0321 12:43:20.413029 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413083 kubelet[2727]: E0321 12:43:20.413038 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.413200 kubelet[2727]: E0321 12:43:20.413189 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413231 kubelet[2727]: W0321 12:43:20.413200 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413231 kubelet[2727]: E0321 12:43:20.413208 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.413370 kubelet[2727]: E0321 12:43:20.413358 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413413 kubelet[2727]: W0321 12:43:20.413389 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413413 kubelet[2727]: E0321 12:43:20.413400 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.413619 kubelet[2727]: E0321 12:43:20.413605 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413649 kubelet[2727]: W0321 12:43:20.413617 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413649 kubelet[2727]: E0321 12:43:20.413630 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.413783 kubelet[2727]: E0321 12:43:20.413756 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413783 kubelet[2727]: W0321 12:43:20.413771 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413783 kubelet[2727]: E0321 12:43:20.413780 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.413924 kubelet[2727]: E0321 12:43:20.413907 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.413924 kubelet[2727]: W0321 12:43:20.413923 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.413971 kubelet[2727]: E0321 12:43:20.413931 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.414204 kubelet[2727]: E0321 12:43:20.414189 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.414204 kubelet[2727]: W0321 12:43:20.414201 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.414273 kubelet[2727]: E0321 12:43:20.414211 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.414424 kubelet[2727]: E0321 12:43:20.414410 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.414424 kubelet[2727]: W0321 12:43:20.414422 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.414476 kubelet[2727]: E0321 12:43:20.414434 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.414596 kubelet[2727]: E0321 12:43:20.414584 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.414623 kubelet[2727]: W0321 12:43:20.414595 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.414623 kubelet[2727]: E0321 12:43:20.414604 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.414807 kubelet[2727]: E0321 12:43:20.414791 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.414807 kubelet[2727]: W0321 12:43:20.414805 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.414912 kubelet[2727]: E0321 12:43:20.414817 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.414999 kubelet[2727]: E0321 12:43:20.414985 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.415038 kubelet[2727]: W0321 12:43:20.414998 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.415038 kubelet[2727]: E0321 12:43:20.415020 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.415189 kubelet[2727]: E0321 12:43:20.415177 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.415189 kubelet[2727]: W0321 12:43:20.415187 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.415233 kubelet[2727]: E0321 12:43:20.415201 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.415417 kubelet[2727]: E0321 12:43:20.415403 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.415417 kubelet[2727]: W0321 12:43:20.415415 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.415478 kubelet[2727]: E0321 12:43:20.415427 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.415884 kubelet[2727]: E0321 12:43:20.415868 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.415884 kubelet[2727]: W0321 12:43:20.415883 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416030 kubelet[2727]: E0321 12:43:20.415984 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.416084 kubelet[2727]: E0321 12:43:20.416068 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.416084 kubelet[2727]: W0321 12:43:20.416080 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416223 kubelet[2727]: E0321 12:43:20.416162 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.416264 kubelet[2727]: E0321 12:43:20.416244 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.416264 kubelet[2727]: W0321 12:43:20.416251 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416264 kubelet[2727]: E0321 12:43:20.416262 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.416435 kubelet[2727]: E0321 12:43:20.416423 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.416435 kubelet[2727]: W0321 12:43:20.416433 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416490 kubelet[2727]: E0321 12:43:20.416448 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.416619 kubelet[2727]: E0321 12:43:20.416606 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.416619 kubelet[2727]: W0321 12:43:20.416617 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416662 kubelet[2727]: E0321 12:43:20.416631 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.416836 kubelet[2727]: E0321 12:43:20.416823 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.416836 kubelet[2727]: W0321 12:43:20.416834 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.416892 kubelet[2727]: E0321 12:43:20.416848 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.417180 kubelet[2727]: E0321 12:43:20.417159 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.417180 kubelet[2727]: W0321 12:43:20.417176 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.417243 kubelet[2727]: E0321 12:43:20.417191 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.417366 kubelet[2727]: E0321 12:43:20.417351 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.417411 kubelet[2727]: W0321 12:43:20.417363 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.417411 kubelet[2727]: E0321 12:43:20.417393 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.417593 kubelet[2727]: E0321 12:43:20.417581 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.417618 kubelet[2727]: W0321 12:43:20.417592 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.417618 kubelet[2727]: E0321 12:43:20.417607 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:20.417860 kubelet[2727]: E0321 12:43:20.417846 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.417860 kubelet[2727]: W0321 12:43:20.417859 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.417909 kubelet[2727]: E0321 12:43:20.417877 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:20.418079 kubelet[2727]: E0321 12:43:20.418065 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:20.418079 kubelet[2727]: W0321 12:43:20.418076 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:20.418137 kubelet[2727]: E0321 12:43:20.418085 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.317841 kubelet[2727]: I0321 12:43:21.317809 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:43:21.322712 kubelet[2727]: E0321 12:43:21.322603 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.322712 kubelet[2727]: W0321 12:43:21.322641 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.322712 kubelet[2727]: E0321 12:43:21.322658 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.322961 kubelet[2727]: E0321 12:43:21.322947 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.322993 kubelet[2727]: W0321 12:43:21.322961 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.322993 kubelet[2727]: E0321 12:43:21.322972 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.323606 kubelet[2727]: E0321 12:43:21.323582 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.323647 kubelet[2727]: W0321 12:43:21.323607 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.323647 kubelet[2727]: E0321 12:43:21.323619 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.323852 kubelet[2727]: E0321 12:43:21.323838 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.323882 kubelet[2727]: W0321 12:43:21.323852 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.323882 kubelet[2727]: E0321 12:43:21.323863 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.324072 kubelet[2727]: E0321 12:43:21.324058 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.324072 kubelet[2727]: W0321 12:43:21.324070 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.324151 kubelet[2727]: E0321 12:43:21.324079 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.324265 kubelet[2727]: E0321 12:43:21.324241 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.324265 kubelet[2727]: W0321 12:43:21.324254 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.324265 kubelet[2727]: E0321 12:43:21.324263 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.324426 kubelet[2727]: E0321 12:43:21.324414 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.324426 kubelet[2727]: W0321 12:43:21.324425 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.324495 kubelet[2727]: E0321 12:43:21.324439 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.325458 kubelet[2727]: E0321 12:43:21.325435 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.325458 kubelet[2727]: W0321 12:43:21.325455 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.325510 kubelet[2727]: E0321 12:43:21.325471 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.325787 kubelet[2727]: E0321 12:43:21.325765 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.325787 kubelet[2727]: W0321 12:43:21.325779 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.325866 kubelet[2727]: E0321 12:43:21.325791 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.326018 kubelet[2727]: E0321 12:43:21.326002 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.326049 kubelet[2727]: W0321 12:43:21.326013 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.326893 kubelet[2727]: E0321 12:43:21.326031 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.327161 kubelet[2727]: E0321 12:43:21.327147 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.327194 kubelet[2727]: W0321 12:43:21.327162 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.327222 kubelet[2727]: E0321 12:43:21.327175 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.327428 kubelet[2727]: E0321 12:43:21.327415 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.327456 kubelet[2727]: W0321 12:43:21.327429 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.327456 kubelet[2727]: E0321 12:43:21.327451 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.327947 kubelet[2727]: E0321 12:43:21.327931 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.327983 kubelet[2727]: W0321 12:43:21.327946 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.327983 kubelet[2727]: E0321 12:43:21.327977 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.328229 kubelet[2727]: E0321 12:43:21.328215 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.328229 kubelet[2727]: W0321 12:43:21.328228 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.328293 kubelet[2727]: E0321 12:43:21.328239 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.328440 kubelet[2727]: E0321 12:43:21.328427 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.328440 kubelet[2727]: W0321 12:43:21.328440 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.328504 kubelet[2727]: E0321 12:43:21.328449 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.423049 kubelet[2727]: E0321 12:43:21.422975 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.423049 kubelet[2727]: W0321 12:43:21.422998 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.423049 kubelet[2727]: E0321 12:43:21.423021 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.423431 kubelet[2727]: E0321 12:43:21.423404 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.423493 kubelet[2727]: W0321 12:43:21.423433 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.423493 kubelet[2727]: E0321 12:43:21.423453 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.423639 kubelet[2727]: E0321 12:43:21.423613 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.423639 kubelet[2727]: W0321 12:43:21.423626 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.423639 kubelet[2727]: E0321 12:43:21.423637 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.424006 kubelet[2727]: E0321 12:43:21.423991 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.424006 kubelet[2727]: W0321 12:43:21.424005 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.424096 kubelet[2727]: E0321 12:43:21.424037 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.424349 kubelet[2727]: E0321 12:43:21.424336 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.424412 kubelet[2727]: W0321 12:43:21.424351 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.424575 kubelet[2727]: E0321 12:43:21.424557 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.424669 containerd[1491]: time="2025-03-21T12:43:21.424635270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:21.424920 kubelet[2727]: E0321 12:43:21.424804 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.424920 kubelet[2727]: W0321 12:43:21.424814 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.424920 kubelet[2727]: E0321 12:43:21.424866 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.425086 kubelet[2727]: E0321 12:43:21.425068 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.425086 kubelet[2727]: W0321 12:43:21.425085 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.425151 kubelet[2727]: E0321 12:43:21.425097 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.425364 kubelet[2727]: E0321 12:43:21.425347 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.425450 kubelet[2727]: W0321 12:43:21.425432 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.425483 kubelet[2727]: E0321 12:43:21.425457 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.425647 containerd[1491]: time="2025-03-21T12:43:21.425603686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5120152" Mar 21 12:43:21.425922 kubelet[2727]: E0321 12:43:21.425904 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.425922 kubelet[2727]: W0321 12:43:21.425920 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.426041 kubelet[2727]: E0321 12:43:21.426004 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.426160 kubelet[2727]: E0321 12:43:21.426143 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.426242 kubelet[2727]: W0321 12:43:21.426159 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.426242 kubelet[2727]: E0321 12:43:21.426199 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.426348 kubelet[2727]: E0321 12:43:21.426333 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.426348 kubelet[2727]: W0321 12:43:21.426346 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.426692 containerd[1491]: time="2025-03-21T12:43:21.426631502Z" level=info msg="ImageCreate event name:\"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:21.426903 kubelet[2727]: E0321 12:43:21.426882 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.427370 kubelet[2727]: E0321 12:43:21.427352 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.427370 kubelet[2727]: W0321 12:43:21.427368 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.427444 kubelet[2727]: E0321 12:43:21.427398 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.427600 kubelet[2727]: E0321 12:43:21.427586 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.427600 kubelet[2727]: W0321 12:43:21.427598 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.428307 kubelet[2727]: E0321 12:43:21.428236 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.428656 kubelet[2727]: E0321 12:43:21.428630 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.428656 kubelet[2727]: W0321 12:43:21.428648 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.428725 kubelet[2727]: E0321 12:43:21.428664 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.428799 containerd[1491]: time="2025-03-21T12:43:21.428756537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:21.428872 kubelet[2727]: E0321 12:43:21.428860 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.428898 kubelet[2727]: W0321 12:43:21.428872 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.428898 kubelet[2727]: E0321 12:43:21.428883 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.429080 kubelet[2727]: E0321 12:43:21.429063 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.429080 kubelet[2727]: W0321 12:43:21.429077 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.429144 kubelet[2727]: E0321 12:43:21.429088 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.429528 kubelet[2727]: E0321 12:43:21.429512 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.429528 kubelet[2727]: W0321 12:43:21.429525 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.429590 kubelet[2727]: E0321 12:43:21.429539 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:43:21.429719 kubelet[2727]: E0321 12:43:21.429706 2727 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:43:21.429719 kubelet[2727]: W0321 12:43:21.429717 2727 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:43:21.429773 kubelet[2727]: E0321 12:43:21.429726 2727 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:43:21.430531 containerd[1491]: time="2025-03-21T12:43:21.430485765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6489869\" in 1.342841119s" Mar 21 12:43:21.430572 containerd[1491]: time="2025-03-21T12:43:21.430533286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:bf0e51f0111c4e6f7bc448c15934e73123805f3c5e66e455c7eb7392854e0921\"" Mar 21 12:43:21.434289 containerd[1491]: time="2025-03-21T12:43:21.434187226Z" level=info msg="CreateContainer within sandbox \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 21 12:43:21.453562 containerd[1491]: time="2025-03-21T12:43:21.452568166Z" level=info msg="Container 40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:21.462871 containerd[1491]: 
time="2025-03-21T12:43:21.462838334Z" level=info msg="CreateContainer within sandbox \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\"" Mar 21 12:43:21.463423 containerd[1491]: time="2025-03-21T12:43:21.463397623Z" level=info msg="StartContainer for \"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\"" Mar 21 12:43:21.465749 containerd[1491]: time="2025-03-21T12:43:21.465718861Z" level=info msg="connecting to shim 40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c" address="unix:///run/containerd/s/6850d2983ab6bdd8a323a74185e2ff3efc8fb15d234f46b1756b34865c548dd9" protocol=ttrpc version=3 Mar 21 12:43:21.485877 systemd[1]: Started cri-containerd-40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c.scope - libcontainer container 40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c. Mar 21 12:43:21.538421 containerd[1491]: time="2025-03-21T12:43:21.537725078Z" level=info msg="StartContainer for \"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\" returns successfully" Mar 21 12:43:21.556491 systemd[1]: cri-containerd-40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c.scope: Deactivated successfully. 
Mar 21 12:43:21.579543 containerd[1491]: time="2025-03-21T12:43:21.579369559Z" level=info msg="received exit event container_id:\"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\" id:\"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\" pid:3377 exited_at:{seconds:1742561001 nanos:571425029}" Mar 21 12:43:21.579543 containerd[1491]: time="2025-03-21T12:43:21.579441120Z" level=info msg="TaskExit event in podsandbox handler container_id:\"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\" id:\"40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c\" pid:3377 exited_at:{seconds:1742561001 nanos:571425029}" Mar 21 12:43:21.626197 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40450ef7ee3694e08d1d94cc98e57b3b531d824c017704feae1dcd3ff2577a0c-rootfs.mount: Deactivated successfully. Mar 21 12:43:22.258702 kubelet[2727]: E0321 12:43:22.258657 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb58g" podUID="30c87969-e276-45d4-9080-209e74211884" Mar 21 12:43:22.322804 containerd[1491]: time="2025-03-21T12:43:22.322758227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 21 12:43:23.165332 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572). Mar 21 12:43:23.220846 sshd[3417]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:23.221914 sshd-session[3417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:23.225841 systemd-logind[1464]: New session 8 of user core. Mar 21 12:43:23.231505 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 21 12:43:23.360761 sshd[3419]: Connection closed by 10.0.0.1 port 36572 Mar 21 12:43:23.361098 sshd-session[3417]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:23.363988 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:36572.service: Deactivated successfully. Mar 21 12:43:23.365804 systemd[1]: session-8.scope: Deactivated successfully. Mar 21 12:43:23.368053 systemd-logind[1464]: Session 8 logged out. Waiting for processes to exit. Mar 21 12:43:23.369517 systemd-logind[1464]: Removed session 8. Mar 21 12:43:24.258352 kubelet[2727]: E0321 12:43:24.258186 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cb58g" podUID="30c87969-e276-45d4-9080-209e74211884" Mar 21 12:43:25.056483 containerd[1491]: time="2025-03-21T12:43:25.056436406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:25.057601 containerd[1491]: time="2025-03-21T12:43:25.057543462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=91227396" Mar 21 12:43:25.058406 containerd[1491]: time="2025-03-21T12:43:25.058203151Z" level=info msg="ImageCreate event name:\"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:25.060549 containerd[1491]: time="2025-03-21T12:43:25.060515664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:25.061500 containerd[1491]: time="2025-03-21T12:43:25.061459957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image 
id \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"92597153\" in 2.73865917s" Mar 21 12:43:25.061500 containerd[1491]: time="2025-03-21T12:43:25.061486117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:57c2b1dcdc0045be5220c7237f900bce5f47c006714073859cf102b0eaa65290\"" Mar 21 12:43:25.063827 containerd[1491]: time="2025-03-21T12:43:25.063777629Z" level=info msg="CreateContainer within sandbox \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 21 12:43:25.073318 containerd[1491]: time="2025-03-21T12:43:25.073264643Z" level=info msg="Container a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:25.080425 containerd[1491]: time="2025-03-21T12:43:25.080360783Z" level=info msg="CreateContainer within sandbox \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\"" Mar 21 12:43:25.081092 containerd[1491]: time="2025-03-21T12:43:25.081047312Z" level=info msg="StartContainer for \"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\"" Mar 21 12:43:25.082513 containerd[1491]: time="2025-03-21T12:43:25.082471412Z" level=info msg="connecting to shim a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8" address="unix:///run/containerd/s/6850d2983ab6bdd8a323a74185e2ff3efc8fb15d234f46b1756b34865c548dd9" protocol=ttrpc version=3 Mar 21 12:43:25.104578 systemd[1]: Started cri-containerd-a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8.scope - libcontainer container 
a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8. Mar 21 12:43:25.184972 containerd[1491]: time="2025-03-21T12:43:25.184925973Z" level=info msg="StartContainer for \"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\" returns successfully" Mar 21 12:43:25.689779 systemd[1]: cri-containerd-a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8.scope: Deactivated successfully. Mar 21 12:43:25.690697 containerd[1491]: time="2025-03-21T12:43:25.690497120Z" level=info msg="received exit event container_id:\"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\" id:\"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\" pid:3452 exited_at:{seconds:1742561005 nanos:690218036}" Mar 21 12:43:25.690697 containerd[1491]: time="2025-03-21T12:43:25.690584321Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\" id:\"a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8\" pid:3452 exited_at:{seconds:1742561005 nanos:690218036}" Mar 21 12:43:25.690682 systemd[1]: cri-containerd-a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8.scope: Consumed 455ms CPU time, 162.3M memory peak, 48K read from disk, 150.3M written to disk. Mar 21 12:43:25.707133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7c5870896ed097bb53581ee9372a42781224d4749e086927b44e8a13d54caa8-rootfs.mount: Deactivated successfully. 
Mar 21 12:43:25.764729 kubelet[2727]: I0321 12:43:25.764690 2727 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 21 12:43:25.786383 kubelet[2727]: I0321 12:43:25.786249 2727 topology_manager.go:215] "Topology Admit Handler" podUID="bca733af-36c6-4d5a-82d9-681d834b79f7" podNamespace="calico-apiserver" podName="calico-apiserver-59dbb779f6-cvkdq" Mar 21 12:43:25.788147 kubelet[2727]: I0321 12:43:25.788110 2727 topology_manager.go:215] "Topology Admit Handler" podUID="d20eb502-b39f-4baa-9001-7d6ac293f12b" podNamespace="calico-apiserver" podName="calico-apiserver-59dbb779f6-fnbzh" Mar 21 12:43:25.788846 kubelet[2727]: I0321 12:43:25.788809 2727 topology_manager.go:215] "Topology Admit Handler" podUID="fdd3b8e6-9992-4c39-aaca-050b7a719dd1" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9bz5c" Mar 21 12:43:25.789408 kubelet[2727]: I0321 12:43:25.788934 2727 topology_manager.go:215] "Topology Admit Handler" podUID="67f9d2d6-04e7-4fe6-87c9-0953eade1288" podNamespace="calico-system" podName="calico-kube-controllers-74b8c688b5-hg74d" Mar 21 12:43:25.789408 kubelet[2727]: I0321 12:43:25.789347 2727 topology_manager.go:215] "Topology Admit Handler" podUID="c43df681-4b82-49c4-9251-bf7c3ffee013" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pjqw9" Mar 21 12:43:25.796791 systemd[1]: Created slice kubepods-besteffort-podd20eb502_b39f_4baa_9001_7d6ac293f12b.slice - libcontainer container kubepods-besteffort-podd20eb502_b39f_4baa_9001_7d6ac293f12b.slice. Mar 21 12:43:25.802514 systemd[1]: Created slice kubepods-besteffort-podbca733af_36c6_4d5a_82d9_681d834b79f7.slice - libcontainer container kubepods-besteffort-podbca733af_36c6_4d5a_82d9_681d834b79f7.slice. Mar 21 12:43:25.807757 systemd[1]: Created slice kubepods-besteffort-pod67f9d2d6_04e7_4fe6_87c9_0953eade1288.slice - libcontainer container kubepods-besteffort-pod67f9d2d6_04e7_4fe6_87c9_0953eade1288.slice. 
Mar 21 12:43:25.812055 systemd[1]: Created slice kubepods-burstable-podfdd3b8e6_9992_4c39_aaca_050b7a719dd1.slice - libcontainer container kubepods-burstable-podfdd3b8e6_9992_4c39_aaca_050b7a719dd1.slice. Mar 21 12:43:25.817292 systemd[1]: Created slice kubepods-burstable-podc43df681_4b82_49c4_9251_bf7c3ffee013.slice - libcontainer container kubepods-burstable-podc43df681_4b82_49c4_9251_bf7c3ffee013.slice. Mar 21 12:43:25.851597 kubelet[2727]: I0321 12:43:25.851553 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bca733af-36c6-4d5a-82d9-681d834b79f7-calico-apiserver-certs\") pod \"calico-apiserver-59dbb779f6-cvkdq\" (UID: \"bca733af-36c6-4d5a-82d9-681d834b79f7\") " pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" Mar 21 12:43:25.851597 kubelet[2727]: I0321 12:43:25.851598 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bs7\" (UniqueName: \"kubernetes.io/projected/fdd3b8e6-9992-4c39-aaca-050b7a719dd1-kube-api-access-b8bs7\") pod \"coredns-7db6d8ff4d-9bz5c\" (UID: \"fdd3b8e6-9992-4c39-aaca-050b7a719dd1\") " pod="kube-system/coredns-7db6d8ff4d-9bz5c" Mar 21 12:43:25.852288 kubelet[2727]: I0321 12:43:25.851625 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gtc6\" (UniqueName: \"kubernetes.io/projected/d20eb502-b39f-4baa-9001-7d6ac293f12b-kube-api-access-5gtc6\") pod \"calico-apiserver-59dbb779f6-fnbzh\" (UID: \"d20eb502-b39f-4baa-9001-7d6ac293f12b\") " pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" Mar 21 12:43:25.852288 kubelet[2727]: I0321 12:43:25.851646 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d889t\" (UniqueName: \"kubernetes.io/projected/c43df681-4b82-49c4-9251-bf7c3ffee013-kube-api-access-d889t\") pod 
\"coredns-7db6d8ff4d-pjqw9\" (UID: \"c43df681-4b82-49c4-9251-bf7c3ffee013\") " pod="kube-system/coredns-7db6d8ff4d-pjqw9" Mar 21 12:43:25.852288 kubelet[2727]: I0321 12:43:25.851725 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d20eb502-b39f-4baa-9001-7d6ac293f12b-calico-apiserver-certs\") pod \"calico-apiserver-59dbb779f6-fnbzh\" (UID: \"d20eb502-b39f-4baa-9001-7d6ac293f12b\") " pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" Mar 21 12:43:25.852288 kubelet[2727]: I0321 12:43:25.851771 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdd3b8e6-9992-4c39-aaca-050b7a719dd1-config-volume\") pod \"coredns-7db6d8ff4d-9bz5c\" (UID: \"fdd3b8e6-9992-4c39-aaca-050b7a719dd1\") " pod="kube-system/coredns-7db6d8ff4d-9bz5c" Mar 21 12:43:25.852288 kubelet[2727]: I0321 12:43:25.851792 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c43df681-4b82-49c4-9251-bf7c3ffee013-config-volume\") pod \"coredns-7db6d8ff4d-pjqw9\" (UID: \"c43df681-4b82-49c4-9251-bf7c3ffee013\") " pod="kube-system/coredns-7db6d8ff4d-pjqw9" Mar 21 12:43:25.852405 kubelet[2727]: I0321 12:43:25.851812 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvh2\" (UniqueName: \"kubernetes.io/projected/bca733af-36c6-4d5a-82d9-681d834b79f7-kube-api-access-ngvh2\") pod \"calico-apiserver-59dbb779f6-cvkdq\" (UID: \"bca733af-36c6-4d5a-82d9-681d834b79f7\") " pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" Mar 21 12:43:25.852405 kubelet[2727]: I0321 12:43:25.851830 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5bf\" (UniqueName: 
\"kubernetes.io/projected/67f9d2d6-04e7-4fe6-87c9-0953eade1288-kube-api-access-4v5bf\") pod \"calico-kube-controllers-74b8c688b5-hg74d\" (UID: \"67f9d2d6-04e7-4fe6-87c9-0953eade1288\") " pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" Mar 21 12:43:25.852405 kubelet[2727]: I0321 12:43:25.851851 2727 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67f9d2d6-04e7-4fe6-87c9-0953eade1288-tigera-ca-bundle\") pod \"calico-kube-controllers-74b8c688b5-hg74d\" (UID: \"67f9d2d6-04e7-4fe6-87c9-0953eade1288\") " pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" Mar 21 12:43:26.104922 containerd[1491]: time="2025-03-21T12:43:26.104845255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-fnbzh,Uid:d20eb502-b39f-4baa-9001-7d6ac293f12b,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:43:26.105412 containerd[1491]: time="2025-03-21T12:43:26.105087818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-cvkdq,Uid:bca733af-36c6-4d5a-82d9-681d834b79f7,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:43:26.111043 containerd[1491]: time="2025-03-21T12:43:26.111009578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b8c688b5-hg74d,Uid:67f9d2d6-04e7-4fe6-87c9-0953eade1288,Namespace:calico-system,Attempt:0,}" Mar 21 12:43:26.116899 containerd[1491]: time="2025-03-21T12:43:26.116539374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9bz5c,Uid:fdd3b8e6-9992-4c39-aaca-050b7a719dd1,Namespace:kube-system,Attempt:0,}" Mar 21 12:43:26.121314 containerd[1491]: time="2025-03-21T12:43:26.121286118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjqw9,Uid:c43df681-4b82-49c4-9251-bf7c3ffee013,Namespace:kube-system,Attempt:0,}" Mar 21 12:43:26.268643 systemd[1]: Created slice 
kubepods-besteffort-pod30c87969_e276_45d4_9080_209e74211884.slice - libcontainer container kubepods-besteffort-pod30c87969_e276_45d4_9080_209e74211884.slice. Mar 21 12:43:26.272149 containerd[1491]: time="2025-03-21T12:43:26.272107485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb58g,Uid:30c87969-e276-45d4-9080-209e74211884,Namespace:calico-system,Attempt:0,}" Mar 21 12:43:26.382105 containerd[1491]: time="2025-03-21T12:43:26.381988017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 21 12:43:26.439583 containerd[1491]: time="2025-03-21T12:43:26.439537678Z" level=error msg="Failed to destroy network for sandbox \"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.441438 containerd[1491]: time="2025-03-21T12:43:26.441324862Z" level=error msg="Failed to destroy network for sandbox \"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.443784 containerd[1491]: time="2025-03-21T12:43:26.442865123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjqw9,Uid:c43df681-4b82-49c4-9251-bf7c3ffee013,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.444237 containerd[1491]: time="2025-03-21T12:43:26.444181821Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-cvkdq,Uid:bca733af-36c6-4d5a-82d9-681d834b79f7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.448996 kubelet[2727]: E0321 12:43:26.448842 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.449115 kubelet[2727]: E0321 12:43:26.448915 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.449115 kubelet[2727]: E0321 12:43:26.449063 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pjqw9" Mar 21 12:43:26.449115 kubelet[2727]: E0321 12:43:26.449106 2727 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pjqw9" Mar 21 12:43:26.449293 kubelet[2727]: E0321 12:43:26.449154 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pjqw9_kube-system(c43df681-4b82-49c4-9251-bf7c3ffee013)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pjqw9_kube-system(c43df681-4b82-49c4-9251-bf7c3ffee013)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9386dc98fafa11eb1b891e9b47cbdf36bde4208f085a121eed86b0b4730dc959\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pjqw9" podUID="c43df681-4b82-49c4-9251-bf7c3ffee013" Mar 21 12:43:26.449504 kubelet[2727]: E0321 12:43:26.449479 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" Mar 21 12:43:26.449549 kubelet[2727]: E0321 12:43:26.449508 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" Mar 21 12:43:26.449606 kubelet[2727]: E0321 12:43:26.449543 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59dbb779f6-cvkdq_calico-apiserver(bca733af-36c6-4d5a-82d9-681d834b79f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59dbb779f6-cvkdq_calico-apiserver(bca733af-36c6-4d5a-82d9-681d834b79f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20634dab2e8da9363e8f5b50a7cce2fd775d2acfaa413d883297cc5d9405c93b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" podUID="bca733af-36c6-4d5a-82d9-681d834b79f7" Mar 21 12:43:26.452640 containerd[1491]: time="2025-03-21T12:43:26.452516494Z" level=error msg="Failed to destroy network for sandbox \"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.453416 containerd[1491]: time="2025-03-21T12:43:26.453285624Z" level=error msg="Failed to destroy network for sandbox \"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.453956 containerd[1491]: time="2025-03-21T12:43:26.453816631Z" level=error msg="Failed to destroy network for sandbox 
\"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.454152 containerd[1491]: time="2025-03-21T12:43:26.454055955Z" level=error msg="Failed to destroy network for sandbox \"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.454842 containerd[1491]: time="2025-03-21T12:43:26.454614642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb58g,Uid:30c87969-e276-45d4-9080-209e74211884,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.454965 kubelet[2727]: E0321 12:43:26.454793 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.454965 kubelet[2727]: E0321 12:43:26.454839 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cb58g" Mar 21 12:43:26.454965 kubelet[2727]: E0321 12:43:26.454854 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cb58g" Mar 21 12:43:26.455058 kubelet[2727]: E0321 12:43:26.454925 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cb58g_calico-system(30c87969-e276-45d4-9080-209e74211884)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cb58g_calico-system(30c87969-e276-45d4-9080-209e74211884)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87fe622648846dee842b8f3329dd547c6b9242a11ada032088a205ff4f03c27a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cb58g" podUID="30c87969-e276-45d4-9080-209e74211884" Mar 21 12:43:26.456554 containerd[1491]: time="2025-03-21T12:43:26.455841379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-fnbzh,Uid:d20eb502-b39f-4baa-9001-7d6ac293f12b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.456629 kubelet[2727]: E0321 12:43:26.455990 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.456629 kubelet[2727]: E0321 12:43:26.456021 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" Mar 21 12:43:26.456629 kubelet[2727]: E0321 12:43:26.456035 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" Mar 21 12:43:26.456734 kubelet[2727]: E0321 12:43:26.456081 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-59dbb779f6-fnbzh_calico-apiserver(d20eb502-b39f-4baa-9001-7d6ac293f12b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-59dbb779f6-fnbzh_calico-apiserver(d20eb502-b39f-4baa-9001-7d6ac293f12b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"47cbf4cf9f161d8f55f8ba65be7c9be4a8a1202f13b90052a7522afd4b4440ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" podUID="d20eb502-b39f-4baa-9001-7d6ac293f12b" Mar 21 12:43:26.457285 containerd[1491]: time="2025-03-21T12:43:26.457239598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9bz5c,Uid:fdd3b8e6-9992-4c39-aaca-050b7a719dd1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.457859 kubelet[2727]: E0321 12:43:26.457407 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.457859 kubelet[2727]: E0321 12:43:26.457447 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9bz5c" Mar 21 12:43:26.457859 kubelet[2727]: E0321 12:43:26.457465 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9bz5c" Mar 21 12:43:26.457968 kubelet[2727]: E0321 12:43:26.457492 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9bz5c_kube-system(fdd3b8e6-9992-4c39-aaca-050b7a719dd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9bz5c_kube-system(fdd3b8e6-9992-4c39-aaca-050b7a719dd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02a1b60952b5c1d56d952401ecfe1ca4317e7f0e3f6d9900b171298f5fd8bbec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9bz5c" podUID="fdd3b8e6-9992-4c39-aaca-050b7a719dd1" Mar 21 12:43:26.458828 containerd[1491]: time="2025-03-21T12:43:26.458767579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b8c688b5-hg74d,Uid:67f9d2d6-04e7-4fe6-87c9-0953eade1288,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.459819 kubelet[2727]: E0321 12:43:26.458939 2727 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:43:26.459819 kubelet[2727]: E0321 12:43:26.458973 2727 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" Mar 21 12:43:26.459819 kubelet[2727]: E0321 12:43:26.458989 2727 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" Mar 21 12:43:26.459940 kubelet[2727]: E0321 12:43:26.459016 2727 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b8c688b5-hg74d_calico-system(67f9d2d6-04e7-4fe6-87c9-0953eade1288)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b8c688b5-hg74d_calico-system(67f9d2d6-04e7-4fe6-87c9-0953eade1288)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17fdff08d6dcf6ae526ebce809a82e5a2a1f0e7c04f03e3c172ffa011dbe638a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" podUID="67f9d2d6-04e7-4fe6-87c9-0953eade1288" Mar 21 12:43:27.073394 systemd[1]: run-netns-cni\x2da55d1dce\x2d9a74\x2d3444\x2dd26b\x2d66a3361dcde4.mount: Deactivated successfully. Mar 21 12:43:27.073494 systemd[1]: run-netns-cni\x2d6bf52813\x2d2e2a\x2d1432\x2d48c3\x2d43e29313ef6b.mount: Deactivated successfully. Mar 21 12:43:27.073541 systemd[1]: run-netns-cni\x2d8fe249f2\x2d89e4\x2df35e\x2de71a\x2de51932802c18.mount: Deactivated successfully. Mar 21 12:43:27.073585 systemd[1]: run-netns-cni\x2d6ec44b5f\x2d3549\x2d3260\x2db923\x2df700b159ed9e.mount: Deactivated successfully. Mar 21 12:43:27.073630 systemd[1]: run-netns-cni\x2d2aa2a50f\x2defa3\x2d5188\x2d7798\x2d68dc4a4d6294.mount: Deactivated successfully. Mar 21 12:43:27.970958 kubelet[2727]: I0321 12:43:27.970907 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:43:28.374215 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:36574.service - OpenSSH per-connection server daemon (10.0.0.1:36574). Mar 21 12:43:28.433958 sshd[3723]: Accepted publickey for core from 10.0.0.1 port 36574 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:28.435073 sshd-session[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:28.440005 systemd-logind[1464]: New session 9 of user core. Mar 21 12:43:28.446594 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 21 12:43:28.571488 sshd[3725]: Connection closed by 10.0.0.1 port 36574 Mar 21 12:43:28.571837 sshd-session[3723]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:28.575574 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:36574.service: Deactivated successfully. Mar 21 12:43:28.579123 systemd[1]: session-9.scope: Deactivated successfully. Mar 21 12:43:28.579977 systemd-logind[1464]: Session 9 logged out. Waiting for processes to exit. 
Mar 21 12:43:28.581186 systemd-logind[1464]: Removed session 9. Mar 21 12:43:29.911756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127056610.mount: Deactivated successfully. Mar 21 12:43:30.115300 containerd[1491]: time="2025-03-21T12:43:30.115222606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:30.116757 containerd[1491]: time="2025-03-21T12:43:30.116707864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=137086024" Mar 21 12:43:30.117568 containerd[1491]: time="2025-03-21T12:43:30.117528754Z" level=info msg="ImageCreate event name:\"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:30.119269 containerd[1491]: time="2025-03-21T12:43:30.119230774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:30.119780 containerd[1491]: time="2025-03-21T12:43:30.119742260Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"137085886\" in 3.737706603s" Mar 21 12:43:30.119780 containerd[1491]: time="2025-03-21T12:43:30.119775060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:8fd1983cc851d15f05a37eb3ff85b0cde86869beec7630d2940c86fc7b98d0c1\"" Mar 21 12:43:30.127235 containerd[1491]: time="2025-03-21T12:43:30.127188429Z" level=info msg="CreateContainer within sandbox 
\"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 21 12:43:30.137232 containerd[1491]: time="2025-03-21T12:43:30.137186908Z" level=info msg="Container ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:30.147166 containerd[1491]: time="2025-03-21T12:43:30.147129666Z" level=info msg="CreateContainer within sandbox \"a33b273509433ff02d116d8a2ab3a613d899af1c5ec103cc5fdce3135f18fcbc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\"" Mar 21 12:43:30.147680 containerd[1491]: time="2025-03-21T12:43:30.147537351Z" level=info msg="StartContainer for \"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\"" Mar 21 12:43:30.148940 containerd[1491]: time="2025-03-21T12:43:30.148912328Z" level=info msg="connecting to shim ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029" address="unix:///run/containerd/s/6850d2983ab6bdd8a323a74185e2ff3efc8fb15d234f46b1756b34865c548dd9" protocol=ttrpc version=3 Mar 21 12:43:30.169720 systemd[1]: Started cri-containerd-ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029.scope - libcontainer container ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029. Mar 21 12:43:30.203799 containerd[1491]: time="2025-03-21T12:43:30.203764021Z" level=info msg="StartContainer for \"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\" returns successfully" Mar 21 12:43:30.365837 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 21 12:43:30.365950 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 21 12:43:30.644899 containerd[1491]: time="2025-03-21T12:43:30.644846439Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\" id:\"9c0ffa69b41484d1103f15d5e66ca7eacf3cbaf60e35dc0e17650b3449e50181\" pid:3815 exit_status:1 exited_at:{seconds:1742561010 nanos:644549315}" Mar 21 12:43:30.718657 containerd[1491]: time="2025-03-21T12:43:30.718603958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\" id:\"c884f8b328c6aeb3caeff7817dce6fcd1d65d986ced0021f211637220d02facf\" pid:3841 exit_status:1 exited_at:{seconds:1742561010 nanos:718305394}" Mar 21 12:43:31.429444 containerd[1491]: time="2025-03-21T12:43:31.429362478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\" id:\"e601dc848b5ac9faa9ed8bc7948b1506def9e047f78fd021c8a9329d43ae44e9\" pid:3865 exit_status:1 exited_at:{seconds:1742561011 nanos:429099875}" Mar 21 12:43:31.757430 kernel: bpftool[4010]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 21 12:43:31.909670 systemd-networkd[1406]: vxlan.calico: Link UP Mar 21 12:43:31.909677 systemd-networkd[1406]: vxlan.calico: Gained carrier Mar 21 12:43:32.451327 containerd[1491]: time="2025-03-21T12:43:32.451280431Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca2c046bdf68a2636db5ab8571dce44ed2f05955d3b2f934593157699d90c029\" id:\"1a9893a5ace20ce3c02856177b0a672505031283b34145672e214c70581e3c06\" pid:4094 exit_status:1 exited_at:{seconds:1742561012 nanos:450954228}" Mar 21 12:43:33.587416 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:55158.service - OpenSSH per-connection server daemon (10.0.0.1:55158). 
Mar 21 12:43:33.644959 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 55158 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:33.646402 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:33.650431 systemd-logind[1464]: New session 10 of user core. Mar 21 12:43:33.665540 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 21 12:43:33.818235 sshd[4118]: Connection closed by 10.0.0.1 port 55158 Mar 21 12:43:33.819703 sshd-session[4114]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:33.828529 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:55158.service: Deactivated successfully. Mar 21 12:43:33.829987 systemd[1]: session-10.scope: Deactivated successfully. Mar 21 12:43:33.830736 systemd-logind[1464]: Session 10 logged out. Waiting for processes to exit. Mar 21 12:43:33.832984 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:55174.service - OpenSSH per-connection server daemon (10.0.0.1:55174). Mar 21 12:43:33.834137 systemd-logind[1464]: Removed session 10. Mar 21 12:43:33.885017 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 55174 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:33.885542 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:33.889457 systemd-logind[1464]: New session 11 of user core. Mar 21 12:43:33.896511 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 21 12:43:33.925540 systemd-networkd[1406]: vxlan.calico: Gained IPv6LL Mar 21 12:43:34.040024 sshd[4134]: Connection closed by 10.0.0.1 port 55174 Mar 21 12:43:34.040323 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:34.050722 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:55178.service - OpenSSH per-connection server daemon (10.0.0.1:55178). 
Mar 21 12:43:34.051157 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:55174.service: Deactivated successfully. Mar 21 12:43:34.054964 systemd[1]: session-11.scope: Deactivated successfully. Mar 21 12:43:34.059430 systemd-logind[1464]: Session 11 logged out. Waiting for processes to exit. Mar 21 12:43:34.065356 systemd-logind[1464]: Removed session 11. Mar 21 12:43:34.104586 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 55178 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:34.105821 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:34.110238 systemd-logind[1464]: New session 12 of user core. Mar 21 12:43:34.119506 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 21 12:43:34.240175 sshd[4148]: Connection closed by 10.0.0.1 port 55178 Mar 21 12:43:34.240459 sshd-session[4143]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:34.243749 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:55178.service: Deactivated successfully. Mar 21 12:43:34.245413 systemd[1]: session-12.scope: Deactivated successfully. Mar 21 12:43:34.246113 systemd-logind[1464]: Session 12 logged out. Waiting for processes to exit. Mar 21 12:43:34.246879 systemd-logind[1464]: Removed session 12. 
Mar 21 12:43:37.261635 containerd[1491]: time="2025-03-21T12:43:37.261596159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-cvkdq,Uid:bca733af-36c6-4d5a-82d9-681d834b79f7,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:43:37.519568 systemd-networkd[1406]: cali85f9ffd265f: Link UP Mar 21 12:43:37.520187 systemd-networkd[1406]: cali85f9ffd265f: Gained carrier Mar 21 12:43:37.529446 kubelet[2727]: I0321 12:43:37.529124 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m9bmr" podStartSLOduration=8.208291974 podStartE2EDuration="19.529106921s" podCreationTimestamp="2025-03-21 12:43:18 +0000 UTC" firstStartedPulling="2025-03-21 12:43:18.79952636 +0000 UTC m=+24.618209874" lastFinishedPulling="2025-03-21 12:43:30.120341307 +0000 UTC m=+35.939024821" observedRunningTime="2025-03-21 12:43:30.394628496 +0000 UTC m=+36.213312050" watchObservedRunningTime="2025-03-21 12:43:37.529106921 +0000 UTC m=+43.347790475" Mar 21 12:43:37.532772 containerd[1491]: 2025-03-21 12:43:37.387 [INFO][4168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0 calico-apiserver-59dbb779f6- calico-apiserver bca733af-36c6-4d5a-82d9-681d834b79f7 714 0 2025-03-21 12:43:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59dbb779f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59dbb779f6-cvkdq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali85f9ffd265f [] []}} ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-" Mar 21 12:43:37.532772 containerd[1491]: 2025-03-21 12:43:37.387 [INFO][4168] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.532772 containerd[1491]: 2025-03-21 12:43:37.477 [INFO][4184] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" HandleID="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Workload="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.489 [INFO][4184] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" HandleID="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Workload="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ee2a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59dbb779f6-cvkdq", "timestamp":"2025-03-21 12:43:37.477843615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.489 [INFO][4184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.489 [INFO][4184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.489 [INFO][4184] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.491 [INFO][4184] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" host="localhost" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.496 [INFO][4184] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.501 [INFO][4184] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.502 [INFO][4184] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.504 [INFO][4184] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:37.532942 containerd[1491]: 2025-03-21 12:43:37.504 [INFO][4184] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" host="localhost" Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.505 [INFO][4184] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42 Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.509 [INFO][4184] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" host="localhost" Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.513 [INFO][4184] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" host="localhost" Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.513 [INFO][4184] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" host="localhost" Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.513 [INFO][4184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:43:37.533156 containerd[1491]: 2025-03-21 12:43:37.513 [INFO][4184] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" HandleID="k8s-pod-network.75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Workload="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.533275 containerd[1491]: 2025-03-21 12:43:37.516 [INFO][4168] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0", GenerateName:"calico-apiserver-59dbb779f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bca733af-36c6-4d5a-82d9-681d834b79f7", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dbb779f6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59dbb779f6-cvkdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85f9ffd265f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:37.533327 containerd[1491]: 2025-03-21 12:43:37.516 [INFO][4168] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.533327 containerd[1491]: 2025-03-21 12:43:37.516 [INFO][4168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali85f9ffd265f ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.533327 containerd[1491]: 2025-03-21 12:43:37.520 [INFO][4168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.533425 containerd[1491]: 2025-03-21 12:43:37.520 [INFO][4168] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0", GenerateName:"calico-apiserver-59dbb779f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"bca733af-36c6-4d5a-82d9-681d834b79f7", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dbb779f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42", Pod:"calico-apiserver-59dbb779f6-cvkdq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali85f9ffd265f", MAC:"0a:e6:7e:b9:91:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:37.533477 containerd[1491]: 2025-03-21 12:43:37.530 [INFO][4168] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-cvkdq" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--cvkdq-eth0" Mar 21 12:43:37.640072 containerd[1491]: time="2025-03-21T12:43:37.640029817Z" level=info msg="connecting to shim 75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42" address="unix:///run/containerd/s/d9362e4cf0209d62e676ae69225206e367853bcd1f25c1765e944462ac2de2b1" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:37.666534 systemd[1]: Started cri-containerd-75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42.scope - libcontainer container 75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42. Mar 21 12:43:37.676758 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:37.731467 containerd[1491]: time="2025-03-21T12:43:37.731426799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-cvkdq,Uid:bca733af-36c6-4d5a-82d9-681d834b79f7,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42\"" Mar 21 12:43:37.732905 containerd[1491]: time="2025-03-21T12:43:37.732748892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 21 12:43:38.259207 containerd[1491]: time="2025-03-21T12:43:38.259170995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjqw9,Uid:c43df681-4b82-49c4-9251-bf7c3ffee013,Namespace:kube-system,Attempt:0,}" Mar 21 12:43:38.353914 systemd-networkd[1406]: cali855a8a34ca5: Link UP Mar 21 12:43:38.354180 systemd-networkd[1406]: cali855a8a34ca5: Gained carrier Mar 21 12:43:38.364629 containerd[1491]: 2025-03-21 12:43:38.291 [INFO][4254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0 coredns-7db6d8ff4d- kube-system c43df681-4b82-49c4-9251-bf7c3ffee013 713 0 2025-03-21 12:43:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-pjqw9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali855a8a34ca5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-" Mar 21 12:43:38.364629 containerd[1491]: 2025-03-21 12:43:38.291 [INFO][4254] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.364629 containerd[1491]: 2025-03-21 12:43:38.316 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" HandleID="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Workload="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.327 [INFO][4268] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" HandleID="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Workload="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8ef0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-pjqw9", "timestamp":"2025-03-21 12:43:38.31670983 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.328 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.328 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.328 [INFO][4268] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.329 [INFO][4268] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" host="localhost" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.332 [INFO][4268] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.336 [INFO][4268] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.338 [INFO][4268] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.340 [INFO][4268] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:38.364996 containerd[1491]: 2025-03-21 12:43:38.340 [INFO][4268] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" host="localhost" Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.341 [INFO][4268] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866 Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.345 [INFO][4268] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" host="localhost" Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.349 [INFO][4268] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" host="localhost" Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.349 [INFO][4268] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" host="localhost" Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.349 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 21 12:43:38.365269 containerd[1491]: 2025-03-21 12:43:38.349 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" HandleID="k8s-pod-network.561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Workload="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.365539 containerd[1491]: 2025-03-21 12:43:38.351 [INFO][4254] cni-plugin/k8s.go 386: Populated endpoint ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c43df681-4b82-49c4-9251-bf7c3ffee013", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-pjqw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali855a8a34ca5", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:38.365606 containerd[1491]: 2025-03-21 12:43:38.351 [INFO][4254] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.365606 containerd[1491]: 2025-03-21 12:43:38.351 [INFO][4254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali855a8a34ca5 ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.365606 containerd[1491]: 2025-03-21 12:43:38.354 [INFO][4254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.365672 containerd[1491]: 2025-03-21 12:43:38.354 [INFO][4254] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c43df681-4b82-49c4-9251-bf7c3ffee013", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866", Pod:"coredns-7db6d8ff4d-pjqw9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali855a8a34ca5", MAC:"f2:d8:3f:13:09:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:38.365672 containerd[1491]: 2025-03-21 12:43:38.362 [INFO][4254] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-pjqw9" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--pjqw9-eth0" Mar 21 12:43:38.385007 containerd[1491]: time="2025-03-21T12:43:38.384950769Z" level=info msg="connecting to shim 561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866" address="unix:///run/containerd/s/b07053e9f74803c5d0fd31f0f70a3de4dffc0f618d07bd9253ab5e033a0717ce" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:38.413546 systemd[1]: Started cri-containerd-561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866.scope - libcontainer container 561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866. Mar 21 12:43:38.424868 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:38.445706 containerd[1491]: time="2025-03-21T12:43:38.445667435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pjqw9,Uid:c43df681-4b82-49c4-9251-bf7c3ffee013,Namespace:kube-system,Attempt:0,} returns sandbox id \"561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866\"" Mar 21 12:43:38.448813 containerd[1491]: time="2025-03-21T12:43:38.448744185Z" level=info msg="CreateContainer within sandbox \"561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:43:38.460839 containerd[1491]: time="2025-03-21T12:43:38.460646419Z" level=info msg="Container 63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:38.462007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount867343802.mount: Deactivated successfully. 
Mar 21 12:43:38.469104 containerd[1491]: time="2025-03-21T12:43:38.469072141Z" level=info msg="CreateContainer within sandbox \"561e7073924c02ef2cfe73fa6d78788a04e02518fc749c84b9740dd69d442866\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c\"" Mar 21 12:43:38.469783 containerd[1491]: time="2025-03-21T12:43:38.469758307Z" level=info msg="StartContainer for \"63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c\"" Mar 21 12:43:38.470573 containerd[1491]: time="2025-03-21T12:43:38.470546395Z" level=info msg="connecting to shim 63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c" address="unix:///run/containerd/s/b07053e9f74803c5d0fd31f0f70a3de4dffc0f618d07bd9253ab5e033a0717ce" protocol=ttrpc version=3 Mar 21 12:43:38.492588 systemd[1]: Started cri-containerd-63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c.scope - libcontainer container 63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c. Mar 21 12:43:38.519618 containerd[1491]: time="2025-03-21T12:43:38.518539938Z" level=info msg="StartContainer for \"63b2d08d9d6550c04b515f78ef12ce6add9da78363aa47a5f6116f078cf36d6c\" returns successfully" Mar 21 12:43:38.599173 systemd-networkd[1406]: cali85f9ffd265f: Gained IPv6LL Mar 21 12:43:39.252839 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:55180.service - OpenSSH per-connection server daemon (10.0.0.1:55180). 
Mar 21 12:43:39.261096 containerd[1491]: time="2025-03-21T12:43:39.260792329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9bz5c,Uid:fdd3b8e6-9992-4c39-aaca-050b7a719dd1,Namespace:kube-system,Attempt:0,}" Mar 21 12:43:39.272246 containerd[1491]: time="2025-03-21T12:43:39.272020515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb58g,Uid:30c87969-e276-45d4-9080-209e74211884,Namespace:calico-system,Attempt:0,}" Mar 21 12:43:39.316473 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 55180 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:39.318140 sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:39.321931 systemd-logind[1464]: New session 13 of user core. Mar 21 12:43:39.334508 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 21 12:43:39.403443 containerd[1491]: time="2025-03-21T12:43:39.402889991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:39.405115 containerd[1491]: time="2025-03-21T12:43:39.404574807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=40253267" Mar 21 12:43:39.406798 containerd[1491]: time="2025-03-21T12:43:39.406583746Z" level=info msg="ImageCreate event name:\"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:39.411762 containerd[1491]: time="2025-03-21T12:43:39.411561673Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"41623040\" in 1.67877914s" Mar 21 12:43:39.412001 containerd[1491]: time="2025-03-21T12:43:39.411848396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:15defb01cf01d9d97dc594b25d63dee89192c67a6c991b6a78d49fa834325f4e\"" Mar 21 12:43:39.413331 containerd[1491]: time="2025-03-21T12:43:39.413280809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:39.417725 containerd[1491]: time="2025-03-21T12:43:39.417691571Z" level=info msg="CreateContainer within sandbox \"75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 21 12:43:39.421101 kubelet[2727]: I0321 12:43:39.417988 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pjqw9" podStartSLOduration=30.417968973 podStartE2EDuration="30.417968973s" podCreationTimestamp="2025-03-21 12:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:43:39.417718211 +0000 UTC m=+45.236401765" watchObservedRunningTime="2025-03-21 12:43:39.417968973 +0000 UTC m=+45.236652607" Mar 21 12:43:39.453310 containerd[1491]: time="2025-03-21T12:43:39.452470059Z" level=info msg="Container b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:39.477434 containerd[1491]: time="2025-03-21T12:43:39.477325654Z" level=info msg="CreateContainer within sandbox \"75ae70a2e441d8a19589ba9537e0bb45093415ea7028f8f3830274d7f4c05f42\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745\"" Mar 21 12:43:39.479790 containerd[1491]: time="2025-03-21T12:43:39.479729557Z" level=info msg="StartContainer for \"b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745\"" Mar 21 12:43:39.480976 containerd[1491]: time="2025-03-21T12:43:39.480939768Z" level=info msg="connecting to shim b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745" address="unix:///run/containerd/s/d9362e4cf0209d62e676ae69225206e367853bcd1f25c1765e944462ac2de2b1" protocol=ttrpc version=3 Mar 21 12:43:39.543536 systemd[1]: Started cri-containerd-b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745.scope - libcontainer container b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745. Mar 21 12:43:39.561714 systemd-networkd[1406]: calidc7c4ba5ce5: Link UP Mar 21 12:43:39.563045 systemd-networkd[1406]: calidc7c4ba5ce5: Gained carrier Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.415 [INFO][4383] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cb58g-eth0 csi-node-driver- calico-system 30c87969-e276-45d4-9080-209e74211884 600 0 2025-03-21 12:43:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cb58g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc7c4ba5ce5 [] []}} ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.415 [INFO][4383] cni-plugin/k8s.go 77: Extracted 
identifiers for CmdAddK8s ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.501 [INFO][4426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" HandleID="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Workload="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.517 [INFO][4426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" HandleID="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Workload="localhost-k8s-csi--node--driver--cb58g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000293bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cb58g", "timestamp":"2025-03-21 12:43:39.501819325 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.517 [INFO][4426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.517 [INFO][4426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.517 [INFO][4426] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.519 [INFO][4426] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.525 [INFO][4426] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.532 [INFO][4426] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.537 [INFO][4426] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.539 [INFO][4426] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.539 [INFO][4426] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.541 [INFO][4426] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6 Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.545 [INFO][4426] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4426] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4426] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" host="localhost" Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:43:39.583393 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" HandleID="k8s-pod-network.2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Workload="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.555 [INFO][4383] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cb58g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c87969-e276-45d4-9080-209e74211884", ResourceVersion:"600", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cb58g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc7c4ba5ce5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.555 [INFO][4383] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.555 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc7c4ba5ce5 ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.564 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.565 [INFO][4383] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" 
Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cb58g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c87969-e276-45d4-9080-209e74211884", ResourceVersion:"600", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6", Pod:"csi-node-driver-cb58g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc7c4ba5ce5", MAC:"16:9a:09:b3:d8:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:39.584092 containerd[1491]: 2025-03-21 12:43:39.578 [INFO][4383] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" Namespace="calico-system" Pod="csi-node-driver-cb58g" WorkloadEndpoint="localhost-k8s-csi--node--driver--cb58g-eth0" Mar 21 12:43:39.608344 systemd-networkd[1406]: 
cali8cddbe978e1: Link UP Mar 21 12:43:39.609695 systemd-networkd[1406]: cali8cddbe978e1: Gained carrier Mar 21 12:43:39.628795 containerd[1491]: time="2025-03-21T12:43:39.628749084Z" level=info msg="StartContainer for \"b9198ab805f8d735155b3ba9786866113e2c310811d68332a7a8fc4afac70745\" returns successfully" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.409 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0 coredns-7db6d8ff4d- kube-system fdd3b8e6-9992-4c39-aaca-050b7a719dd1 712 0 2025-03-21 12:43:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-9bz5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8cddbe978e1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.410 [INFO][4377] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.513 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" HandleID="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Workload="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.528 [INFO][4418] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" HandleID="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Workload="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a6170), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-9bz5c", "timestamp":"2025-03-21 12:43:39.513768078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.528 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.553 [INFO][4418] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.555 [INFO][4418] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.568 [INFO][4418] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.575 [INFO][4418] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.578 [INFO][4418] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.582 [INFO][4418] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.582 [INFO][4418] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.584 [INFO][4418] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01 Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.591 [INFO][4418] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.596 [INFO][4418] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.597 [INFO][4418] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" host="localhost" Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.597 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:43:39.632254 containerd[1491]: 2025-03-21 12:43:39.597 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" HandleID="k8s-pod-network.68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Workload="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.602 [INFO][4377] cni-plugin/k8s.go 386: Populated endpoint ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fdd3b8e6-9992-4c39-aaca-050b7a719dd1", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-9bz5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cddbe978e1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.602 [INFO][4377] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.602 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cddbe978e1 ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.610 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 
12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.610 [INFO][4377] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fdd3b8e6-9992-4c39-aaca-050b7a719dd1", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01", Pod:"coredns-7db6d8ff4d-9bz5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8cddbe978e1", MAC:"d2:22:fa:3d:9e:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:39.634331 containerd[1491]: 2025-03-21 12:43:39.627 [INFO][4377] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9bz5c" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--9bz5c-eth0" Mar 21 12:43:39.642623 containerd[1491]: time="2025-03-21T12:43:39.642553814Z" level=info msg="connecting to shim 2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6" address="unix:///run/containerd/s/64b142ea53b9e107b1c24dcea99a838bd0eb9d2e3c8de124d458bf7f544f0226" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:39.664923 containerd[1491]: time="2025-03-21T12:43:39.664864425Z" level=info msg="connecting to shim 68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01" address="unix:///run/containerd/s/f6451b0eba89bf1705cfa52d626da8868ad602b61f66e517a40f328a9c007d83" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:39.667097 sshd[4376]: Connection closed by 10.0.0.1 port 55180 Mar 21 12:43:39.667516 sshd-session[4374]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:39.677605 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:55180.service: Deactivated successfully. Mar 21 12:43:39.680245 systemd[1]: session-13.scope: Deactivated successfully. Mar 21 12:43:39.684418 systemd-logind[1464]: Session 13 logged out. Waiting for processes to exit. Mar 21 12:43:39.695576 systemd[1]: Started cri-containerd-2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6.scope - libcontainer container 2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6. Mar 21 12:43:39.697604 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:55192.service - OpenSSH per-connection server daemon (10.0.0.1:55192). 
Mar 21 12:43:39.701435 systemd[1]: Started cri-containerd-68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01.scope - libcontainer container 68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01. Mar 21 12:43:39.702685 systemd-logind[1464]: Removed session 13. Mar 21 12:43:39.716689 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:39.718207 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:39.741811 containerd[1491]: time="2025-03-21T12:43:39.741772551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cb58g,Uid:30c87969-e276-45d4-9080-209e74211884,Namespace:calico-system,Attempt:0,} returns sandbox id \"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6\"" Mar 21 12:43:39.743008 containerd[1491]: time="2025-03-21T12:43:39.742941082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9bz5c,Uid:fdd3b8e6-9992-4c39-aaca-050b7a719dd1,Namespace:kube-system,Attempt:0,} returns sandbox id \"68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01\"" Mar 21 12:43:39.744465 containerd[1491]: time="2025-03-21T12:43:39.743734090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 21 12:43:39.747239 containerd[1491]: time="2025-03-21T12:43:39.747202803Z" level=info msg="CreateContainer within sandbox \"68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:43:39.754285 sshd[4566]: Accepted publickey for core from 10.0.0.1 port 55192 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:39.755640 containerd[1491]: time="2025-03-21T12:43:39.755580522Z" level=info msg="Container 25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b: CDI devices from CRI Config.CDIDevices: []" Mar 21 
12:43:39.755986 sshd-session[4566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:39.760538 systemd-logind[1464]: New session 14 of user core. Mar 21 12:43:39.763637 containerd[1491]: time="2025-03-21T12:43:39.763609198Z" level=info msg="CreateContainer within sandbox \"68c50bef52d65b56c928b8b107278ec6d25d621fdb3def3c7c85fd3702cb9e01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b\"" Mar 21 12:43:39.764329 containerd[1491]: time="2025-03-21T12:43:39.764062962Z" level=info msg="StartContainer for \"25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b\"" Mar 21 12:43:39.765527 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 21 12:43:39.766742 containerd[1491]: time="2025-03-21T12:43:39.766249583Z" level=info msg="connecting to shim 25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b" address="unix:///run/containerd/s/f6451b0eba89bf1705cfa52d626da8868ad602b61f66e517a40f328a9c007d83" protocol=ttrpc version=3 Mar 21 12:43:39.783522 systemd[1]: Started cri-containerd-25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b.scope - libcontainer container 25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b. Mar 21 12:43:39.825624 containerd[1491]: time="2025-03-21T12:43:39.825526142Z" level=info msg="StartContainer for \"25173f4fe912aa86246a850fbd2e8c8d5adba0e0b8e2ced5c920e3441dbe4f2b\" returns successfully" Mar 21 12:43:40.006494 systemd-networkd[1406]: cali855a8a34ca5: Gained IPv6LL Mar 21 12:43:40.068677 sshd[4605]: Connection closed by 10.0.0.1 port 55192 Mar 21 12:43:40.068537 sshd-session[4566]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:40.079738 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:55192.service: Deactivated successfully. Mar 21 12:43:40.082804 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 21 12:43:40.084506 systemd-logind[1464]: Session 14 logged out. Waiting for processes to exit. Mar 21 12:43:40.086840 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:55196.service - OpenSSH per-connection server daemon (10.0.0.1:55196). Mar 21 12:43:40.087631 systemd-logind[1464]: Removed session 14. Mar 21 12:43:40.161021 sshd[4651]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:40.163005 sshd-session[4651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:40.169535 systemd-logind[1464]: New session 15 of user core. Mar 21 12:43:40.179621 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 21 12:43:40.414926 kubelet[2727]: I0321 12:43:40.414857 2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59dbb779f6-cvkdq" podStartSLOduration=21.733944267 podStartE2EDuration="23.414841147s" podCreationTimestamp="2025-03-21 12:43:17 +0000 UTC" firstStartedPulling="2025-03-21 12:43:37.73249221 +0000 UTC m=+43.551175764" lastFinishedPulling="2025-03-21 12:43:39.41338905 +0000 UTC m=+45.232072644" observedRunningTime="2025-03-21 12:43:40.4141205 +0000 UTC m=+46.232804094" watchObservedRunningTime="2025-03-21 12:43:40.414841147 +0000 UTC m=+46.233524741" Mar 21 12:43:40.775655 containerd[1491]: time="2025-03-21T12:43:40.775494202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7473801" Mar 21 12:43:40.775655 containerd[1491]: time="2025-03-21T12:43:40.775639843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:40.778414 containerd[1491]: time="2025-03-21T12:43:40.776305889Z" level=info msg="ImageCreate event name:\"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:40.778547 containerd[1491]: time="2025-03-21T12:43:40.778504550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:40.779027 containerd[1491]: time="2025-03-21T12:43:40.778991394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"8843558\" in 1.035229344s" Mar 21 12:43:40.779027 containerd[1491]: time="2025-03-21T12:43:40.779024434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:f39063099e467ddd9d84500bfd4d97c404bb5f706a2161afc8979f4a94b8ad0b\"" Mar 21 12:43:40.789033 containerd[1491]: time="2025-03-21T12:43:40.788994007Z" level=info msg="CreateContainer within sandbox \"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 21 12:43:40.809580 containerd[1491]: time="2025-03-21T12:43:40.809530116Z" level=info msg="Container 77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:40.817024 containerd[1491]: time="2025-03-21T12:43:40.816980185Z" level=info msg="CreateContainer within sandbox \"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f\"" Mar 21 12:43:40.817654 containerd[1491]: time="2025-03-21T12:43:40.817609631Z" level=info msg="StartContainer for 
\"77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f\"" Mar 21 12:43:40.819090 containerd[1491]: time="2025-03-21T12:43:40.819060805Z" level=info msg="connecting to shim 77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f" address="unix:///run/containerd/s/64b142ea53b9e107b1c24dcea99a838bd0eb9d2e3c8de124d458bf7f544f0226" protocol=ttrpc version=3 Mar 21 12:43:40.838469 systemd-networkd[1406]: cali8cddbe978e1: Gained IPv6LL Mar 21 12:43:40.844584 systemd[1]: Started cri-containerd-77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f.scope - libcontainer container 77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f. Mar 21 12:43:40.883545 containerd[1491]: time="2025-03-21T12:43:40.883432920Z" level=info msg="StartContainer for \"77bab8d275b83c4fb2ec77babe8966c080b6e3dc0e038d117f645b837ae9b24f\" returns successfully" Mar 21 12:43:40.884727 containerd[1491]: time="2025-03-21T12:43:40.884534730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 21 12:43:41.222501 systemd-networkd[1406]: calidc7c4ba5ce5: Gained IPv6LL Mar 21 12:43:41.259338 containerd[1491]: time="2025-03-21T12:43:41.259281548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b8c688b5-hg74d,Uid:67f9d2d6-04e7-4fe6-87c9-0953eade1288,Namespace:calico-system,Attempt:0,}" Mar 21 12:43:41.259649 containerd[1491]: time="2025-03-21T12:43:41.259281588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-fnbzh,Uid:d20eb502-b39f-4baa-9001-7d6ac293f12b,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:43:41.397300 systemd-networkd[1406]: cali2b1a0839ed3: Link UP Mar 21 12:43:41.397567 systemd-networkd[1406]: cali2b1a0839ed3: Gained carrier Mar 21 12:43:41.414586 kubelet[2727]: I0321 12:43:41.413875 2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:43:41.418785 kubelet[2727]: I0321 12:43:41.418110 2727 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9bz5c" podStartSLOduration=32.418096787 podStartE2EDuration="32.418096787s" podCreationTimestamp="2025-03-21 12:43:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:43:40.426582695 +0000 UTC m=+46.245266249" watchObservedRunningTime="2025-03-21 12:43:41.418096787 +0000 UTC m=+47.236780301" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.303 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0 calico-kube-controllers-74b8c688b5- calico-system 67f9d2d6-04e7-4fe6-87c9-0953eade1288 711 0 2025-03-21 12:43:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b8c688b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-74b8c688b5-hg74d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2b1a0839ed3 [] []}} ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.303 [INFO][4706] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.345 [INFO][4733] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" HandleID="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Workload="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.358 [INFO][4733] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" HandleID="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Workload="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043c040), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74b8c688b5-hg74d", "timestamp":"2025-03-21 12:43:41.345321688 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.358 [INFO][4733] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.358 [INFO][4733] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.358 [INFO][4733] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.359 [INFO][4733] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.363 [INFO][4733] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.370 [INFO][4733] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.372 [INFO][4733] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.375 [INFO][4733] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.375 [INFO][4733] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.377 [INFO][4733] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17 Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.382 [INFO][4733] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.389 [INFO][4733] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.389 [INFO][4733] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" host="localhost" Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.389 [INFO][4733] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:43:41.426863 containerd[1491]: 2025-03-21 12:43:41.389 [INFO][4733] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" HandleID="k8s-pod-network.45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Workload="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.391 [INFO][4706] cni-plugin/k8s.go 386: Populated endpoint ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0", GenerateName:"calico-kube-controllers-74b8c688b5-", Namespace:"calico-system", SelfLink:"", UID:"67f9d2d6-04e7-4fe6-87c9-0953eade1288", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b8c688b5", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74b8c688b5-hg74d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b1a0839ed3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.391 [INFO][4706] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.392 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b1a0839ed3 ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.396 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.402 [INFO][4706] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0", GenerateName:"calico-kube-controllers-74b8c688b5-", Namespace:"calico-system", SelfLink:"", UID:"67f9d2d6-04e7-4fe6-87c9-0953eade1288", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b8c688b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17", Pod:"calico-kube-controllers-74b8c688b5-hg74d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2b1a0839ed3", MAC:"36:ed:fe:cd:2c:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:41.428019 containerd[1491]: 2025-03-21 12:43:41.421 [INFO][4706] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" Namespace="calico-system" Pod="calico-kube-controllers-74b8c688b5-hg74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b8c688b5--hg74d-eth0" Mar 21 12:43:41.463101 systemd-networkd[1406]: cali20c1c52f0bd: Link UP Mar 21 12:43:41.463273 systemd-networkd[1406]: cali20c1c52f0bd: Gained carrier Mar 21 12:43:41.479506 containerd[1491]: time="2025-03-21T12:43:41.478506615Z" level=info msg="connecting to shim 45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17" address="unix:///run/containerd/s/9e14c6debbcbf5a96e75970c5fc356a5b59e880ada14cc1c8b256d33f3ec2725" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.317 [INFO][4717] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0 calico-apiserver-59dbb779f6- calico-apiserver d20eb502-b39f-4baa-9001-7d6ac293f12b 708 0 2025-03-21 12:43:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59dbb779f6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59dbb779f6-fnbzh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali20c1c52f0bd [] []}} ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.317 [INFO][4717] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" 
Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.369 [INFO][4741] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" HandleID="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Workload="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.398 [INFO][4741] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" HandleID="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Workload="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001fb1c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59dbb779f6-fnbzh", "timestamp":"2025-03-21 12:43:41.369631908 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.402 [INFO][4741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.402 [INFO][4741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.402 [INFO][4741] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.405 [INFO][4741] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.411 [INFO][4741] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.416 [INFO][4741] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.423 [INFO][4741] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.426 [INFO][4741] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.426 [INFO][4741] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.428 [INFO][4741] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2 Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.434 [INFO][4741] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.444 [INFO][4741] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.444 [INFO][4741] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" host="localhost" Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.444 [INFO][4741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:43:41.480264 containerd[1491]: 2025-03-21 12:43:41.444 [INFO][4741] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" HandleID="k8s-pod-network.1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Workload="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.460 [INFO][4717] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0", GenerateName:"calico-apiserver-59dbb779f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20eb502-b39f-4baa-9001-7d6ac293f12b", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dbb779f6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59dbb779f6-fnbzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20c1c52f0bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.460 [INFO][4717] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.460 [INFO][4717] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20c1c52f0bd ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.462 [INFO][4717] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.462 [INFO][4717] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0", GenerateName:"calico-apiserver-59dbb779f6-", Namespace:"calico-apiserver", SelfLink:"", UID:"d20eb502-b39f-4baa-9001-7d6ac293f12b", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 43, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59dbb779f6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2", Pod:"calico-apiserver-59dbb779f6-fnbzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20c1c52f0bd", MAC:"c6:b7:30:50:ad:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:43:41.480773 containerd[1491]: 2025-03-21 12:43:41.475 [INFO][4717] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" Namespace="calico-apiserver" Pod="calico-apiserver-59dbb779f6-fnbzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--59dbb779f6--fnbzh-eth0" Mar 21 12:43:41.507631 systemd[1]: Started cri-containerd-45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17.scope - libcontainer container 45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17. Mar 21 12:43:41.519934 containerd[1491]: time="2025-03-21T12:43:41.519892270Z" level=info msg="connecting to shim 1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2" address="unix:///run/containerd/s/12e7aa2a8b1c33eefc1d31cc2d28f2ac270ac272479c3c8f0265ca68a9ae5068" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:43:41.523872 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:41.549710 systemd[1]: Started cri-containerd-1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2.scope - libcontainer container 1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2. 
Mar 21 12:43:41.558008 containerd[1491]: time="2025-03-21T12:43:41.557913295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b8c688b5-hg74d,Uid:67f9d2d6-04e7-4fe6-87c9-0953eade1288,Namespace:calico-system,Attempt:0,} returns sandbox id \"45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17\"" Mar 21 12:43:41.566297 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:43:41.591152 containerd[1491]: time="2025-03-21T12:43:41.591100395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59dbb779f6-fnbzh,Uid:d20eb502-b39f-4baa-9001-7d6ac293f12b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2\"" Mar 21 12:43:41.594352 containerd[1491]: time="2025-03-21T12:43:41.594318585Z" level=info msg="CreateContainer within sandbox \"1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 21 12:43:41.600977 containerd[1491]: time="2025-03-21T12:43:41.600545481Z" level=info msg="Container fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:41.606632 containerd[1491]: time="2025-03-21T12:43:41.606592136Z" level=info msg="CreateContainer within sandbox \"1138e6d44bb52c332d676e6fa765b3547eaf0f4753ee6b9b32d4b3057adcafe2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a\"" Mar 21 12:43:41.607071 containerd[1491]: time="2025-03-21T12:43:41.606989219Z" level=info msg="StartContainer for \"fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a\"" Mar 21 12:43:41.608749 containerd[1491]: time="2025-03-21T12:43:41.608716675Z" level=info msg="connecting to shim 
fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a" address="unix:///run/containerd/s/12e7aa2a8b1c33eefc1d31cc2d28f2ac270ac272479c3c8f0265ca68a9ae5068" protocol=ttrpc version=3 Mar 21 12:43:41.629558 systemd[1]: Started cri-containerd-fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a.scope - libcontainer container fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a. Mar 21 12:43:41.669921 containerd[1491]: time="2025-03-21T12:43:41.669885269Z" level=info msg="StartContainer for \"fe5e2092a61ddd1cb03914f1d0ead391f20f1767f862ea1c805973a5d016089a\" returns successfully" Mar 21 12:43:41.792011 sshd[4655]: Connection closed by 10.0.0.1 port 55196 Mar 21 12:43:41.792721 sshd-session[4651]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:41.815025 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:55200.service - OpenSSH per-connection server daemon (10.0.0.1:55200). Mar 21 12:43:41.815713 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:55196.service: Deactivated successfully. Mar 21 12:43:41.817515 systemd[1]: session-15.scope: Deactivated successfully. Mar 21 12:43:41.819424 systemd[1]: session-15.scope: Consumed 515ms CPU time, 69M memory peak. Mar 21 12:43:41.820389 systemd-logind[1464]: Session 15 logged out. Waiting for processes to exit. Mar 21 12:43:41.825512 systemd-logind[1464]: Removed session 15. Mar 21 12:43:41.874696 sshd[4908]: Accepted publickey for core from 10.0.0.1 port 55200 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y Mar 21 12:43:41.876719 sshd-session[4908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:43:41.883176 systemd-logind[1464]: New session 16 of user core. Mar 21 12:43:41.888526 systemd[1]: Started session-16.scope - Session 16 of User core. 
Mar 21 12:43:41.993057 containerd[1491]: time="2025-03-21T12:43:41.993001318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:41.994820 containerd[1491]: time="2025-03-21T12:43:41.994425171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13121717" Mar 21 12:43:41.996181 containerd[1491]: time="2025-03-21T12:43:41.995400380Z" level=info msg="ImageCreate event name:\"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:41.997425 containerd[1491]: time="2025-03-21T12:43:41.997398998Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:43:41.999445 containerd[1491]: time="2025-03-21T12:43:41.999419696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"14491426\" in 1.114851606s" Mar 21 12:43:41.999612 containerd[1491]: time="2025-03-21T12:43:41.999592338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:5b766f5f5d1b2ccc7c16f12d59c6c17c490ae33a8973c1fa7b2bcf3b8aa5098a\"" Mar 21 12:43:42.000866 containerd[1491]: time="2025-03-21T12:43:42.000844909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 21 12:43:42.005169 containerd[1491]: time="2025-03-21T12:43:42.005132067Z" level=info 
msg="CreateContainer within sandbox \"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 21 12:43:42.015173 containerd[1491]: time="2025-03-21T12:43:42.014782753Z" level=info msg="Container ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:43:42.023825 containerd[1491]: time="2025-03-21T12:43:42.023792833Z" level=info msg="CreateContainer within sandbox \"2aff13feb6825a18192e918fc737b7ce72a5c7a8df49de93cd5438a91260aae6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707\"" Mar 21 12:43:42.024671 containerd[1491]: time="2025-03-21T12:43:42.024645681Z" level=info msg="StartContainer for \"ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707\"" Mar 21 12:43:42.026282 containerd[1491]: time="2025-03-21T12:43:42.026252815Z" level=info msg="connecting to shim ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707" address="unix:///run/containerd/s/64b142ea53b9e107b1c24dcea99a838bd0eb9d2e3c8de124d458bf7f544f0226" protocol=ttrpc version=3 Mar 21 12:43:42.048522 systemd[1]: Started cri-containerd-ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707.scope - libcontainer container ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707. Mar 21 12:43:42.134623 containerd[1491]: time="2025-03-21T12:43:42.133653090Z" level=info msg="StartContainer for \"ced73ac89edeb6378fded5095dd7a712b6baf880c61198d9f58bc6cfcb05e707\" returns successfully" Mar 21 12:43:42.202099 sshd[4917]: Connection closed by 10.0.0.1 port 55200 Mar 21 12:43:42.203664 sshd-session[4908]: pam_unix(sshd:session): session closed for user core Mar 21 12:43:42.213906 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:55200.service: Deactivated successfully. 
Mar 21 12:43:42.217085 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 12:43:42.219806 systemd-logind[1464]: Session 16 logged out. Waiting for processes to exit.
Mar 21 12:43:42.222821 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210).
Mar 21 12:43:42.225787 systemd-logind[1464]: Removed session 16.
Mar 21 12:43:42.287327 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:43:42.288545 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:43:42.292979 systemd-logind[1464]: New session 17 of user core.
Mar 21 12:43:42.301613 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 12:43:42.324084 kubelet[2727]: I0321 12:43:42.323994    2727 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 21 12:43:42.329932 kubelet[2727]: I0321 12:43:42.329640    2727 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 21 12:43:42.442018 kubelet[2727]: I0321 12:43:42.441937    2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cb58g" podStartSLOduration=22.185067334 podStartE2EDuration="24.441920071s" podCreationTimestamp="2025-03-21 12:43:18 +0000 UTC" firstStartedPulling="2025-03-21 12:43:39.743446327 +0000 UTC m=+45.562129841" lastFinishedPulling="2025-03-21 12:43:42.000299024 +0000 UTC m=+47.818982578" observedRunningTime="2025-03-21 12:43:42.431036454 +0000 UTC m=+48.249720008" watchObservedRunningTime="2025-03-21 12:43:42.441920071 +0000 UTC m=+48.260603625"
Mar 21 12:43:42.474544 sshd[4968]: Connection closed by 10.0.0.1 port 55210
Mar 21 12:43:42.474855 sshd-session[4965]: pam_unix(sshd:session): session closed for user core
Mar 21 12:43:42.479112 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:55210.service: Deactivated successfully.
Mar 21 12:43:42.481846 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 12:43:42.483430 systemd-logind[1464]: Session 17 logged out. Waiting for processes to exit.
Mar 21 12:43:42.485013 systemd-logind[1464]: Removed session 17.
Mar 21 12:43:42.757502 systemd-networkd[1406]: cali2b1a0839ed3: Gained IPv6LL
Mar 21 12:43:43.426991 kubelet[2727]: I0321 12:43:43.426912    2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:43:43.448590 containerd[1491]: time="2025-03-21T12:43:43.448546508Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:43.449499 containerd[1491]: time="2025-03-21T12:43:43.449259394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=32560257"
Mar 21 12:43:43.450324 containerd[1491]: time="2025-03-21T12:43:43.450281803Z" level=info msg="ImageCreate event name:\"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:43.452495 containerd[1491]: time="2025-03-21T12:43:43.452423302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:43:43.453102 containerd[1491]: time="2025-03-21T12:43:43.452953546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"33929982\" in 1.452012956s"
Mar 21 12:43:43.453102 containerd[1491]: time="2025-03-21T12:43:43.452989226Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:39a6e91a11a792441d34dccf5e11416a0fd297782f169fdb871a5558ad50b229\""
Mar 21 12:43:43.461389 containerd[1491]: time="2025-03-21T12:43:43.460002248Z" level=info msg="CreateContainer within sandbox \"45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 21 12:43:43.463538 systemd-networkd[1406]: cali20c1c52f0bd: Gained IPv6LL
Mar 21 12:43:43.467401 containerd[1491]: time="2025-03-21T12:43:43.466590305Z" level=info msg="Container 932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:43:43.473103 containerd[1491]: time="2025-03-21T12:43:43.473070722Z" level=info msg="CreateContainer within sandbox \"45b0c744239f6f69c49ee9e2ce682128bc51b7c6a576f375412aceb115f9ea17\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42\""
Mar 21 12:43:43.473757 containerd[1491]: time="2025-03-21T12:43:43.473515886Z" level=info msg="StartContainer for \"932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42\""
Mar 21 12:43:43.474489 containerd[1491]: time="2025-03-21T12:43:43.474464614Z" level=info msg="connecting to shim 932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42" address="unix:///run/containerd/s/9e14c6debbcbf5a96e75970c5fc356a5b59e880ada14cc1c8b256d33f3ec2725" protocol=ttrpc version=3
Mar 21 12:43:43.493526 systemd[1]: Started cri-containerd-932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42.scope - libcontainer container 932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42.
Mar 21 12:43:43.538084 containerd[1491]: time="2025-03-21T12:43:43.538045129Z" level=info msg="StartContainer for \"932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42\" returns successfully"
Mar 21 12:43:44.452649 kubelet[2727]: I0321 12:43:44.452578    2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59dbb779f6-fnbzh" podStartSLOduration=27.452560003 podStartE2EDuration="27.452560003s" podCreationTimestamp="2025-03-21 12:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:43:42.441688068 +0000 UTC m=+48.260371622" watchObservedRunningTime="2025-03-21 12:43:44.452560003 +0000 UTC m=+50.271243517"
Mar 21 12:43:44.453012 kubelet[2727]: I0321 12:43:44.452718    2727 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74b8c688b5-hg74d" podStartSLOduration=24.558521682 podStartE2EDuration="26.452712885s" podCreationTimestamp="2025-03-21 12:43:18 +0000 UTC" firstStartedPulling="2025-03-21 12:43:41.559348388 +0000 UTC m=+47.378031902" lastFinishedPulling="2025-03-21 12:43:43.453539551 +0000 UTC m=+49.272223105" observedRunningTime="2025-03-21 12:43:44.44986022 +0000 UTC m=+50.268543774" watchObservedRunningTime="2025-03-21 12:43:44.452712885 +0000 UTC m=+50.271396399"
Mar 21 12:43:44.499826 containerd[1491]: time="2025-03-21T12:43:44.499775528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42\" id:\"8c53882571297244b87936364e2ad647a5dab0f2aeaa04f4cea74585b3416914\" pid:5036 exited_at:{seconds:1742561024 nanos:499199123}"
Mar 21 12:43:47.485736 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:45756.service - OpenSSH per-connection server daemon (10.0.0.1:45756).
Mar 21 12:43:47.552364 sshd[5050]: Accepted publickey for core from 10.0.0.1 port 45756 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:43:47.553874 sshd-session[5050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:43:47.558442 systemd-logind[1464]: New session 18 of user core.
Mar 21 12:43:47.565529 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 12:43:47.733582 sshd[5052]: Connection closed by 10.0.0.1 port 45756
Mar 21 12:43:47.733932 sshd-session[5050]: pam_unix(sshd:session): session closed for user core
Mar 21 12:43:47.737296 systemd-logind[1464]: Session 18 logged out. Waiting for processes to exit.
Mar 21 12:43:47.737475 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:45756.service: Deactivated successfully.
Mar 21 12:43:47.739258 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 12:43:47.741024 systemd-logind[1464]: Removed session 18.
Mar 21 12:43:48.689599 kubelet[2727]: I0321 12:43:48.689466    2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:43:52.745648 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:51542.service - OpenSSH per-connection server daemon (10.0.0.1:51542).
Mar 21 12:43:52.797027 sshd[5079]: Accepted publickey for core from 10.0.0.1 port 51542 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:43:52.798184 sshd-session[5079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:43:52.801698 systemd-logind[1464]: New session 19 of user core.
Mar 21 12:43:52.812603 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 12:43:52.926342 sshd[5081]: Connection closed by 10.0.0.1 port 51542
Mar 21 12:43:52.926669 sshd-session[5079]: pam_unix(sshd:session): session closed for user core
Mar 21 12:43:52.930095 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:51542.service: Deactivated successfully.
Mar 21 12:43:52.931811 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 12:43:52.932513 systemd-logind[1464]: Session 19 logged out. Waiting for processes to exit.
Mar 21 12:43:52.933713 systemd-logind[1464]: Removed session 19.
Mar 21 12:43:56.138095 containerd[1491]: time="2025-03-21T12:43:56.138037662Z" level=info msg="TaskExit event in podsandbox handler container_id:\"932aca492ae9999400e2edfe0c4a3b1778938375797454a6e589fe18fb7b3c42\" id:\"09979e7a691bd5d5f8975ac669a4886df8992ebec690e1bde28f0c387eb0d398\" pid:5108 exited_at:{seconds:1742561036 nanos:137847501}"
Mar 21 12:43:57.161062 kubelet[2727]: I0321 12:43:57.161022    2727 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:43:57.938937 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:51554.service - OpenSSH per-connection server daemon (10.0.0.1:51554).
Mar 21 12:43:57.996543 sshd[5121]: Accepted publickey for core from 10.0.0.1 port 51554 ssh2: RSA SHA256:MdsOSlIGNpcftqwP7ll+xX3Rmkua/0DX/UznjsKKr2Y
Mar 21 12:43:57.997823 sshd-session[5121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:43:58.001802 systemd-logind[1464]: New session 20 of user core.
Mar 21 12:43:58.011579 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 12:43:58.164745 sshd[5123]: Connection closed by 10.0.0.1 port 51554
Mar 21 12:43:58.165487 sshd-session[5121]: pam_unix(sshd:session): session closed for user core
Mar 21 12:43:58.169428 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:51554.service: Deactivated successfully.
Mar 21 12:43:58.171107 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 12:43:58.172403 systemd-logind[1464]: Session 20 logged out. Waiting for processes to exit.
Mar 21 12:43:58.173413 systemd-logind[1464]: Removed session 20.