Jan 29 11:07:02.907895 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:07:02.907915 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:37:00 -00 2025
Jan 29 11:07:02.907925 kernel: KASLR enabled
Jan 29 11:07:02.907931 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:07:02.907937 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 29 11:07:02.907943 kernel: random: crng init done
Jan 29 11:07:02.907950 kernel: secureboot: Secure boot disabled
Jan 29 11:07:02.907956 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:07:02.907962 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:07:02.907970 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:07:02.907976 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.907982 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.907989 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.907995 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908002 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908010 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908016 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908023 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908029 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:07:02.908035 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:07:02.908042 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:07:02.908048 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:07:02.908054 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 29 11:07:02.908061 kernel: Zone ranges:
Jan 29 11:07:02.908067 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:07:02.908083 kernel: DMA32 empty
Jan 29 11:07:02.908090 kernel: Normal empty
Jan 29 11:07:02.908096 kernel: Movable zone start for each node
Jan 29 11:07:02.908102 kernel: Early memory node ranges
Jan 29 11:07:02.908108 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 11:07:02.908115 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:07:02.908121 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:07:02.908127 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:07:02.908134 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:07:02.908140 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:07:02.908146 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:07:02.908153 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:07:02.908160 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:07:02.908168 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:07:02.908174 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:07:02.908184 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:07:02.908190 kernel: psci: Trusted OS migration not required
Jan 29 11:07:02.908197 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:07:02.908206 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:07:02.908213 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:07:02.908219 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:07:02.908226 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:07:02.908233 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:07:02.908240 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:07:02.908256 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:07:02.908267 kernel: CPU features: detected: Spectre-v4
Jan 29 11:07:02.908274 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:07:02.908281 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:07:02.908289 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:07:02.908296 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:07:02.908303 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:07:02.908310 kernel: alternatives: applying boot alternatives
Jan 29 11:07:02.908318 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:07:02.908325 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:07:02.908332 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:07:02.908339 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:07:02.908345 kernel: Fallback order for Node 0: 0
Jan 29 11:07:02.908352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:07:02.908359 kernel: Policy zone: DMA
Jan 29 11:07:02.908367 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:07:02.908374 kernel: software IO TLB: area num 4.
Jan 29 11:07:02.908381 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:07:02.908388 kernel: Memory: 2386320K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39680K init, 897K bss, 185968K reserved, 0K cma-reserved)
Jan 29 11:07:02.908395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:07:02.908402 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:07:02.908409 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:07:02.908416 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:07:02.908423 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:07:02.908430 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:07:02.908440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:07:02.908449 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:07:02.908458 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:07:02.908465 kernel: GICv3: 256 SPIs implemented
Jan 29 11:07:02.908475 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:07:02.908483 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:07:02.908490 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:07:02.908497 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:07:02.908504 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:07:02.908511 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:07:02.908518 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:07:02.908527 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:07:02.908538 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:07:02.908547 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:07:02.908557 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:07:02.908567 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:07:02.908574 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:07:02.908581 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:07:02.908588 kernel: arm-pv: using stolen time PV
Jan 29 11:07:02.908595 kernel: Console: colour dummy device 80x25
Jan 29 11:07:02.908602 kernel: ACPI: Core revision 20230628
Jan 29 11:07:02.908609 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:07:02.908616 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:07:02.908624 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:07:02.908631 kernel: landlock: Up and running.
Jan 29 11:07:02.908637 kernel: SELinux: Initializing.
Jan 29 11:07:02.908644 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:07:02.908651 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:07:02.908658 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:07:02.908685 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:07:02.908693 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:07:02.908700 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:07:02.908709 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:07:02.908716 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:07:02.908723 kernel: Remapping and enabling EFI services.
Jan 29 11:07:02.908729 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:07:02.908736 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:07:02.908743 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:07:02.908750 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:07:02.908757 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:07:02.908763 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:07:02.908770 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:07:02.908778 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:07:02.908785 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:07:02.908797 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:07:02.908806 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:07:02.908813 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:07:02.908821 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:07:02.908828 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:07:02.908835 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:07:02.908842 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:07:02.908850 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:07:02.908857 kernel: SMP: Total of 4 processors activated.
Jan 29 11:07:02.908865 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:07:02.908872 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:07:02.908879 kernel: CPU features: detected: Common not Private translations
Jan 29 11:07:02.908887 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:07:02.908894 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:07:02.908901 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:07:02.908918 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:07:02.908927 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:07:02.908934 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:07:02.908948 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:07:02.908956 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:07:02.908971 kernel: alternatives: applying system-wide alternatives
Jan 29 11:07:02.908978 kernel: devtmpfs: initialized
Jan 29 11:07:02.908987 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:07:02.909001 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:07:02.909009 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:07:02.909016 kernel: SMBIOS 3.0.0 present.
Jan 29 11:07:02.909024 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:07:02.909031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:07:02.909038 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:07:02.909045 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:07:02.909052 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:07:02.909059 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:07:02.909067 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Jan 29 11:07:02.909081 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:07:02.909089 kernel: cpuidle: using governor menu
Jan 29 11:07:02.909096 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:07:02.909103 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:07:02.909110 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:07:02.909117 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:07:02.909124 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:07:02.909131 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:07:02.909138 kernel: Modules: 508960 pages in range for PLT usage
Jan 29 11:07:02.909147 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:07:02.909154 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:07:02.909161 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:07:02.909168 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:07:02.909175 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:07:02.909182 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:07:02.909189 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:07:02.909197 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:07:02.909203 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:07:02.909212 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:07:02.909219 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:07:02.909226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:07:02.909233 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:07:02.909240 kernel: ACPI: Interpreter enabled
Jan 29 11:07:02.909247 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:07:02.909254 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:07:02.909261 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:07:02.909268 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:07:02.909275 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:07:02.909420 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:07:02.909492 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:07:02.909557 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:07:02.909620 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:07:02.909723 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:07:02.909735 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:07:02.909745 kernel: PCI host bridge to bus 0000:00
Jan 29 11:07:02.909819 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:07:02.909880 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:07:02.909938 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:07:02.910133 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:07:02.910222 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:07:02.910298 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:07:02.910370 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:07:02.910436 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:07:02.910501 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:07:02.910566 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:07:02.910631 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:07:02.910711 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:07:02.910794 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:07:02.910875 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:07:02.910933 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:07:02.910943 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:07:02.910950 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:07:02.910958 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:07:02.910965 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:07:02.910972 kernel: iommu: Default domain type: Translated
Jan 29 11:07:02.910980 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:07:02.910989 kernel: efivars: Registered efivars operations
Jan 29 11:07:02.910997 kernel: vgaarb: loaded
Jan 29 11:07:02.911004 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:07:02.911011 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:07:02.911018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:07:02.911026 kernel: pnp: PnP ACPI init
Jan 29 11:07:02.911144 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:07:02.911156 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:07:02.911165 kernel: NET: Registered PF_INET protocol family
Jan 29 11:07:02.911172 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:07:02.911180 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:07:02.911187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:07:02.911194 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:07:02.911201 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:07:02.911208 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:07:02.911215 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:07:02.911223 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:07:02.911231 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:07:02.911239 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:07:02.911246 kernel: kvm [1]: HYP mode not available
Jan 29 11:07:02.911253 kernel: Initialise system trusted keyrings
Jan 29 11:07:02.911260 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:07:02.911267 kernel: Key type asymmetric registered
Jan 29 11:07:02.911274 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:07:02.911281 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:07:02.911288 kernel: io scheduler mq-deadline registered
Jan 29 11:07:02.911297 kernel: io scheduler kyber registered
Jan 29 11:07:02.911304 kernel: io scheduler bfq registered
Jan 29 11:07:02.911311 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:07:02.911318 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:07:02.911326 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:07:02.911392 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:07:02.911402 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:07:02.911409 kernel: thunder_xcv, ver 1.0
Jan 29 11:07:02.911417 kernel: thunder_bgx, ver 1.0
Jan 29 11:07:02.911430 kernel: nicpf, ver 1.0
Jan 29 11:07:02.911438 kernel: nicvf, ver 1.0
Jan 29 11:07:02.911516 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:07:02.911577 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:07:02 UTC (1738148822)
Jan 29 11:07:02.911587 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:07:02.911594 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:07:02.911602 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:07:02.911612 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:07:02.911621 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:07:02.911629 kernel: Segment Routing with IPv6
Jan 29 11:07:02.911637 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:07:02.911646 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:07:02.911655 kernel: Key type dns_resolver registered
Jan 29 11:07:02.911669 kernel: registered taskstats version 1
Jan 29 11:07:02.911678 kernel: Loading compiled-in X.509 certificates
Jan 29 11:07:02.911685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f3333311a24aa8c58222f4e98a07eaa1f186ad1a'
Jan 29 11:07:02.911692 kernel: Key type .fscrypt registered
Jan 29 11:07:02.911699 kernel: Key type fscrypt-provisioning registered
Jan 29 11:07:02.911709 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:07:02.911716 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:07:02.911723 kernel: ima: No architecture policies found
Jan 29 11:07:02.911730 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:07:02.911737 kernel: clk: Disabling unused clocks
Jan 29 11:07:02.911744 kernel: Freeing unused kernel memory: 39680K
Jan 29 11:07:02.911751 kernel: Run /init as init process
Jan 29 11:07:02.911758 kernel: with arguments:
Jan 29 11:07:02.911766 kernel: /init
Jan 29 11:07:02.911773 kernel: with environment:
Jan 29 11:07:02.911780 kernel: HOME=/
Jan 29 11:07:02.911787 kernel: TERM=linux
Jan 29 11:07:02.911794 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:07:02.911803 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:07:02.911812 systemd[1]: Detected virtualization kvm.
Jan 29 11:07:02.911820 systemd[1]: Detected architecture arm64.
Jan 29 11:07:02.911829 systemd[1]: Running in initrd.
Jan 29 11:07:02.911836 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:07:02.911843 systemd[1]: Hostname set to .
Jan 29 11:07:02.911851 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:07:02.911859 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:07:02.911866 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:07:02.911874 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:07:02.911882 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:07:02.911904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:07:02.911912 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:07:02.911920 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:07:02.911929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:07:02.911937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:07:02.911944 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:07:02.911952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:07:02.911961 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:07:02.911969 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:07:02.911976 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:07:02.911987 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:07:02.911994 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:07:02.912002 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:07:02.912010 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:07:02.912017 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:07:02.912027 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:07:02.912035 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:07:02.912043 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:07:02.912051 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:07:02.912058 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:07:02.912066 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:07:02.912087 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:07:02.912095 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:07:02.912103 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:07:02.912113 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:07:02.912121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:07:02.912128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:07:02.912136 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:07:02.912143 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:07:02.912175 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 11:07:02.912197 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:07:02.912206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:07:02.912215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:07:02.912223 kernel: Bridge firewalling registered
Jan 29 11:07:02.912230 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:07:02.912239 systemd-journald[239]: Journal started
Jan 29 11:07:02.912259 systemd-journald[239]: Runtime Journal (/run/log/journal/d7c1107588524dc4b6cd465f62304f4a) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:07:02.895737 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 11:07:02.912229 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 11:07:02.915801 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:07:02.916913 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:07:02.929254 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:07:02.930700 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:07:02.932142 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:07:02.934814 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:07:02.942592 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:07:02.944142 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:07:02.946581 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:07:02.947795 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:07:02.961329 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:07:02.963140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:07:02.970868 dracut-cmdline[276]: dracut-dracut-053
Jan 29 11:07:02.973227 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c8edc06d36325e34bb125a9ad39c4f788eb9f01102631b71efea3f9afa94c89e
Jan 29 11:07:02.993440 systemd-resolved[278]: Positive Trust Anchors:
Jan 29 11:07:02.993511 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:07:02.993542 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:07:02.998259 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 29 11:07:02.999370 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:07:03.000609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:07:03.042098 kernel: SCSI subsystem initialized
Jan 29 11:07:03.047098 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:07:03.054091 kernel: iscsi: registered transport (tcp)
Jan 29 11:07:03.071100 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:07:03.071119 kernel: QLogic iSCSI HBA Driver
Jan 29 11:07:03.110570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:07:03.127278 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:07:03.143205 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:07:03.143239 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:07:03.144114 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:07:03.188100 kernel: raid6: neonx8 gen() 15783 MB/s
Jan 29 11:07:03.205099 kernel: raid6: neonx4 gen() 15662 MB/s
Jan 29 11:07:03.222097 kernel: raid6: neonx2 gen() 13207 MB/s
Jan 29 11:07:03.239088 kernel: raid6: neonx1 gen() 10479 MB/s
Jan 29 11:07:03.256101 kernel: raid6: int64x8 gen() 6958 MB/s
Jan 29 11:07:03.273102 kernel: raid6: int64x4 gen() 7343 MB/s
Jan 29 11:07:03.290099 kernel: raid6: int64x2 gen() 6128 MB/s
Jan 29 11:07:03.307099 kernel: raid6: int64x1 gen() 5055 MB/s
Jan 29 11:07:03.307123 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Jan 29 11:07:03.324102 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Jan 29 11:07:03.324127 kernel: raid6: using neon recovery algorithm
Jan 29 11:07:03.329088 kernel: xor: measuring software checksum speed
Jan 29 11:07:03.329105 kernel: 8regs : 19797 MB/sec
Jan 29 11:07:03.330461 kernel: 32regs : 18649 MB/sec
Jan 29 11:07:03.330472 kernel: arm64_neon : 26998 MB/sec
Jan 29 11:07:03.330482 kernel: xor: using function: arm64_neon (26998 MB/sec)
Jan 29 11:07:03.383102 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:07:03.392973 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:07:03.403226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:07:03.413990 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 11:07:03.417041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:07:03.419314 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:07:03.433028 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 29 11:07:03.457065 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:07:03.473253 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:07:03.511826 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:07:03.522247 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:07:03.534854 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:07:03.536516 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:07:03.538101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:07:03.539963 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:07:03.549270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:07:03.552859 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:07:03.559126 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:07:03.559231 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:07:03.559242 kernel: GPT:9289727 != 19775487
Jan 29 11:07:03.559251 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:07:03.559260 kernel: GPT:9289727 != 19775487
Jan 29 11:07:03.559268 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:07:03.559280 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:07:03.555991 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:07:03.556121 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:07:03.560365 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:07:03.561140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:07:03.561276 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:07:03.563278 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:07:03.571125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:07:03.574325 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:07:03.578099 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (510)
Jan 29 11:07:03.580382 kernel: BTRFS: device fsid b5bc7ecc-f31a-46c7-9582-5efca7819025 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (523)
Jan 29 11:07:03.580319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:07:03.592153 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:07:03.599547 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:07:03.606925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:07:03.610899 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:07:03.611923 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:07:03.624225 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:07:03.625855 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:07:03.632120 disk-uuid[552]: Primary Header is updated.
Jan 29 11:07:03.632120 disk-uuid[552]: Secondary Entries is updated.
Jan 29 11:07:03.632120 disk-uuid[552]: Secondary Header is updated.
Jan 29 11:07:03.636318 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:07:03.651877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:07:04.652093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:07:04.652161 disk-uuid[553]: The operation has completed successfully.
Jan 29 11:07:04.670792 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:07:04.670922 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:07:04.697281 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:07:04.700243 sh[574]: Success
Jan 29 11:07:04.719106 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:07:04.765655 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:07:04.767465 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:07:04.768812 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:07:04.779120 kernel: BTRFS info (device dm-0): first mount of filesystem b5bc7ecc-f31a-46c7-9582-5efca7819025
Jan 29 11:07:04.779169 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:07:04.779180 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:07:04.780493 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:07:04.780508 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:07:04.783665 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:07:04.784875 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:07:04.795261 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:07:04.796662 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:07:04.803711 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:07:04.803752 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:07:04.803768 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:07:04.807126 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:07:04.814127 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:07:04.816135 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:07:04.822698 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:07:04.832274 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:07:04.892916 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:07:04.899266 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:07:04.938560 systemd-networkd[764]: lo: Link UP
Jan 29 11:07:04.938572 systemd-networkd[764]: lo: Gained carrier
Jan 29 11:07:04.939300 systemd-networkd[764]: Enumeration completed
Jan 29 11:07:04.939392 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:07:04.940146 ignition[666]: Ignition 2.20.0
Jan 29 11:07:04.939709 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:07:04.940153 ignition[666]: Stage: fetch-offline
Jan 29 11:07:04.939712 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:07:04.940417 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:04.940664 systemd-networkd[764]: eth0: Link UP
Jan 29 11:07:04.940431 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:04.940676 systemd-networkd[764]: eth0: Gained carrier
Jan 29 11:07:04.940634 ignition[666]: parsed url from cmdline: ""
Jan 29 11:07:04.940684 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:07:04.940638 ignition[666]: no config URL provided
Jan 29 11:07:04.940904 systemd[1]: Reached target network.target - Network.
Jan 29 11:07:04.940643 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:07:04.940651 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:07:04.940688 ignition[666]: op(1): [started] loading QEMU firmware config module
Jan 29 11:07:04.940693 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:07:04.948790 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:07:04.958125 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:07:04.960642 ignition[666]: parsing config with SHA512: 2b74aefe053195e2c7f11eb29b6f6d0a799d5a29ede9ab7c6380510a10950f56672f02dd07f05f98ecbc792aba04ce34bf51bb55753b9d041e9f6414df37159f
Jan 29 11:07:04.964238 unknown[666]: fetched base config from "system"
Jan 29 11:07:04.964248 unknown[666]: fetched user config from "qemu"
Jan 29 11:07:04.964600 ignition[666]: fetch-offline: fetch-offline passed
Jan 29 11:07:04.966187 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:07:04.964687 ignition[666]: Ignition finished successfully
Jan 29 11:07:04.967283 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:07:04.971243 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:07:04.983344 ignition[771]: Ignition 2.20.0
Jan 29 11:07:04.983354 ignition[771]: Stage: kargs
Jan 29 11:07:04.983520 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:04.983529 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:04.984202 ignition[771]: kargs: kargs passed
Jan 29 11:07:04.984246 ignition[771]: Ignition finished successfully
Jan 29 11:07:04.986473 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:07:04.997264 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:07:05.007108 ignition[779]: Ignition 2.20.0
Jan 29 11:07:05.007118 ignition[779]: Stage: disks
Jan 29 11:07:05.007271 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:05.007280 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:05.007924 ignition[779]: disks: disks passed
Jan 29 11:07:05.007963 ignition[779]: Ignition finished successfully
Jan 29 11:07:05.010557 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:07:05.011465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:07:05.012678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:07:05.014159 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:07:05.015574 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:07:05.016862 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:07:05.032245 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:07:05.041545 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:07:05.044830 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:07:05.047052 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:07:05.089088 kernel: EXT4-fs (vda9): mounted filesystem bd47c032-97f4-4b3a-b174-3601de374086 r/w with ordered data mode. Quota mode: none.
Jan 29 11:07:05.089865 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:07:05.091029 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:07:05.107175 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:07:05.108777 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:07:05.110062 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:07:05.110127 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:07:05.115307 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (799)
Jan 29 11:07:05.115335 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:07:05.110152 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:07:05.118353 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:07:05.118369 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:07:05.116778 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:07:05.120183 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:07:05.120416 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:07:05.122023 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:07:05.162196 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:07:05.165974 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:07:05.169260 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:07:05.173168 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:07:05.254395 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:07:05.265197 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:07:05.266556 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:07:05.271104 kernel: BTRFS info (device vda6): last unmount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:07:05.288789 ignition[912]: INFO : Ignition 2.20.0
Jan 29 11:07:05.290368 ignition[912]: INFO : Stage: mount
Jan 29 11:07:05.290368 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:05.290368 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:05.290194 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:07:05.295618 ignition[912]: INFO : mount: mount passed
Jan 29 11:07:05.295618 ignition[912]: INFO : Ignition finished successfully
Jan 29 11:07:05.293026 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:07:05.304200 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:07:05.778010 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:07:05.793267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:07:05.799581 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (926)
Jan 29 11:07:05.799610 kernel: BTRFS info (device vda6): first mount of filesystem 9c6de53f-d522-4994-b092-a63f342c3ab0
Jan 29 11:07:05.800572 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:07:05.800586 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:07:05.803110 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:07:05.804195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:07:05.824530 ignition[943]: INFO : Ignition 2.20.0
Jan 29 11:07:05.824530 ignition[943]: INFO : Stage: files
Jan 29 11:07:05.825919 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:05.825919 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:05.825919 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:07:05.828486 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:07:05.828486 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:07:05.828486 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:07:05.828486 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:07:05.828486 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:07:05.828332 unknown[943]: wrote ssh authorized keys file for user: core
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:07:05.834649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 11:07:06.083170 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 11:07:06.298607 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 11:07:06.298607 ignition[943]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 29 11:07:06.301307 ignition[943]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:07:06.301307 ignition[943]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:07:06.301307 ignition[943]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 11:07:06.301307 ignition[943]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:07:06.319430 ignition[943]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:07:06.323547 ignition[943]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:07:06.323547 ignition[943]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:07:06.323547 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:07:06.328936 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:07:06.328936 ignition[943]: INFO : files: files passed
Jan 29 11:07:06.328936 ignition[943]: INFO : Ignition finished successfully
Jan 29 11:07:06.326154 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:07:06.336339 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:07:06.337840 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:07:06.340505 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:07:06.340596 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:07:06.344614 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:07:06.347551 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:07:06.347551 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:07:06.349836 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:07:06.349837 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:07:06.350899 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:07:06.361213 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:07:06.379355 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:07:06.379456 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:07:06.381108 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:07:06.382505 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:07:06.383955 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:07:06.389179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:07:06.401196 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:07:06.414303 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:07:06.422710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:07:06.423693 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:07:06.425191 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:07:06.426547 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:07:06.426688 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:07:06.428579 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:07:06.430036 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:07:06.431274 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:07:06.432549 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:07:06.433954 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:07:06.435440 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:07:06.436903 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:07:06.438464 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:07:06.439892 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:07:06.441186 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:07:06.442301 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:07:06.442433 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:07:06.444202 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:07:06.445665 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:07:06.447021 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:07:06.450144 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:07:06.451116 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:07:06.451254 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:07:06.453354 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:07:06.453476 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:07:06.455124 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:07:06.456436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:07:06.457164 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:07:06.458787 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:07:06.460151 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:07:06.461715 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:07:06.461854 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:07:06.462879 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:07:06.463002 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:07:06.464090 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:07:06.464249 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:07:06.465422 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:07:06.465571 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:07:06.476303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:07:06.477745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:07:06.478415 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:07:06.478598 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:07:06.479945 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:07:06.480109 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:07:06.486057 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 11:07:06.487003 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 11:07:06.491166 ignition[997]: INFO : Ignition 2.20.0
Jan 29 11:07:06.491166 ignition[997]: INFO : Stage: umount
Jan 29 11:07:06.493771 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:07:06.493771 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:07:06.493771 ignition[997]: INFO : umount: umount passed
Jan 29 11:07:06.493771 ignition[997]: INFO : Ignition finished successfully
Jan 29 11:07:06.493144 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 11:07:06.494451 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 11:07:06.494555 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 11:07:06.496435 systemd[1]: Stopped target network.target - Network.
Jan 29 11:07:06.497301 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 11:07:06.497368 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 11:07:06.498568 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 11:07:06.498609 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 11:07:06.499903 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 11:07:06.499950 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 11:07:06.501225 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 11:07:06.501265 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 11:07:06.502766 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 11:07:06.503881 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 11:07:06.510791 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 11:07:06.510935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 11:07:06.513180 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 11:07:06.513243 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:07:06.516135 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jan 29 11:07:06.518444 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 11:07:06.519188 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 11:07:06.520407 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 11:07:06.520442 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:07:06.537203 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 11:07:06.537882 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 11:07:06.537951 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:07:06.539643 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 11:07:06.539703 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:07:06.541156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 11:07:06.541199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:07:06.543014 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:07:06.546323 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 11:07:06.546405 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 11:07:06.548737 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 11:07:06.548822 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 11:07:06.552893 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 11:07:06.553003 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 11:07:06.559836 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 11:07:06.559982 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:07:06.561707 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 11:07:06.561750 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:07:06.563013 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 11:07:06.563047 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:07:06.564381 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 11:07:06.564427 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:07:06.566374 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 11:07:06.566417 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:07:06.568308 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:07:06.568351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:07:06.577275 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 11:07:06.578090 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 11:07:06.578148 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:07:06.579744 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 11:07:06.579784 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:07:06.581201 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 11:07:06.581237 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:07:06.582811 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:07:06.582850 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:07:06.585711 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 11:07:06.585808 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 11:07:06.586899 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 11:07:06.589978 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 11:07:06.600872 systemd[1]: Switching root.
Jan 29 11:07:06.624952 systemd-journald[239]: Journal stopped
Jan 29 11:07:07.284436 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 29 11:07:07.284501 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 11:07:07.284514 kernel: SELinux: policy capability open_perms=1
Jan 29 11:07:07.284527 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 11:07:07.284536 kernel: SELinux: policy capability always_check_network=0
Jan 29 11:07:07.284545 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 11:07:07.284555 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 11:07:07.284564 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 11:07:07.284573 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 11:07:07.284583 kernel: audit: type=1403 audit(1738148826.749:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 11:07:07.284594 systemd[1]: Successfully loaded SELinux policy in 30.914ms.
Jan 29 11:07:07.284611 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.513ms.
Jan 29 11:07:07.284623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:07:07.284634 systemd[1]: Detected virtualization kvm.
Jan 29 11:07:07.284645 systemd[1]: Detected architecture arm64.
Jan 29 11:07:07.284655 systemd[1]: Detected first boot.
Jan 29 11:07:07.284665 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:07:07.284682 zram_generator::config[1042]: No configuration found.
Jan 29 11:07:07.284701 systemd[1]: Populated /etc with preset unit settings.
Jan 29 11:07:07.284712 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 11:07:07.284724 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 11:07:07.284736 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 11:07:07.284748 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 11:07:07.284758 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 11:07:07.284768 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 11:07:07.284779 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 11:07:07.284790 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 11:07:07.284800 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 11:07:07.284810 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 11:07:07.284820 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 11:07:07.284831 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:07:07.284841 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:07:07.284852 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 11:07:07.284862 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 11:07:07.284874 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 11:07:07.284885 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:07:07.284896 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 11:07:07.284907 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:07:07.284925 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 11:07:07.284936 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 11:07:07.284947 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:07:07.284957 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 11:07:07.284969 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:07:07.284980 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:07:07.284990 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:07:07.285000 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:07:07.285010 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 11:07:07.285023 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 11:07:07.285033 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:07:07.285044 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:07:07.285054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:07:07.285067 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 11:07:07.285150 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 11:07:07.285166 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 11:07:07.285176 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 11:07:07.285186 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 11:07:07.285197 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 11:07:07.285208 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 11:07:07.285219 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 11:07:07.285229 systemd[1]: Reached target machines.target - Containers.
Jan 29 11:07:07.285242 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 11:07:07.285253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:07:07.285263 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:07:07.285273 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 11:07:07.285284 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:07:07.285295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:07:07.285305 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:07:07.285317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 11:07:07.285331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:07:07.285342 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 11:07:07.285353 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 11:07:07.285363 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 11:07:07.285373 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 11:07:07.285383 kernel: fuse: init (API version 7.39)
Jan 29 11:07:07.285411 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 11:07:07.285421 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:07:07.285431 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:07:07.285443 kernel: ACPI: bus type drm_connector registered
Jan 29 11:07:07.285452 kernel: loop: module loaded
Jan 29 11:07:07.285462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 11:07:07.285473 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 11:07:07.285484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:07:07.285494 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 11:07:07.285505 systemd[1]: Stopped verity-setup.service.
Jan 29 11:07:07.285515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 11:07:07.285524 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 11:07:07.285557 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 29 11:07:07.285579 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 11:07:07.285590 systemd-journald[1117]: Journal started
Jan 29 11:07:07.285612 systemd-journald[1117]: Runtime Journal (/run/log/journal/d7c1107588524dc4b6cd465f62304f4a) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:07:07.109736 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 11:07:07.124863 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 11:07:07.125217 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 11:07:07.287101 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:07:07.287663 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 11:07:07.288625 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 11:07:07.289623 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 11:07:07.290603 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 11:07:07.291833 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:07:07.293143 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 11:07:07.293302 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 11:07:07.294443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:07:07.294592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:07:07.295711 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:07:07.295849 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:07:07.296945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:07:07.297107 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:07:07.298317 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 11:07:07.298452 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 11:07:07.300416 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:07:07.300566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:07:07.301714 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:07:07.304110 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 11:07:07.305225 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 11:07:07.316858 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 11:07:07.326233 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 11:07:07.328385 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 11:07:07.329572 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 11:07:07.329616 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:07:07.331503 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 11:07:07.333793 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 11:07:07.335934 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 11:07:07.337054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:07:07.338517 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 11:07:07.341241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 11:07:07.342594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:07:07.343575 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 11:07:07.345331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:07:07.346270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:07:07.350266 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 11:07:07.355245 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:07:07.359031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:07:07.360570 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 11:07:07.361982 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 11:07:07.363371 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 11:07:07.366119 systemd-journald[1117]: Time spent on flushing to /var/log/journal/d7c1107588524dc4b6cd465f62304f4a is 19.282ms for 844 entries.
Jan 29 11:07:07.366119 systemd-journald[1117]: System Journal (/var/log/journal/d7c1107588524dc4b6cd465f62304f4a) is 8.0M, max 195.6M, 187.6M free.
Jan 29 11:07:07.394375 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 29 11:07:07.394428 kernel: loop0: detected capacity change from 0 to 113536
Jan 29 11:07:07.394446 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 11:07:07.365617 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 11:07:07.371989 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 11:07:07.379289 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 11:07:07.381164 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 11:07:07.385112 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:07:07.402336 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 11:07:07.406308 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jan 29 11:07:07.406325 systemd-tmpfiles[1155]: ACLs are not supported, ignoring.
Jan 29 11:07:07.407952 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 11:07:07.411899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:07:07.413505 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 11:07:07.417099 kernel: loop1: detected capacity change from 0 to 201592
Jan 29 11:07:07.425385 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 11:07:07.428182 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 29 11:07:07.456287 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 11:07:07.462454 kernel: loop2: detected capacity change from 0 to 116808
Jan 29 11:07:07.462725 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:07:07.478399 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 29 11:07:07.478440 systemd-tmpfiles[1178]: ACLs are not supported, ignoring.
Jan 29 11:07:07.482699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:07:07.494092 kernel: loop3: detected capacity change from 0 to 113536
Jan 29 11:07:07.499095 kernel: loop4: detected capacity change from 0 to 201592
Jan 29 11:07:07.505139 kernel: loop5: detected capacity change from 0 to 116808
Jan 29 11:07:07.508327 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 11:07:07.508761 (sd-merge)[1182]: Merged extensions into '/usr'.
Jan 29 11:07:07.512467 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 11:07:07.512484 systemd[1]: Reloading...
Jan 29 11:07:07.572113 zram_generator::config[1206]: No configuration found.
Jan 29 11:07:07.627962 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 11:07:07.662472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:07:07.698047 systemd[1]: Reloading finished in 185 ms.
Jan 29 11:07:07.728154 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 11:07:07.729289 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 11:07:07.741229 systemd[1]: Starting ensure-sysext.service...
Jan 29 11:07:07.743024 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:07:07.754398 systemd[1]: Reloading requested from client PID 1243 ('systemctl') (unit ensure-sysext.service)...
Jan 29 11:07:07.754417 systemd[1]: Reloading...
Jan 29 11:07:07.765388 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 11:07:07.765643 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 11:07:07.766317 systemd-tmpfiles[1244]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 11:07:07.766529 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 29 11:07:07.766580 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Jan 29 11:07:07.768889 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:07:07.768903 systemd-tmpfiles[1244]: Skipping /boot
Jan 29 11:07:07.776251 systemd-tmpfiles[1244]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 11:07:07.776266 systemd-tmpfiles[1244]: Skipping /boot
Jan 29 11:07:07.813194 zram_generator::config[1274]: No configuration found.
Jan 29 11:07:07.893667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 11:07:07.928997 systemd[1]: Reloading finished in 174 ms.
Jan 29 11:07:07.947165 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 11:07:07.948416 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:07:07.965114 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 29 11:07:07.967232 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 11:07:07.969243 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 11:07:07.973282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:07:07.977908 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:07:07.981244 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 11:07:07.987724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:07:07.989035 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:07:07.994631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:07:07.997761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:07:07.998770 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:07:08.001323 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 11:07:08.003549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:07:08.005354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:07:08.006787 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:07:08.006911 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:07:08.008426 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 11:07:08.014195 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Jan 29 11:07:08.014298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:07:08.014493 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:07:08.023539 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 11:07:08.033375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 11:07:08.038357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 11:07:08.041750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 11:07:08.045765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 11:07:08.046771 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 11:07:08.048051 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 11:07:08.049520 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:07:08.051871 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 11:07:08.053438 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 11:07:08.054952 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 11:07:08.055103 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 11:07:08.056080 augenrules[1360]: No rules
Jan 29 11:07:08.056539 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 11:07:08.056658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 11:07:08.059601 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 11:07:08.060121 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 29 11:07:08.063811 systemd[1]: Finished ensure-sysext.service.
Jan 29 11:07:08.087301 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:07:08.093568 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 11:07:08.094497 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 11:07:08.094799 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 11:07:08.095910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 11:07:08.098106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 11:07:08.099269 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 11:07:08.099398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 11:07:08.103569 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 11:07:08.108087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1359)
Jan 29 11:07:08.109583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 11:07:08.109661 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 11:07:08.112084 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 11:07:08.163815 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 11:07:08.165563 systemd-networkd[1376]: lo: Link UP
Jan 29 11:07:08.165576 systemd-networkd[1376]: lo: Gained carrier
Jan 29 11:07:08.166527 systemd-networkd[1376]: Enumeration completed
Jan 29 11:07:08.166653 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:07:08.169280 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 11:07:08.170799 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:07:08.170808 systemd-networkd[1376]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:07:08.171557 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:07:08.171587 systemd-networkd[1376]: eth0: Link UP
Jan 29 11:07:08.171589 systemd-networkd[1376]: eth0: Gained carrier
Jan 29 11:07:08.171597 systemd-networkd[1376]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:07:08.178332 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 11:07:08.180586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:07:08.183032 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 11:07:08.188155 systemd-networkd[1376]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:07:08.189084 systemd-timesyncd[1381]: Network configuration changed, trying to establish connection.
Jan 29 11:07:08.189981 systemd-timesyncd[1381]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 11:07:08.190164 systemd-timesyncd[1381]: Initial clock synchronization to Wed 2025-01-29 11:07:08.086717 UTC.
Jan 29 11:07:08.192466 systemd-resolved[1310]: Positive Trust Anchors:
Jan 29 11:07:08.192801 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:07:08.193239 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:07:08.200569 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Jan 29 11:07:08.205116 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 11:07:08.206255 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:07:08.207592 systemd[1]: Reached target network.target - Network.
Jan 29 11:07:08.208405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:07:08.231372 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:07:08.241165 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 11:07:08.243596 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 11:07:08.259400 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:07:08.272631 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:07:08.291602 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 11:07:08.292789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:07:08.293726 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:07:08.294575 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 11:07:08.295536 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 11:07:08.296591 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 11:07:08.297479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 11:07:08.298389 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 11:07:08.299263 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 11:07:08.299300 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:07:08.299927 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:07:08.301387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 11:07:08.303423 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 11:07:08.309989 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 11:07:08.312035 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 11:07:08.313502 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 11:07:08.314441 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:07:08.315134 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:07:08.315828 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:07:08.315861 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 11:07:08.316801 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 11:07:08.318550 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 11:07:08.321022 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 11:07:08.321186 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 11:07:08.325270 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 11:07:08.326034 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 11:07:08.331558 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 11:07:08.336930 jq[1415]: false
Jan 29 11:07:08.337004 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 11:07:08.339283 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 11:07:08.342617 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 11:07:08.344850 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 11:07:08.345272 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 11:07:08.346036 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 11:07:08.348529 dbus-daemon[1414]: [system] SELinux support is enabled
Jan 29 11:07:08.348264 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 11:07:08.350526 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 11:07:08.353529 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 11:07:08.357691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 11:07:08.357872 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 11:07:08.358173 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 11:07:08.358323 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 11:07:08.358589 jq[1425]: true
Jan 29 11:07:08.365748 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 11:07:08.365805 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 11:07:08.367199 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 11:07:08.367216 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 11:07:08.374931 jq[1431]: true
Jan 29 11:07:08.375878 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 11:07:08.376300 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 11:07:08.379853 (ntainerd)[1439]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found loop3
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found loop4
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found loop5
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda1
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda2
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda3
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found usr
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda4
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda6
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda7
Jan 29 11:07:08.387866 extend-filesystems[1416]: Found vda9
Jan 29 11:07:08.387866 extend-filesystems[1416]: Checking size of /dev/vda9
Jan 29 11:07:08.404690 update_engine[1423]: I20250129 11:07:08.404537 1423 main.cc:92] Flatcar Update Engine starting
Jan 29 11:07:08.407105 update_engine[1423]: I20250129 11:07:08.406963 1423 update_check_scheduler.cc:74] Next update check in 4m15s
Jan 29 11:07:08.407240 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 11:07:08.412796 extend-filesystems[1416]: Resized partition /dev/vda9
Jan 29 11:07:08.414095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1354)
Jan 29 11:07:08.416752 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Jan 29 11:07:08.417243 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 11:07:08.427650 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 11:07:08.427569 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 11:07:08.429908 systemd-logind[1420]: New seat seat0.
Jan 29 11:07:08.431570 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 11:07:08.459099 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 11:07:08.489355 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 11:07:08.489355 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 11:07:08.489355 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 11:07:08.492441 extend-filesystems[1416]: Resized filesystem in /dev/vda9
Jan 29 11:07:08.491495 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 11:07:08.491728 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 11:07:08.498382 bash[1463]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 11:07:08.499692 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 11:07:08.501461 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 11:07:08.504394 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 11:07:08.585527 containerd[1439]: time="2025-01-29T11:07:08.583620160Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 29 11:07:08.608403 containerd[1439]: time="2025-01-29T11:07:08.608354000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.609860800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.609892920Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.609909800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610063960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610098360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610158000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610169560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610316040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610331360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610343520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:07:08.610963 containerd[1439]: time="2025-01-29T11:07:08.610352040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610415400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610598160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610698040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610711360Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610782400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 11:07:08.611239 containerd[1439]: time="2025-01-29T11:07:08.610827600Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 11:07:08.613969 containerd[1439]: time="2025-01-29T11:07:08.613939960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 11:07:08.614100 containerd[1439]: time="2025-01-29T11:07:08.614069000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 11:07:08.614164 containerd[1439]: time="2025-01-29T11:07:08.614145240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 11:07:08.614236 containerd[1439]: time="2025-01-29T11:07:08.614222480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 11:07:08.614292 containerd[1439]: time="2025-01-29T11:07:08.614279360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 11:07:08.614495 containerd[1439]: time="2025-01-29T11:07:08.614475080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 11:07:08.614816 containerd[1439]: time="2025-01-29T11:07:08.614792200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 11:07:08.614989 containerd[1439]: time="2025-01-29T11:07:08.614969360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 11:07:08.615054 containerd[1439]: time="2025-01-29T11:07:08.615041000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 11:07:08.615124 containerd[1439]: time="2025-01-29T11:07:08.615111560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 11:07:08.615198 containerd[1439]: time="2025-01-29T11:07:08.615184040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615254 containerd[1439]: time="2025-01-29T11:07:08.615243120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615306 containerd[1439]: time="2025-01-29T11:07:08.615294160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615360 containerd[1439]: time="2025-01-29T11:07:08.615348440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615417 containerd[1439]: time="2025-01-29T11:07:08.615404080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615471 containerd[1439]: time="2025-01-29T11:07:08.615459880Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615534 containerd[1439]: time="2025-01-29T11:07:08.615521080Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615586 containerd[1439]: time="2025-01-29T11:07:08.615574880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 11:07:08.615655 containerd[1439]: time="2025-01-29T11:07:08.615642800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.615726 containerd[1439]: time="2025-01-29T11:07:08.615712680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.615795 containerd[1439]: time="2025-01-29T11:07:08.615780800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.615859 containerd[1439]: time="2025-01-29T11:07:08.615843320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.615911 containerd[1439]: time="2025-01-29T11:07:08.615899680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.615963 containerd[1439]: time="2025-01-29T11:07:08.615951760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616013 containerd[1439]: time="2025-01-29T11:07:08.616002120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616067 containerd[1439]: time="2025-01-29T11:07:08.616056200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616156 containerd[1439]: time="2025-01-29T11:07:08.616141240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616214 containerd[1439]: time="2025-01-29T11:07:08.616201760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616264 containerd[1439]: time="2025-01-29T11:07:08.616253920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616318 containerd[1439]: time="2025-01-29T11:07:08.616306920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616374 containerd[1439]: time="2025-01-29T11:07:08.616361840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616430 containerd[1439]: time="2025-01-29T11:07:08.616418600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 11:07:08.616507 containerd[1439]: time="2025-01-29T11:07:08.616492520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616571 containerd[1439]: time="2025-01-29T11:07:08.616556120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.616641 containerd[1439]: time="2025-01-29T11:07:08.616626080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 11:07:08.616883 containerd[1439]: time="2025-01-29T11:07:08.616865320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 11:07:08.616956 containerd[1439]: time="2025-01-29T11:07:08.616938800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 11:07:08.617007 containerd[1439]: time="2025-01-29T11:07:08.616993720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 11:07:08.617068 containerd[1439]: time="2025-01-29T11:07:08.617053640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 11:07:08.617146 containerd[1439]: time="2025-01-29T11:07:08.617132560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.617204 containerd[1439]: time="2025-01-29T11:07:08.617192000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 11:07:08.617251 containerd[1439]: time="2025-01-29T11:07:08.617241160Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 11:07:08.617303 containerd[1439]: time="2025-01-29T11:07:08.617291280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 11:07:08.617730 containerd[1439]: time="2025-01-29T11:07:08.617666120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 11:07:08.617911 containerd[1439]: time="2025-01-29T11:07:08.617892040Z" level=info msg="Connect containerd service"
Jan 29 11:07:08.617999 containerd[1439]: time="2025-01-29T11:07:08.617984680Z" level=info msg="using legacy CRI server"
Jan 29 11:07:08.618045 containerd[1439]: time="2025-01-29T11:07:08.618033880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.618327400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.618966520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619170080Z" level=info msg="Start subscribing containerd event"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619225920Z" level=info msg="Start recovering state"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619293200Z" level=info msg="Start event monitor"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619309440Z" level=info msg="Start snapshots syncer"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619321800Z" level=info msg="Start cni network conf syncer for default"
Jan 29 11:07:08.619418 containerd[1439]: time="2025-01-29T11:07:08.619330960Z" level=info msg="Start streaming server"
Jan 29 11:07:08.620062 containerd[1439]: time="2025-01-29T11:07:08.620032000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 11:07:08.620209 containerd[1439]: time="2025-01-29T11:07:08.620191520Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 11:07:08.620390 containerd[1439]: time="2025-01-29T11:07:08.620374080Z" level=info msg="containerd successfully booted in 0.038282s"
Jan 29 11:07:08.620447 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 11:07:09.188693 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 11:07:09.206979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 11:07:09.223352 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 11:07:09.228696 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 11:07:09.228917 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 11:07:09.231402 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 11:07:09.244129 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 11:07:09.246646 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 11:07:09.248608 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 11:07:09.249710 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 11:07:09.765253 systemd-networkd[1376]: eth0: Gained IPv6LL
Jan 29 11:07:09.767638 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 11:07:09.769338 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 11:07:09.782373 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 11:07:09.784824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 11:07:09.786859 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 11:07:09.802535 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 11:07:09.802724 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 11:07:09.804290 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 11:07:09.807287 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 11:07:10.308099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 11:07:10.309563 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 11:07:10.312037 (kubelet)[1519]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 11:07:10.313161 systemd[1]: Startup finished in 559ms (kernel) + 4.037s (initrd) + 3.597s (userspace) = 8.195s.
Jan 29 11:07:10.724439 kubelet[1519]: E0129 11:07:10.724315 1519 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 11:07:10.726592 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 11:07:10.726739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 11:07:15.295635 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 11:07:15.296712 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:60660.service - OpenSSH per-connection server daemon (10.0.0.1:60660).
Jan 29 11:07:15.369621 sshd[1532]: Accepted publickey for core from 10.0.0.1 port 60660 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:15.371369 sshd-session[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:15.378746 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 11:07:15.389315 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 11:07:15.391040 systemd-logind[1420]: New session 1 of user core.
Jan 29 11:07:15.398583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 11:07:15.401851 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 11:07:15.408555 (systemd)[1536]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 11:07:15.497824 systemd[1536]: Queued start job for default target default.target.
Jan 29 11:07:15.507992 systemd[1536]: Created slice app.slice - User Application Slice.
Jan 29 11:07:15.508036 systemd[1536]: Reached target paths.target - Paths.
Jan 29 11:07:15.508059 systemd[1536]: Reached target timers.target - Timers.
Jan 29 11:07:15.509354 systemd[1536]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 11:07:15.519224 systemd[1536]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 11:07:15.519294 systemd[1536]: Reached target sockets.target - Sockets.
Jan 29 11:07:15.519307 systemd[1536]: Reached target basic.target - Basic System.
Jan 29 11:07:15.519344 systemd[1536]: Reached target default.target - Main User Target.
Jan 29 11:07:15.519371 systemd[1536]: Startup finished in 103ms.
Jan 29 11:07:15.519615 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 11:07:15.521016 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 11:07:15.581545 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:60670.service - OpenSSH per-connection server daemon (10.0.0.1:60670).
Jan 29 11:07:15.637909 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 60670 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:15.639295 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:15.643091 systemd-logind[1420]: New session 2 of user core.
Jan 29 11:07:15.655266 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 11:07:15.706289 sshd[1549]: Connection closed by 10.0.0.1 port 60670
Jan 29 11:07:15.706755 sshd-session[1547]: pam_unix(sshd:session): session closed for user core
Jan 29 11:07:15.716328 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:60670.service: Deactivated successfully.
Jan 29 11:07:15.718436 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 11:07:15.720267 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit.
Jan 29 11:07:15.722238 systemd-logind[1420]: Removed session 2.
Jan 29 11:07:15.723107 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:60686.service - OpenSSH per-connection server daemon (10.0.0.1:60686).
Jan 29 11:07:15.767924 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:15.769155 sshd-session[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:15.773180 systemd-logind[1420]: New session 3 of user core.
Jan 29 11:07:15.790270 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 11:07:15.837350 sshd[1556]: Connection closed by 10.0.0.1 port 60686
Jan 29 11:07:15.837673 sshd-session[1554]: pam_unix(sshd:session): session closed for user core
Jan 29 11:07:15.854485 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:60686.service: Deactivated successfully.
Jan 29 11:07:15.855808 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 11:07:15.856979 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit.
Jan 29 11:07:15.858130 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:60692.service - OpenSSH per-connection server daemon (10.0.0.1:60692).
Jan 29 11:07:15.858865 systemd-logind[1420]: Removed session 3.
Jan 29 11:07:15.902957 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 60692 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:15.904307 sshd-session[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:15.908313 systemd-logind[1420]: New session 4 of user core.
Jan 29 11:07:15.917260 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 11:07:15.974519 sshd[1563]: Connection closed by 10.0.0.1 port 60692
Jan 29 11:07:15.975046 sshd-session[1561]: pam_unix(sshd:session): session closed for user core
Jan 29 11:07:15.990728 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:60692.service: Deactivated successfully.
Jan 29 11:07:15.993508 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 11:07:15.994892 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit.
Jan 29 11:07:15.996227 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:60694.service - OpenSSH per-connection server daemon (10.0.0.1:60694).
Jan 29 11:07:15.997816 systemd-logind[1420]: Removed session 4.
Jan 29 11:07:16.042949 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 60694 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:16.044195 sshd-session[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:16.047778 systemd-logind[1420]: New session 5 of user core.
Jan 29 11:07:16.059299 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 11:07:16.120664 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 11:07:16.120933 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 11:07:16.142060 sudo[1571]: pam_unix(sudo:session): session closed for user root
Jan 29 11:07:16.143474 sshd[1570]: Connection closed by 10.0.0.1 port 60694
Jan 29 11:07:16.144065 sshd-session[1568]: pam_unix(sshd:session): session closed for user core
Jan 29 11:07:16.160588 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:60694.service: Deactivated successfully.
Jan 29 11:07:16.162168 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 11:07:16.163402 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit.
Jan 29 11:07:16.173460 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:60710.service - OpenSSH per-connection server daemon (10.0.0.1:60710).
Jan 29 11:07:16.174286 systemd-logind[1420]: Removed session 5.
Jan 29 11:07:16.216271 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 60710 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI
Jan 29 11:07:16.217684 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 11:07:16.221725 systemd-logind[1420]: New session 6 of user core.
Jan 29 11:07:16.232258 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:07:16.283417 sudo[1580]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:07:16.283687 sudo[1580]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:07:16.286753 sudo[1580]: pam_unix(sudo:session): session closed for user root Jan 29 11:07:16.291139 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:07:16.291384 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:07:16.308399 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:07:16.331752 augenrules[1602]: No rules Jan 29 11:07:16.332401 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:07:16.332609 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:07:16.333873 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 29 11:07:16.335069 sshd[1578]: Connection closed by 10.0.0.1 port 60710 Jan 29 11:07:16.335578 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Jan 29 11:07:16.344467 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:60710.service: Deactivated successfully. Jan 29 11:07:16.346018 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:07:16.347236 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:07:16.348380 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:60722.service - OpenSSH per-connection server daemon (10.0.0.1:60722). Jan 29 11:07:16.349188 systemd-logind[1420]: Removed session 6. 
Jan 29 11:07:16.394594 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 60722 ssh2: RSA SHA256:Bq1DMYRFt3vwSJT5tcC1MQpWKmkwK1uKH+vc+Uts7DI Jan 29 11:07:16.395757 sshd-session[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:07:16.399732 systemd-logind[1420]: New session 7 of user core. Jan 29 11:07:16.405345 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:07:16.455246 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:07:16.455782 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:07:16.479401 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:07:16.494517 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:07:16.494691 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:07:16.930415 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:07:16.940329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:07:16.962551 systemd[1]: Reloading requested from client PID 1656 ('systemctl') (unit session-7.scope)... Jan 29 11:07:16.962568 systemd[1]: Reloading... Jan 29 11:07:17.034100 zram_generator::config[1694]: No configuration found. Jan 29 11:07:17.231988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:07:17.284910 systemd[1]: Reloading finished in 322 ms. Jan 29 11:07:17.329065 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:07:17.329202 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:07:17.329440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:07:17.331623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:07:17.429767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:07:17.433845 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:07:17.470741 kubelet[1740]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:07:17.470741 kubelet[1740]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:07:17.470741 kubelet[1740]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:07:17.471045 kubelet[1740]: I0129 11:07:17.470791 1740 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:07:18.273760 kubelet[1740]: I0129 11:07:18.273715 1740 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:07:18.273760 kubelet[1740]: I0129 11:07:18.273746 1740 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:07:18.274048 kubelet[1740]: I0129 11:07:18.274019 1740 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:07:18.321770 kubelet[1740]: I0129 11:07:18.320394 1740 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:07:18.327148 kubelet[1740]: E0129 11:07:18.327106 1740 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:07:18.327148 kubelet[1740]: I0129 11:07:18.327137 1740 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:07:18.329829 kubelet[1740]: I0129 11:07:18.329802 1740 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:07:18.331179 kubelet[1740]: I0129 11:07:18.331125 1740 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:07:18.331366 kubelet[1740]: I0129 11:07:18.331174 1740 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.94","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:07:18.331508 kubelet[1740]: I0129 11:07:18.331427 1740 topology_manager.go:138] "Creating topology manager with none policy" 
Jan 29 11:07:18.331508 kubelet[1740]: I0129 11:07:18.331437 1740 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:07:18.331663 kubelet[1740]: I0129 11:07:18.331632 1740 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:07:18.335409 kubelet[1740]: I0129 11:07:18.335376 1740 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:07:18.335451 kubelet[1740]: I0129 11:07:18.335413 1740 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:07:18.335451 kubelet[1740]: I0129 11:07:18.335437 1740 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:07:18.335451 kubelet[1740]: I0129 11:07:18.335447 1740 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:07:18.335645 kubelet[1740]: E0129 11:07:18.335612 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:18.336994 kubelet[1740]: E0129 11:07:18.336951 1740 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:18.339567 kubelet[1740]: I0129 11:07:18.339544 1740 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:07:18.340186 kubelet[1740]: I0129 11:07:18.340168 1740 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:07:18.340298 kubelet[1740]: W0129 11:07:18.340286 1740 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 11:07:18.341261 kubelet[1740]: I0129 11:07:18.341242 1740 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:07:18.341323 kubelet[1740]: I0129 11:07:18.341279 1740 server.go:1287] "Started kubelet" Jan 29 11:07:18.341762 kubelet[1740]: I0129 11:07:18.341390 1740 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:07:18.342329 kubelet[1740]: I0129 11:07:18.342306 1740 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:07:18.344632 kubelet[1740]: I0129 11:07:18.344401 1740 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:07:18.344911 kubelet[1740]: I0129 11:07:18.344895 1740 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:07:18.345506 kubelet[1740]: I0129 11:07:18.345472 1740 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:07:18.345694 kubelet[1740]: I0129 11:07:18.345669 1740 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:07:18.347468 kubelet[1740]: E0129 11:07:18.346392 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.347468 kubelet[1740]: I0129 11:07:18.346422 1740 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:07:18.347468 kubelet[1740]: I0129 11:07:18.346586 1740 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:07:18.347468 kubelet[1740]: I0129 11:07:18.346665 1740 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:07:18.347468 kubelet[1740]: W0129 11:07:18.347311 1740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 
11:07:18.347468 kubelet[1740]: E0129 11:07:18.347338 1740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:07:18.347468 kubelet[1740]: W0129 11:07:18.347383 1740 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.94" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 11:07:18.347468 kubelet[1740]: E0129 11:07:18.347394 1740 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.94\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:07:18.347827 kubelet[1740]: E0129 11:07:18.347805 1740 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:07:18.348454 kubelet[1740]: I0129 11:07:18.348427 1740 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:07:18.348563 kubelet[1740]: I0129 11:07:18.348539 1740 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:07:18.352553 kubelet[1740]: I0129 11:07:18.352529 1740 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:07:18.356740 kubelet[1740]: E0129 11:07:18.356688 1740 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.94\" not found" node="10.0.0.94" Jan 29 11:07:18.361982 kubelet[1740]: I0129 11:07:18.361956 1740 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:07:18.361982 kubelet[1740]: I0129 11:07:18.361978 1740 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:07:18.362121 kubelet[1740]: I0129 11:07:18.361999 1740 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:07:18.431755 kubelet[1740]: I0129 11:07:18.431709 1740 policy_none.go:49] "None policy: Start" Jan 29 11:07:18.431755 kubelet[1740]: I0129 11:07:18.431742 1740 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:07:18.431755 kubelet[1740]: I0129 11:07:18.431755 1740 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:07:18.437790 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:07:18.447575 kubelet[1740]: E0129 11:07:18.447533 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.453016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 29 11:07:18.456403 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:07:18.456752 kubelet[1740]: I0129 11:07:18.456492 1740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:07:18.457516 kubelet[1740]: I0129 11:07:18.457497 1740 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:07:18.457673 kubelet[1740]: I0129 11:07:18.457619 1740 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:07:18.457673 kubelet[1740]: I0129 11:07:18.457643 1740 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:07:18.457673 kubelet[1740]: I0129 11:07:18.457650 1740 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:07:18.458207 kubelet[1740]: E0129 11:07:18.458036 1740 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:07:18.462056 kubelet[1740]: I0129 11:07:18.462017 1740 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:07:18.463956 kubelet[1740]: I0129 11:07:18.463927 1740 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:07:18.464376 kubelet[1740]: I0129 11:07:18.464332 1740 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:07:18.464939 kubelet[1740]: I0129 11:07:18.464555 1740 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:07:18.465391 kubelet[1740]: E0129 11:07:18.465335 1740 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:07:18.465391 kubelet[1740]: E0129 11:07:18.465376 1740 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.94\" not found" Jan 29 11:07:18.565979 kubelet[1740]: I0129 11:07:18.565873 1740 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.94" Jan 29 11:07:18.570950 kubelet[1740]: I0129 11:07:18.570900 1740 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.94" Jan 29 11:07:18.570950 kubelet[1740]: E0129 11:07:18.570934 1740 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.94\": node \"10.0.0.94\" not found" Jan 29 11:07:18.574967 kubelet[1740]: E0129 11:07:18.574943 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.675774 kubelet[1740]: E0129 11:07:18.675719 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.776037 kubelet[1740]: E0129 11:07:18.775992 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.877159 kubelet[1740]: E0129 11:07:18.877012 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.971433 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 29 11:07:18.972599 sshd[1612]: Connection closed by 10.0.0.1 port 60722 Jan 29 11:07:18.972959 sshd-session[1610]: pam_unix(sshd:session): session closed for user core Jan 29 11:07:18.976020 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:07:18.976299 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:60722.service: Deactivated successfully. 
Jan 29 11:07:18.977821 kubelet[1740]: E0129 11:07:18.977787 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:18.978009 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:07:18.978964 systemd-logind[1420]: Removed session 7. Jan 29 11:07:19.078430 kubelet[1740]: E0129 11:07:19.078382 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:19.178947 kubelet[1740]: E0129 11:07:19.178832 1740 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.94\" not found" Jan 29 11:07:19.276270 kubelet[1740]: I0129 11:07:19.276227 1740 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:07:19.276437 kubelet[1740]: W0129 11:07:19.276407 1740 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:07:19.276588 kubelet[1740]: W0129 11:07:19.276444 1740 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:07:19.276588 kubelet[1740]: W0129 11:07:19.276463 1740 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:07:19.276588 kubelet[1740]: W0129 11:07:19.276534 1740 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted 
less than a second and no items received Jan 29 11:07:19.280234 kubelet[1740]: I0129 11:07:19.280136 1740 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:07:19.280575 containerd[1439]: time="2025-01-29T11:07:19.280535508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:07:19.280846 kubelet[1740]: I0129 11:07:19.280720 1740 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:07:19.336576 kubelet[1740]: I0129 11:07:19.336531 1740 apiserver.go:52] "Watching apiserver" Jan 29 11:07:19.336806 kubelet[1740]: E0129 11:07:19.336767 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:19.351132 kubelet[1740]: E0129 11:07:19.350989 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c" Jan 29 11:07:19.365260 systemd[1]: Created slice kubepods-besteffort-pod341e9e9f_f369_4f71_beb1_fa8048eb906c.slice - libcontainer container kubepods-besteffort-pod341e9e9f_f369_4f71_beb1_fa8048eb906c.slice. Jan 29 11:07:19.378554 systemd[1]: Created slice kubepods-besteffort-pod43457da9_f6c9_4a2f_b5f1_93ad73627b56.slice - libcontainer container kubepods-besteffort-pod43457da9_f6c9_4a2f_b5f1_93ad73627b56.slice. 
Jan 29 11:07:19.447466 kubelet[1740]: I0129 11:07:19.447344 1740 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:07:19.457532 kubelet[1740]: I0129 11:07:19.457431 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-var-run-calico\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457532 kubelet[1740]: I0129 11:07:19.457482 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-cni-net-dir\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457532 kubelet[1740]: I0129 11:07:19.457505 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mj9m\" (UniqueName: \"kubernetes.io/projected/b00b2b43-4033-4a45-94e5-a857f0ae2b4c-kube-api-access-9mj9m\") pod \"csi-node-driver-hw698\" (UID: \"b00b2b43-4033-4a45-94e5-a857f0ae2b4c\") " pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:19.457532 kubelet[1740]: I0129 11:07:19.457541 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/341e9e9f-f369-4f71-beb1-fa8048eb906c-xtables-lock\") pod \"kube-proxy-56nsl\" (UID: \"341e9e9f-f369-4f71-beb1-fa8048eb906c\") " pod="kube-system/kube-proxy-56nsl" Jan 29 11:07:19.457744 kubelet[1740]: I0129 11:07:19.457562 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-xtables-lock\") pod 
\"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457744 kubelet[1740]: I0129 11:07:19.457577 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-policysync\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457744 kubelet[1740]: I0129 11:07:19.457617 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/43457da9-f6c9-4a2f-b5f1-93ad73627b56-node-certs\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457744 kubelet[1740]: I0129 11:07:19.457650 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-var-lib-calico\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457744 kubelet[1740]: I0129 11:07:19.457667 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b00b2b43-4033-4a45-94e5-a857f0ae2b4c-varrun\") pod \"csi-node-driver-hw698\" (UID: \"b00b2b43-4033-4a45-94e5-a857f0ae2b4c\") " pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:19.457887 kubelet[1740]: I0129 11:07:19.457681 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b00b2b43-4033-4a45-94e5-a857f0ae2b4c-kubelet-dir\") pod \"csi-node-driver-hw698\" (UID: \"b00b2b43-4033-4a45-94e5-a857f0ae2b4c\") " 
pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:19.457887 kubelet[1740]: I0129 11:07:19.457705 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b00b2b43-4033-4a45-94e5-a857f0ae2b4c-socket-dir\") pod \"csi-node-driver-hw698\" (UID: \"b00b2b43-4033-4a45-94e5-a857f0ae2b4c\") " pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:19.457887 kubelet[1740]: I0129 11:07:19.457721 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-lib-modules\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457887 kubelet[1740]: I0129 11:07:19.457801 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b00b2b43-4033-4a45-94e5-a857f0ae2b4c-registration-dir\") pod \"csi-node-driver-hw698\" (UID: \"b00b2b43-4033-4a45-94e5-a857f0ae2b4c\") " pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:19.457887 kubelet[1740]: I0129 11:07:19.457844 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsx8\" (UniqueName: \"kubernetes.io/projected/341e9e9f-f369-4f71-beb1-fa8048eb906c-kube-api-access-jvsx8\") pod \"kube-proxy-56nsl\" (UID: \"341e9e9f-f369-4f71-beb1-fa8048eb906c\") " pod="kube-system/kube-proxy-56nsl" Jan 29 11:07:19.457992 kubelet[1740]: I0129 11:07:19.457862 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-cni-bin-dir\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 
11:07:19.457992 kubelet[1740]: I0129 11:07:19.457889 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-cni-log-dir\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457992 kubelet[1740]: I0129 11:07:19.457905 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/43457da9-f6c9-4a2f-b5f1-93ad73627b56-flexvol-driver-host\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457992 kubelet[1740]: I0129 11:07:19.457941 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbtl2\" (UniqueName: \"kubernetes.io/projected/43457da9-f6c9-4a2f-b5f1-93ad73627b56-kube-api-access-fbtl2\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf" Jan 29 11:07:19.457992 kubelet[1740]: I0129 11:07:19.457956 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/341e9e9f-f369-4f71-beb1-fa8048eb906c-kube-proxy\") pod \"kube-proxy-56nsl\" (UID: \"341e9e9f-f369-4f71-beb1-fa8048eb906c\") " pod="kube-system/kube-proxy-56nsl" Jan 29 11:07:19.458120 kubelet[1740]: I0129 11:07:19.457971 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/341e9e9f-f369-4f71-beb1-fa8048eb906c-lib-modules\") pod \"kube-proxy-56nsl\" (UID: \"341e9e9f-f369-4f71-beb1-fa8048eb906c\") " pod="kube-system/kube-proxy-56nsl" Jan 29 11:07:19.458120 kubelet[1740]: I0129 11:07:19.457992 1740 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43457da9-f6c9-4a2f-b5f1-93ad73627b56-tigera-ca-bundle\") pod \"calico-node-q2twf\" (UID: \"43457da9-f6c9-4a2f-b5f1-93ad73627b56\") " pod="calico-system/calico-node-q2twf"
Jan 29 11:07:19.562515 kubelet[1740]: E0129 11:07:19.562477 1740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:07:19.562515 kubelet[1740]: W0129 11:07:19.562500 1740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:07:19.562515 kubelet[1740]: E0129 11:07:19.562519 1740 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:07:19.571117 kubelet[1740]: E0129 11:07:19.571091 1740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:07:19.571497 kubelet[1740]: W0129 11:07:19.571409 1740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:07:19.571497 kubelet[1740]: E0129 11:07:19.571438 1740 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:07:19.574559 kubelet[1740]: E0129 11:07:19.574522 1740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:07:19.574559 kubelet[1740]: W0129 11:07:19.574538 1740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:07:19.574559 kubelet[1740]: E0129 11:07:19.574552 1740 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:07:19.575676 kubelet[1740]: E0129 11:07:19.575493 1740 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 11:07:19.575676 kubelet[1740]: W0129 11:07:19.575509 1740 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 11:07:19.575676 kubelet[1740]: E0129 11:07:19.575522 1740 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 11:07:19.676399 kubelet[1740]: E0129 11:07:19.676357 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:19.677722 containerd[1439]: time="2025-01-29T11:07:19.677217169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-56nsl,Uid:341e9e9f-f369-4f71-beb1-fa8048eb906c,Namespace:kube-system,Attempt:0,}"
Jan 29 11:07:19.681127 kubelet[1740]: E0129 11:07:19.680554 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:19.681226 containerd[1439]: time="2025-01-29T11:07:19.681008303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2twf,Uid:43457da9-f6c9-4a2f-b5f1-93ad73627b56,Namespace:calico-system,Attempt:0,}"
Jan 29 11:07:20.325343 containerd[1439]: time="2025-01-29T11:07:20.325285838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:07:20.326719 containerd[1439]: time="2025-01-29T11:07:20.326527568Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 11:07:20.328275 containerd[1439]: time="2025-01-29T11:07:20.328239244Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:07:20.329573 containerd[1439]: time="2025-01-29T11:07:20.329536557Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:07:20.330451 containerd[1439]: time="2025-01-29T11:07:20.330407068Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 11:07:20.331142 containerd[1439]: time="2025-01-29T11:07:20.331117847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 11:07:20.332775 containerd[1439]: time="2025-01-29T11:07:20.332371100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 651.263094ms"
Jan 29 11:07:20.335929 containerd[1439]: time="2025-01-29T11:07:20.335877547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 658.574826ms"
Jan 29 11:07:20.337481 kubelet[1740]: E0129 11:07:20.337447 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:20.456646 containerd[1439]: time="2025-01-29T11:07:20.456305077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:07:20.456646 containerd[1439]: time="2025-01-29T11:07:20.456432112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:07:20.456646 containerd[1439]: time="2025-01-29T11:07:20.456454083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:07:20.456938 containerd[1439]: time="2025-01-29T11:07:20.456880686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:07:20.457908 containerd[1439]: time="2025-01-29T11:07:20.457756460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 11:07:20.458011 containerd[1439]: time="2025-01-29T11:07:20.457939597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 11:07:20.458011 containerd[1439]: time="2025-01-29T11:07:20.457985132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:07:20.458189 containerd[1439]: time="2025-01-29T11:07:20.458126682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 11:07:20.458875 kubelet[1740]: E0129 11:07:20.458785 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c"
Jan 29 11:07:20.567313 systemd[1]: Started cri-containerd-31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367.scope - libcontainer container 31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367.
Jan 29 11:07:20.568954 systemd[1]: Started cri-containerd-cb42d0e08f3389d30e92ad692b8da5789eb03a88e7dc801b4e148a71483f0f44.scope - libcontainer container cb42d0e08f3389d30e92ad692b8da5789eb03a88e7dc801b4e148a71483f0f44.
Jan 29 11:07:20.573478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3799818252.mount: Deactivated successfully.
Jan 29 11:07:20.590149 containerd[1439]: time="2025-01-29T11:07:20.590005187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2twf,Uid:43457da9-f6c9-4a2f-b5f1-93ad73627b56,Namespace:calico-system,Attempt:0,} returns sandbox id \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\""
Jan 29 11:07:20.591667 kubelet[1740]: E0129 11:07:20.591344 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:20.592652 containerd[1439]: time="2025-01-29T11:07:20.592608187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 11:07:20.594789 containerd[1439]: time="2025-01-29T11:07:20.594739009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-56nsl,Uid:341e9e9f-f369-4f71-beb1-fa8048eb906c,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb42d0e08f3389d30e92ad692b8da5789eb03a88e7dc801b4e148a71483f0f44\""
Jan 29 11:07:20.595541 kubelet[1740]: E0129 11:07:20.595480 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:21.337720 kubelet[1740]: E0129 11:07:21.337659 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:21.383452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2867656794.mount: Deactivated successfully.
Jan 29 11:07:21.438081 containerd[1439]: time="2025-01-29T11:07:21.438021855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:21.439119 containerd[1439]: time="2025-01-29T11:07:21.439060558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Jan 29 11:07:21.439981 containerd[1439]: time="2025-01-29T11:07:21.439938460Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:21.442438 containerd[1439]: time="2025-01-29T11:07:21.442373002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:21.443056 containerd[1439]: time="2025-01-29T11:07:21.443013253Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 850.350835ms"
Jan 29 11:07:21.443056 containerd[1439]: time="2025-01-29T11:07:21.443052017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 29 11:07:21.444338 containerd[1439]: time="2025-01-29T11:07:21.444262448Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 11:07:21.445056 containerd[1439]: time="2025-01-29T11:07:21.445029202Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 11:07:21.456443 containerd[1439]: time="2025-01-29T11:07:21.456389890Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb\""
Jan 29 11:07:21.456996 containerd[1439]: time="2025-01-29T11:07:21.456954566Z" level=info msg="StartContainer for \"fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb\""
Jan 29 11:07:21.481259 systemd[1]: Started cri-containerd-fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb.scope - libcontainer container fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb.
Jan 29 11:07:21.510090 containerd[1439]: time="2025-01-29T11:07:21.510039453Z" level=info msg="StartContainer for \"fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb\" returns successfully"
Jan 29 11:07:21.528914 systemd[1]: cri-containerd-fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb.scope: Deactivated successfully.
Jan 29 11:07:21.574616 containerd[1439]: time="2025-01-29T11:07:21.574540342Z" level=info msg="shim disconnected" id=fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb namespace=k8s.io
Jan 29 11:07:21.574616 containerd[1439]: time="2025-01-29T11:07:21.574613165Z" level=warning msg="cleaning up after shim disconnected" id=fe63e4d6d924c2807460a251048b37485b696b54a260fc352c4b30affc0bedbb namespace=k8s.io
Jan 29 11:07:21.574616 containerd[1439]: time="2025-01-29T11:07:21.574622777Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:07:22.338838 kubelet[1740]: E0129 11:07:22.338788 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:22.458197 kubelet[1740]: E0129 11:07:22.458138 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c"
Jan 29 11:07:22.471992 kubelet[1740]: E0129 11:07:22.471510 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:22.694414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928625225.mount: Deactivated successfully.
Jan 29 11:07:22.933185 containerd[1439]: time="2025-01-29T11:07:22.933103905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:22.934047 containerd[1439]: time="2025-01-29T11:07:22.933992382Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399"
Jan 29 11:07:22.935277 containerd[1439]: time="2025-01-29T11:07:22.935237503Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:22.937423 containerd[1439]: time="2025-01-29T11:07:22.937380115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:22.938508 containerd[1439]: time="2025-01-29T11:07:22.938470428Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.494175794s"
Jan 29 11:07:22.938550 containerd[1439]: time="2025-01-29T11:07:22.938509080Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\""
Jan 29 11:07:22.939874 containerd[1439]: time="2025-01-29T11:07:22.939854241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 11:07:22.940705 containerd[1439]: time="2025-01-29T11:07:22.940661185Z" level=info msg="CreateContainer within sandbox \"cb42d0e08f3389d30e92ad692b8da5789eb03a88e7dc801b4e148a71483f0f44\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 11:07:22.950358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641922229.mount: Deactivated successfully.
Jan 29 11:07:22.954711 containerd[1439]: time="2025-01-29T11:07:22.954652963Z" level=info msg="CreateContainer within sandbox \"cb42d0e08f3389d30e92ad692b8da5789eb03a88e7dc801b4e148a71483f0f44\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9dec5963624c54167be15b72c719eb24f40bd9dae7487ac567005ddddaa6cf68\""
Jan 29 11:07:22.955239 containerd[1439]: time="2025-01-29T11:07:22.955199396Z" level=info msg="StartContainer for \"9dec5963624c54167be15b72c719eb24f40bd9dae7487ac567005ddddaa6cf68\""
Jan 29 11:07:22.984287 systemd[1]: Started cri-containerd-9dec5963624c54167be15b72c719eb24f40bd9dae7487ac567005ddddaa6cf68.scope - libcontainer container 9dec5963624c54167be15b72c719eb24f40bd9dae7487ac567005ddddaa6cf68.
Jan 29 11:07:23.021529 containerd[1439]: time="2025-01-29T11:07:23.017632856Z" level=info msg="StartContainer for \"9dec5963624c54167be15b72c719eb24f40bd9dae7487ac567005ddddaa6cf68\" returns successfully"
Jan 29 11:07:23.340022 kubelet[1740]: E0129 11:07:23.339904 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:23.474923 kubelet[1740]: E0129 11:07:23.474787 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:23.484494 kubelet[1740]: I0129 11:07:23.484357 1740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-56nsl" podStartSLOduration=3.141046509 podStartE2EDuration="5.484338267s" podCreationTimestamp="2025-01-29 11:07:18 +0000 UTC" firstStartedPulling="2025-01-29 11:07:20.595960325 +0000 UTC m=+3.159076889" lastFinishedPulling="2025-01-29 11:07:22.939252083 +0000 UTC m=+5.502368647" observedRunningTime="2025-01-29 11:07:23.484257678 +0000 UTC m=+6.047374282" watchObservedRunningTime="2025-01-29 11:07:23.484338267 +0000 UTC m=+6.047454871"
Jan 29 11:07:24.340606 kubelet[1740]: E0129 11:07:24.340545 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:24.458613 kubelet[1740]: E0129 11:07:24.458560 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c"
Jan 29 11:07:24.476274 kubelet[1740]: E0129 11:07:24.476228 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:25.133126 containerd[1439]: time="2025-01-29T11:07:25.133045711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:25.133593 containerd[1439]: time="2025-01-29T11:07:25.133545601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 29 11:07:25.139546 containerd[1439]: time="2025-01-29T11:07:25.139479942Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:25.141753 containerd[1439]: time="2025-01-29T11:07:25.141718989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 11:07:25.142444 containerd[1439]: time="2025-01-29T11:07:25.142388608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.202401454s"
Jan 29 11:07:25.142444 containerd[1439]: time="2025-01-29T11:07:25.142418339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 29 11:07:25.144459 containerd[1439]: time="2025-01-29T11:07:25.144426398Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 11:07:25.169675 containerd[1439]: time="2025-01-29T11:07:25.169615024Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3\""
Jan 29 11:07:25.170197 containerd[1439]: time="2025-01-29T11:07:25.170169388Z" level=info msg="StartContainer for \"e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3\""
Jan 29 11:07:25.198248 systemd[1]: Started cri-containerd-e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3.scope - libcontainer container e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3.
Jan 29 11:07:25.223686 containerd[1439]: time="2025-01-29T11:07:25.223643073Z" level=info msg="StartContainer for \"e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3\" returns successfully"
Jan 29 11:07:25.341032 kubelet[1740]: E0129 11:07:25.340973 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:25.479585 kubelet[1740]: E0129 11:07:25.479436 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:25.671559 containerd[1439]: time="2025-01-29T11:07:25.671494941Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 11:07:25.673536 systemd[1]: cri-containerd-e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3.scope: Deactivated successfully.
Jan 29 11:07:25.690489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3-rootfs.mount: Deactivated successfully.
Jan 29 11:07:25.695607 kubelet[1740]: I0129 11:07:25.695483 1740 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 11:07:25.885742 containerd[1439]: time="2025-01-29T11:07:25.885597605Z" level=info msg="shim disconnected" id=e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3 namespace=k8s.io
Jan 29 11:07:25.885742 containerd[1439]: time="2025-01-29T11:07:25.885655112Z" level=warning msg="cleaning up after shim disconnected" id=e0654e0c586a3bf1da561196f9e521447fc73c84ddfb787acdec9a0c32ce44b3 namespace=k8s.io
Jan 29 11:07:25.885742 containerd[1439]: time="2025-01-29T11:07:25.885663653Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 11:07:26.341351 kubelet[1740]: E0129 11:07:26.341232 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:26.463883 systemd[1]: Created slice kubepods-besteffort-podb00b2b43_4033_4a45_94e5_a857f0ae2b4c.slice - libcontainer container kubepods-besteffort-podb00b2b43_4033_4a45_94e5_a857f0ae2b4c.slice.
Jan 29 11:07:26.465764 containerd[1439]: time="2025-01-29T11:07:26.465722728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:0,}"
Jan 29 11:07:26.485427 kubelet[1740]: E0129 11:07:26.483397 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 11:07:26.485565 containerd[1439]: time="2025-01-29T11:07:26.484171047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 11:07:26.604129 containerd[1439]: time="2025-01-29T11:07:26.603990624Z" level=error msg="Failed to destroy network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:26.604129 containerd[1439]: time="2025-01-29T11:07:26.604386051Z" level=error msg="encountered an error cleaning up failed sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:26.604129 containerd[1439]: time="2025-01-29T11:07:26.604455501Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:26.604745 kubelet[1740]: E0129 11:07:26.604673 1740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:26.604745 kubelet[1740]: E0129 11:07:26.604739 1740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698"
Jan 29 11:07:26.604811 kubelet[1740]: E0129 11:07:26.604757 1740 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698"
Jan 29 11:07:26.604811 kubelet[1740]: E0129 11:07:26.604794 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c"
Jan 29 11:07:26.605476 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3-shm.mount: Deactivated successfully.
Jan 29 11:07:27.341613 kubelet[1740]: E0129 11:07:27.341561 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 11:07:27.486598 kubelet[1740]: I0129 11:07:27.486556 1740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3"
Jan 29 11:07:27.487210 containerd[1439]: time="2025-01-29T11:07:27.487176402Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\""
Jan 29 11:07:27.487484 containerd[1439]: time="2025-01-29T11:07:27.487341508Z" level=info msg="Ensure that sandbox ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3 in task-service has been cleanup successfully"
Jan 29 11:07:27.488144 containerd[1439]: time="2025-01-29T11:07:27.488112629Z" level=info msg="TearDown network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" successfully"
Jan 29 11:07:27.488144 containerd[1439]: time="2025-01-29T11:07:27.488137738Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" returns successfully"
Jan 29 11:07:27.489233 containerd[1439]: time="2025-01-29T11:07:27.488844389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:1,}"
Jan 29 11:07:27.503613 systemd[1]: run-netns-cni\x2dcc04fcde\x2d3eb7\x2d70b2\x2dc88f\x2d0d150c70b444.mount: Deactivated successfully.
Jan 29 11:07:27.560723 containerd[1439]: time="2025-01-29T11:07:27.560667021Z" level=error msg="Failed to destroy network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:27.561024 containerd[1439]: time="2025-01-29T11:07:27.560991844Z" level=error msg="encountered an error cleaning up failed sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:27.561096 containerd[1439]: time="2025-01-29T11:07:27.561051683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:27.561360 kubelet[1740]: E0129 11:07:27.561293 1740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 11:07:27.561410 kubelet[1740]: E0129 11:07:27.561365 1740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698"
Jan 29 11:07:27.561410 kubelet[1740]: E0129 11:07:27.561387 1740 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698"
Jan 29 11:07:27.561469 kubelet[1740]: E0129 11:07:27.561428 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c"
Jan 29 11:07:27.562262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30-shm.mount: Deactivated successfully.
Jan 29 11:07:28.342655 kubelet[1740]: E0129 11:07:28.342615 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:28.492199 kubelet[1740]: I0129 11:07:28.492167 1740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30" Jan 29 11:07:28.492834 containerd[1439]: time="2025-01-29T11:07:28.492798064Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\"" Jan 29 11:07:28.493200 containerd[1439]: time="2025-01-29T11:07:28.493045515Z" level=info msg="Ensure that sandbox a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30 in task-service has been cleanup successfully" Jan 29 11:07:28.493418 containerd[1439]: time="2025-01-29T11:07:28.493392417Z" level=info msg="TearDown network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" successfully" Jan 29 11:07:28.493474 containerd[1439]: time="2025-01-29T11:07:28.493413138Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" returns successfully" Jan 29 11:07:28.494763 systemd[1]: run-netns-cni\x2dcce13784\x2d76de\x2dc6ff\x2d60b3\x2d89acab39db42.mount: Deactivated successfully. 
Jan 29 11:07:28.496091 containerd[1439]: time="2025-01-29T11:07:28.495670778Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\"" Jan 29 11:07:28.496149 containerd[1439]: time="2025-01-29T11:07:28.496110225Z" level=info msg="TearDown network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" successfully" Jan 29 11:07:28.496149 containerd[1439]: time="2025-01-29T11:07:28.496128391Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" returns successfully" Jan 29 11:07:28.497046 containerd[1439]: time="2025-01-29T11:07:28.496718712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:2,}" Jan 29 11:07:28.652827 containerd[1439]: time="2025-01-29T11:07:28.652694511Z" level=error msg="Failed to destroy network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:28.653800 containerd[1439]: time="2025-01-29T11:07:28.653573884Z" level=error msg="encountered an error cleaning up failed sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:28.653800 containerd[1439]: time="2025-01-29T11:07:28.653653293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:28.655331 kubelet[1740]: E0129 11:07:28.653912 1740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:28.655331 kubelet[1740]: E0129 11:07:28.653975 1740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:28.655331 kubelet[1740]: E0129 11:07:28.654004 1740 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:28.654670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244-shm.mount: Deactivated successfully. 
Jan 29 11:07:28.655598 kubelet[1740]: E0129 11:07:28.654050 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c" Jan 29 11:07:29.343151 kubelet[1740]: E0129 11:07:29.343109 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:29.384128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926518629.mount: Deactivated successfully. 
Jan 29 11:07:29.495050 kubelet[1740]: I0129 11:07:29.495018 1740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244" Jan 29 11:07:29.495531 containerd[1439]: time="2025-01-29T11:07:29.495498065Z" level=info msg="StopPodSandbox for \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\"" Jan 29 11:07:29.495862 containerd[1439]: time="2025-01-29T11:07:29.495670598Z" level=info msg="Ensure that sandbox 374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244 in task-service has been cleanup successfully" Jan 29 11:07:29.495898 containerd[1439]: time="2025-01-29T11:07:29.495855789Z" level=info msg="TearDown network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\" successfully" Jan 29 11:07:29.495898 containerd[1439]: time="2025-01-29T11:07:29.495871441Z" level=info msg="StopPodSandbox for \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\" returns successfully" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496106025Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\"" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496183327Z" level=info msg="TearDown network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" successfully" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496194188Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" returns successfully" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496440630Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\"" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496519969Z" level=info msg="TearDown network for sandbox 
\"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" successfully" Jan 29 11:07:29.496768 containerd[1439]: time="2025-01-29T11:07:29.496529792Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" returns successfully" Jan 29 11:07:29.497415 containerd[1439]: time="2025-01-29T11:07:29.497189739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:3,}" Jan 29 11:07:29.497224 systemd[1]: run-netns-cni\x2db943eeee\x2d23bc\x2db4ed\x2d1788\x2d30364d3d9fe1.mount: Deactivated successfully. Jan 29 11:07:29.533708 containerd[1439]: time="2025-01-29T11:07:29.533657376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:29.536689 containerd[1439]: time="2025-01-29T11:07:29.536636123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 29 11:07:29.539261 containerd[1439]: time="2025-01-29T11:07:29.538963228Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:29.541909 containerd[1439]: time="2025-01-29T11:07:29.541869583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:29.542518 containerd[1439]: time="2025-01-29T11:07:29.542490600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.058272451s" Jan 29 11:07:29.542600 containerd[1439]: time="2025-01-29T11:07:29.542520387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 29 11:07:29.550092 containerd[1439]: time="2025-01-29T11:07:29.550020459Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:07:29.577556 containerd[1439]: time="2025-01-29T11:07:29.577499828Z" level=info msg="CreateContainer within sandbox \"31afde787fcc4d2b1bda5679e9a01d512dda74fbc5b5cb7154c411f7e3a51367\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c61d46896667910ae32fbc76327b18a570f74f6fca5cb64404b1648cd72de485\"" Jan 29 11:07:29.578039 containerd[1439]: time="2025-01-29T11:07:29.578008405Z" level=info msg="StartContainer for \"c61d46896667910ae32fbc76327b18a570f74f6fca5cb64404b1648cd72de485\"" Jan 29 11:07:29.581997 containerd[1439]: time="2025-01-29T11:07:29.581955311Z" level=error msg="Failed to destroy network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:29.582296 containerd[1439]: time="2025-01-29T11:07:29.582266518Z" level=error msg="encountered an error cleaning up failed sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 29 11:07:29.582352 containerd[1439]: time="2025-01-29T11:07:29.582333199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:29.582599 kubelet[1740]: E0129 11:07:29.582567 1740 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:07:29.582665 kubelet[1740]: E0129 11:07:29.582624 1740 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698" Jan 29 11:07:29.582696 kubelet[1740]: E0129 11:07:29.582672 1740 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw698" Jan 29 
11:07:29.582756 kubelet[1740]: E0129 11:07:29.582733 1740 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hw698_calico-system(b00b2b43-4033-4a45-94e5-a857f0ae2b4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw698" podUID="b00b2b43-4033-4a45-94e5-a857f0ae2b4c" Jan 29 11:07:29.603258 systemd[1]: Started cri-containerd-c61d46896667910ae32fbc76327b18a570f74f6fca5cb64404b1648cd72de485.scope - libcontainer container c61d46896667910ae32fbc76327b18a570f74f6fca5cb64404b1648cd72de485. Jan 29 11:07:29.638973 containerd[1439]: time="2025-01-29T11:07:29.638914854Z" level=info msg="StartContainer for \"c61d46896667910ae32fbc76327b18a570f74f6fca5cb64404b1648cd72de485\" returns successfully" Jan 29 11:07:29.780353 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:07:29.780729 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:07:30.228930 systemd[1]: Created slice kubepods-besteffort-pod82a5d872_c0ab_4d3a_85d3_045ccf63c713.slice - libcontainer container kubepods-besteffort-pod82a5d872_c0ab_4d3a_85d3_045ccf63c713.slice. 
Jan 29 11:07:30.322033 kubelet[1740]: I0129 11:07:30.321976 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v7x\" (UniqueName: \"kubernetes.io/projected/82a5d872-c0ab-4d3a-85d3-045ccf63c713-kube-api-access-t4v7x\") pod \"nginx-deployment-7fcdb87857-2p5nf\" (UID: \"82a5d872-c0ab-4d3a-85d3-045ccf63c713\") " pod="default/nginx-deployment-7fcdb87857-2p5nf" Jan 29 11:07:30.343366 kubelet[1740]: E0129 11:07:30.343327 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:30.499213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0-shm.mount: Deactivated successfully. Jan 29 11:07:30.501408 kubelet[1740]: E0129 11:07:30.501107 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:07:30.504322 kubelet[1740]: I0129 11:07:30.504282 1740 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0" Jan 29 11:07:30.505104 containerd[1439]: time="2025-01-29T11:07:30.504786962Z" level=info msg="StopPodSandbox for \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\"" Jan 29 11:07:30.505104 containerd[1439]: time="2025-01-29T11:07:30.504956200Z" level=info msg="Ensure that sandbox 1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0 in task-service has been cleanup successfully" Jan 29 11:07:30.506445 systemd[1]: run-netns-cni\x2dc76d5e31\x2d809f\x2d1267\x2d2590\x2d9ff3edfd809f.mount: Deactivated successfully. 
Jan 29 11:07:30.507173 containerd[1439]: time="2025-01-29T11:07:30.507103702Z" level=info msg="TearDown network for sandbox \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\" successfully" Jan 29 11:07:30.507173 containerd[1439]: time="2025-01-29T11:07:30.507136288Z" level=info msg="StopPodSandbox for \"1f67a7976a9bd440ac4beb90aa7b39332d249f7a857bdf0695c93767f71fc9c0\" returns successfully" Jan 29 11:07:30.507770 containerd[1439]: time="2025-01-29T11:07:30.507734492Z" level=info msg="StopPodSandbox for \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\"" Jan 29 11:07:30.507896 containerd[1439]: time="2025-01-29T11:07:30.507863317Z" level=info msg="TearDown network for sandbox \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\" successfully" Jan 29 11:07:30.507896 containerd[1439]: time="2025-01-29T11:07:30.507894665Z" level=info msg="StopPodSandbox for \"374c18f1c595eb0daa9a0cc58a812e5d17d2dab90c7e1b41d694e6d84a58e244\" returns successfully" Jan 29 11:07:30.510381 containerd[1439]: time="2025-01-29T11:07:30.510355007Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\"" Jan 29 11:07:30.510460 containerd[1439]: time="2025-01-29T11:07:30.510445696Z" level=info msg="TearDown network for sandbox \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" successfully" Jan 29 11:07:30.510485 containerd[1439]: time="2025-01-29T11:07:30.510459752Z" level=info msg="StopPodSandbox for \"a29b4de2f4ad80e29195ba404fd775de1c0967b68952adf591a9b37289728e30\" returns successfully" Jan 29 11:07:30.510779 containerd[1439]: time="2025-01-29T11:07:30.510760172Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\"" Jan 29 11:07:30.510856 containerd[1439]: time="2025-01-29T11:07:30.510838481Z" level=info msg="TearDown network for sandbox \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" successfully" Jan 
29 11:07:30.510856 containerd[1439]: time="2025-01-29T11:07:30.510854694Z" level=info msg="StopPodSandbox for \"ea764a6e452b1a67d5662dc4b5a88b52da952a43b51a34cb061b39c189d393d3\" returns successfully" Jan 29 11:07:30.511410 containerd[1439]: time="2025-01-29T11:07:30.511368798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:4,}" Jan 29 11:07:30.518108 kubelet[1740]: I0129 11:07:30.517976 1740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q2twf" podStartSLOduration=3.566518092 podStartE2EDuration="12.517960059s" podCreationTimestamp="2025-01-29 11:07:18 +0000 UTC" firstStartedPulling="2025-01-29 11:07:20.592185053 +0000 UTC m=+3.155301657" lastFinishedPulling="2025-01-29 11:07:29.54362702 +0000 UTC m=+12.106743624" observedRunningTime="2025-01-29 11:07:30.517899719 +0000 UTC m=+13.081016363" watchObservedRunningTime="2025-01-29 11:07:30.517960059 +0000 UTC m=+13.081076782" Jan 29 11:07:30.531954 containerd[1439]: time="2025-01-29T11:07:30.531897522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2p5nf,Uid:82a5d872-c0ab-4d3a-85d3-045ccf63c713,Namespace:default,Attempt:0,}" Jan 29 11:07:30.693793 systemd-networkd[1376]: cali53180a6c2f3: Link UP Jan 29 11:07:30.693994 systemd-networkd[1376]: cali53180a6c2f3: Gained carrier Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.543 [INFO][2442] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.568 [INFO][2442] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-csi--node--driver--hw698-eth0 csi-node-driver- calico-system b00b2b43-4033-4a45-94e5-a857f0ae2b4c 818 0 2025-01-29 11:07:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.94 csi-node-driver-hw698 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali53180a6c2f3 [] []}} ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.568 [INFO][2442] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.641 [INFO][2477] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" HandleID="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Workload="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2477] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" HandleID="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Workload="10.0.0.94-k8s-csi--node--driver--hw698-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000495b00), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.94", "pod":"csi-node-driver-hw698", "timestamp":"2025-01-29 11:07:30.641022747 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2477] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.658 [INFO][2477] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.664 [INFO][2477] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.669 [INFO][2477] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.671 [INFO][2477] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.673 [INFO][2477] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.673 [INFO][2477] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.675 [INFO][2477] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.679 [INFO][2477] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.24.0/26 handle="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2477] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.1/26] block=192.168.24.0/26 handle="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2477] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.1/26] handle="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" host="10.0.0.94" Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:07:30.704907 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2477] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.1/26] IPv6=[] ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" HandleID="k8s-pod-network.a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Workload="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.687 [INFO][2442] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--hw698-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b00b2b43-4033-4a45-94e5-a857f0ae2b4c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"csi-node-driver-hw698", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53180a6c2f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.687 [INFO][2442] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.1/32] ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.687 [INFO][2442] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53180a6c2f3 ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.694 [INFO][2442] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 
11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.694 [INFO][2442] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-csi--node--driver--hw698-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b00b2b43-4033-4a45-94e5-a857f0ae2b4c", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a", Pod:"csi-node-driver-hw698", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.24.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali53180a6c2f3", MAC:"92:b5:24:eb:8b:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:30.705701 containerd[1439]: 2025-01-29 11:07:30.703 [INFO][2442] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a" Namespace="calico-system" Pod="csi-node-driver-hw698" WorkloadEndpoint="10.0.0.94-k8s-csi--node--driver--hw698-eth0" Jan 29 11:07:30.720198 containerd[1439]: time="2025-01-29T11:07:30.720069635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:07:30.720198 containerd[1439]: time="2025-01-29T11:07:30.720165436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:07:30.720868 containerd[1439]: time="2025-01-29T11:07:30.720177975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:30.721006 containerd[1439]: time="2025-01-29T11:07:30.720933955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:30.742278 systemd[1]: Started cri-containerd-a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a.scope - libcontainer container a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a. 
Jan 29 11:07:30.751386 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:07:30.762604 containerd[1439]: time="2025-01-29T11:07:30.762561135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw698,Uid:b00b2b43-4033-4a45-94e5-a857f0ae2b4c,Namespace:calico-system,Attempt:4,} returns sandbox id \"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a\"" Jan 29 11:07:30.764403 containerd[1439]: time="2025-01-29T11:07:30.764364851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:07:30.793275 systemd-networkd[1376]: cali40e587188aa: Link UP Jan 29 11:07:30.793544 systemd-networkd[1376]: cali40e587188aa: Gained carrier Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.565 [INFO][2462] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.581 [INFO][2462] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0 nginx-deployment-7fcdb87857- default 82a5d872-c0ab-4d3a-85d3-045ccf63c713 1051 0 2025-01-29 11:07:30 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.94 nginx-deployment-7fcdb87857-2p5nf eth0 default [] [] [kns.default ksa.default.default] cali40e587188aa [] []}} ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.581 [INFO][2462] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" 
Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.641 [INFO][2482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" HandleID="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Workload="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" HandleID="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Workload="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e0e20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"nginx-deployment-7fcdb87857-2p5nf", "timestamp":"2025-01-29 11:07:30.641019712 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.656 [INFO][2482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.684 [INFO][2482] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.760 [INFO][2482] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.766 [INFO][2482] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.773 [INFO][2482] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.775 [INFO][2482] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.778 [INFO][2482] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.778 [INFO][2482] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.779 [INFO][2482] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12 Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.785 [INFO][2482] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.789 [INFO][2482] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.2/26] block=192.168.24.0/26 
handle="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.789 [INFO][2482] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.2/26] handle="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" host="10.0.0.94" Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.789 [INFO][2482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:07:30.802302 containerd[1439]: 2025-01-29 11:07:30.789 [INFO][2482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.2/26] IPv6=[] ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" HandleID="k8s-pod-network.f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Workload="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.791 [INFO][2462] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"82a5d872-c0ab-4d3a-85d3-045ccf63c713", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-2p5nf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali40e587188aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.791 [INFO][2462] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.2/32] ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.791 [INFO][2462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40e587188aa ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.793 [INFO][2462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.794 [INFO][2462] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" 
WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"82a5d872-c0ab-4d3a-85d3-045ccf63c713", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12", Pod:"nginx-deployment-7fcdb87857-2p5nf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali40e587188aa", MAC:"ee:e1:e4:4c:cf:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:30.802893 containerd[1439]: 2025-01-29 11:07:30.800 [INFO][2462] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12" Namespace="default" Pod="nginx-deployment-7fcdb87857-2p5nf" WorkloadEndpoint="10.0.0.94-k8s-nginx--deployment--7fcdb87857--2p5nf-eth0" Jan 29 11:07:30.818749 containerd[1439]: time="2025-01-29T11:07:30.818664601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:07:30.818749 containerd[1439]: time="2025-01-29T11:07:30.818729932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:07:30.818749 containerd[1439]: time="2025-01-29T11:07:30.818741992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:30.818926 containerd[1439]: time="2025-01-29T11:07:30.818828568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:30.837260 systemd[1]: Started cri-containerd-f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12.scope - libcontainer container f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12. Jan 29 11:07:30.846262 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:07:30.862698 containerd[1439]: time="2025-01-29T11:07:30.862661673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2p5nf,Uid:82a5d872-c0ab-4d3a-85d3-045ccf63c713,Namespace:default,Attempt:0,} returns sandbox id \"f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12\"" Jan 29 11:07:31.188101 kernel: bpftool[2728]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:07:31.337278 systemd-networkd[1376]: vxlan.calico: Link UP Jan 29 11:07:31.337290 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 29 11:07:31.344152 kubelet[1740]: E0129 11:07:31.344103 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:31.511276 kubelet[1740]: E0129 11:07:31.509966 1740 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:07:31.705626 containerd[1439]: time="2025-01-29T11:07:31.705479622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:31.709887 containerd[1439]: time="2025-01-29T11:07:31.709831307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 29 11:07:31.712894 containerd[1439]: time="2025-01-29T11:07:31.712862094Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:31.715203 containerd[1439]: time="2025-01-29T11:07:31.715164300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:31.715887 containerd[1439]: time="2025-01-29T11:07:31.715843559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 951.440291ms" Jan 29 11:07:31.715935 containerd[1439]: time="2025-01-29T11:07:31.715885613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 29 11:07:31.717290 containerd[1439]: time="2025-01-29T11:07:31.717259708Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:07:31.717921 containerd[1439]: time="2025-01-29T11:07:31.717808171Z" level=info msg="CreateContainer within sandbox 
\"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:07:31.733526 containerd[1439]: time="2025-01-29T11:07:31.733476865Z" level=info msg="CreateContainer within sandbox \"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b5da29489b8ae8802f5ac1d238007e5227fe08ffd8120f6a022d53b9b49a5a6e\"" Jan 29 11:07:31.734014 containerd[1439]: time="2025-01-29T11:07:31.733987867Z" level=info msg="StartContainer for \"b5da29489b8ae8802f5ac1d238007e5227fe08ffd8120f6a022d53b9b49a5a6e\"" Jan 29 11:07:31.761275 systemd[1]: Started cri-containerd-b5da29489b8ae8802f5ac1d238007e5227fe08ffd8120f6a022d53b9b49a5a6e.scope - libcontainer container b5da29489b8ae8802f5ac1d238007e5227fe08ffd8120f6a022d53b9b49a5a6e. Jan 29 11:07:31.789038 containerd[1439]: time="2025-01-29T11:07:31.788997171Z" level=info msg="StartContainer for \"b5da29489b8ae8802f5ac1d238007e5227fe08ffd8120f6a022d53b9b49a5a6e\" returns successfully" Jan 29 11:07:32.229261 systemd-networkd[1376]: cali53180a6c2f3: Gained IPv6LL Jan 29 11:07:32.344904 kubelet[1740]: E0129 11:07:32.344857 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:32.613510 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 29 11:07:32.614441 systemd-networkd[1376]: cali40e587188aa: Gained IPv6LL Jan 29 11:07:33.345271 kubelet[1740]: E0129 11:07:33.345186 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:33.642426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount662943726.mount: Deactivated successfully. 
Jan 29 11:07:34.345590 kubelet[1740]: E0129 11:07:34.345545 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:34.363467 containerd[1439]: time="2025-01-29T11:07:34.363116259Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 11:07:34.366667 containerd[1439]: time="2025-01-29T11:07:34.366607808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:34.433939 containerd[1439]: time="2025-01-29T11:07:34.429978097Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:34.456193 containerd[1439]: time="2025-01-29T11:07:34.455979612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:34.457412 containerd[1439]: time="2025-01-29T11:07:34.457114672Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.739815822s" Jan 29 11:07:34.457412 containerd[1439]: time="2025-01-29T11:07:34.457147230Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 11:07:34.459083 containerd[1439]: time="2025-01-29T11:07:34.458879322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:07:34.460253 containerd[1439]: 
time="2025-01-29T11:07:34.460101550Z" level=info msg="CreateContainer within sandbox \"f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:07:34.475133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317952449.mount: Deactivated successfully. Jan 29 11:07:34.478396 containerd[1439]: time="2025-01-29T11:07:34.478289076Z" level=info msg="CreateContainer within sandbox \"f5d6fbeb121a3eb5d5867505299931ed381403dd5a2be04621209f153a54cf12\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a9eea75864df9db6c465390f0b19b78d2cac70d0bbdd76148bc1e3214a0d8195\"" Jan 29 11:07:34.479062 containerd[1439]: time="2025-01-29T11:07:34.478832138Z" level=info msg="StartContainer for \"a9eea75864df9db6c465390f0b19b78d2cac70d0bbdd76148bc1e3214a0d8195\"" Jan 29 11:07:34.566298 systemd[1]: Started cri-containerd-a9eea75864df9db6c465390f0b19b78d2cac70d0bbdd76148bc1e3214a0d8195.scope - libcontainer container a9eea75864df9db6c465390f0b19b78d2cac70d0bbdd76148bc1e3214a0d8195. 
Jan 29 11:07:34.598065 containerd[1439]: time="2025-01-29T11:07:34.597911650Z" level=info msg="StartContainer for \"a9eea75864df9db6c465390f0b19b78d2cac70d0bbdd76148bc1e3214a0d8195\" returns successfully" Jan 29 11:07:35.346021 kubelet[1740]: E0129 11:07:35.345966 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:35.430207 containerd[1439]: time="2025-01-29T11:07:35.430160376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:35.431377 containerd[1439]: time="2025-01-29T11:07:35.431340753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 29 11:07:35.438241 containerd[1439]: time="2025-01-29T11:07:35.438195008Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:35.440544 containerd[1439]: time="2025-01-29T11:07:35.440511774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:35.441495 containerd[1439]: time="2025-01-29T11:07:35.441463507Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 982.555222ms" Jan 29 11:07:35.441917 containerd[1439]: time="2025-01-29T11:07:35.441894427Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 29 11:07:35.443893 containerd[1439]: time="2025-01-29T11:07:35.443860497Z" level=info msg="CreateContainer within sandbox \"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:07:35.460046 containerd[1439]: time="2025-01-29T11:07:35.460001314Z" level=info msg="CreateContainer within sandbox \"a8364604e5d92313c28047467b31b6724df0bbedf8d57fc1883ca562e28c233a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"709dc217974e03f7b127892df32d8b2a5e9a58452b4a003c60a7060d820cdcfe\"" Jan 29 11:07:35.460757 containerd[1439]: time="2025-01-29T11:07:35.460501471Z" level=info msg="StartContainer for \"709dc217974e03f7b127892df32d8b2a5e9a58452b4a003c60a7060d820cdcfe\"" Jan 29 11:07:35.501374 systemd[1]: Started cri-containerd-709dc217974e03f7b127892df32d8b2a5e9a58452b4a003c60a7060d820cdcfe.scope - libcontainer container 709dc217974e03f7b127892df32d8b2a5e9a58452b4a003c60a7060d820cdcfe. 
Jan 29 11:07:35.546112 containerd[1439]: time="2025-01-29T11:07:35.538778287Z" level=info msg="StartContainer for \"709dc217974e03f7b127892df32d8b2a5e9a58452b4a003c60a7060d820cdcfe\" returns successfully" Jan 29 11:07:35.562526 kubelet[1740]: I0129 11:07:35.562453 1740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2p5nf" podStartSLOduration=1.968193297 podStartE2EDuration="5.562438279s" podCreationTimestamp="2025-01-29 11:07:30 +0000 UTC" firstStartedPulling="2025-01-29 11:07:30.863809042 +0000 UTC m=+13.426925646" lastFinishedPulling="2025-01-29 11:07:34.458054024 +0000 UTC m=+17.021170628" observedRunningTime="2025-01-29 11:07:35.562231568 +0000 UTC m=+18.125348212" watchObservedRunningTime="2025-01-29 11:07:35.562438279 +0000 UTC m=+18.125554883" Jan 29 11:07:36.346708 kubelet[1740]: E0129 11:07:36.346661 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:36.485083 kubelet[1740]: I0129 11:07:36.485020 1740 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:07:36.485083 kubelet[1740]: I0129 11:07:36.485079 1740 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:07:36.961895 kubelet[1740]: I0129 11:07:36.961821 1740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hw698" podStartSLOduration=14.283258749 podStartE2EDuration="18.961799968s" podCreationTimestamp="2025-01-29 11:07:18 +0000 UTC" firstStartedPulling="2025-01-29 11:07:30.764066308 +0000 UTC m=+13.327182912" lastFinishedPulling="2025-01-29 11:07:35.442607527 +0000 UTC m=+18.005724131" observedRunningTime="2025-01-29 11:07:36.568250048 +0000 UTC m=+19.131366612" 
watchObservedRunningTime="2025-01-29 11:07:36.961799968 +0000 UTC m=+19.524916572" Jan 29 11:07:36.968733 systemd[1]: Created slice kubepods-besteffort-pod6c8b13d2_76f4_4c2e_9d3b_cb6c349d91c3.slice - libcontainer container kubepods-besteffort-pod6c8b13d2_76f4_4c2e_9d3b_cb6c349d91c3.slice. Jan 29 11:07:37.065906 kubelet[1740]: I0129 11:07:37.065801 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3-data\") pod \"nfs-server-provisioner-0\" (UID: \"6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3\") " pod="default/nfs-server-provisioner-0" Jan 29 11:07:37.065906 kubelet[1740]: I0129 11:07:37.065865 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghtlb\" (UniqueName: \"kubernetes.io/projected/6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3-kube-api-access-ghtlb\") pod \"nfs-server-provisioner-0\" (UID: \"6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3\") " pod="default/nfs-server-provisioner-0" Jan 29 11:07:37.272150 containerd[1439]: time="2025-01-29T11:07:37.271792890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3,Namespace:default,Attempt:0,}" Jan 29 11:07:37.347378 kubelet[1740]: E0129 11:07:37.347323 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:37.508624 systemd-networkd[1376]: cali60e51b789ff: Link UP Jan 29 11:07:37.508910 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.343 [INFO][3004] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3 1115 0 2025-01-29 11:07:36 +0000 UTC 
map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.94 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.343 [INFO][3004] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.372 [INFO][3017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" HandleID="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.385 [INFO][3017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" 
HandleID="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000287690), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:07:37.372663726 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.386 [INFO][3017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.386 [INFO][3017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.386 [INFO][3017] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.388 [INFO][3017] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.483 [INFO][3017] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.488 [INFO][3017] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.490 [INFO][3017] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.492 [INFO][3017] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.493 [INFO][3017] 
ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.494 [INFO][3017] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.498 [INFO][3017] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.504 [INFO][3017] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.3/26] block=192.168.24.0/26 handle="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.504 [INFO][3017] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.3/26] handle="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" host="10.0.0.94" Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.504 [INFO][3017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:07:37.520937 containerd[1439]: 2025-01-29 11:07:37.504 [INFO][3017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.3/26] IPv6=[] ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" HandleID="k8s-pod-network.a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Workload="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.521486 containerd[1439]: 2025-01-29 11:07:37.506 [INFO][3004] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:37.521486 containerd[1439]: 2025-01-29 11:07:37.507 [INFO][3004] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.3/32] ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.521486 containerd[1439]: 2025-01-29 11:07:37.507 [INFO][3004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.521486 containerd[1439]: 2025-01-29 11:07:37.508 [INFO][3004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.521638 containerd[1439]: 2025-01-29 11:07:37.509 [INFO][3004] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.24.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0a:9d:f1:c5:58:1f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:37.521638 containerd[1439]: 2025-01-29 11:07:37.519 [INFO][3004] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.94-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:07:37.540912 containerd[1439]: time="2025-01-29T11:07:37.540523581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:07:37.541252 containerd[1439]: time="2025-01-29T11:07:37.541057815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:07:37.541362 containerd[1439]: time="2025-01-29T11:07:37.541327089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:37.541543 containerd[1439]: time="2025-01-29T11:07:37.541507978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:37.561262 systemd[1]: Started cri-containerd-a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd.scope - libcontainer container a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd. Jan 29 11:07:37.571926 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:07:37.630945 containerd[1439]: time="2025-01-29T11:07:37.630893584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c8b13d2-76f4-4c2e-9d3b-cb6c349d91c3,Namespace:default,Attempt:0,} returns sandbox id \"a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd\"" Jan 29 11:07:37.632593 containerd[1439]: time="2025-01-29T11:07:37.632566291Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:07:38.336378 kubelet[1740]: E0129 11:07:38.336326 1740 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:38.347883 kubelet[1740]: E0129 11:07:38.347830 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:39.142237 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL Jan 29 11:07:39.348956 kubelet[1740]: E0129 11:07:39.348917 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:39.703231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2703039623.mount: Deactivated successfully. 
Jan 29 11:07:40.349206 kubelet[1740]: E0129 11:07:40.349149 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:41.086107 containerd[1439]: time="2025-01-29T11:07:41.085660947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:41.086966 containerd[1439]: time="2025-01-29T11:07:41.086610090Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 29 11:07:41.087779 containerd[1439]: time="2025-01-29T11:07:41.087736288Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:41.090608 containerd[1439]: time="2025-01-29T11:07:41.090571408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:41.091766 containerd[1439]: time="2025-01-29T11:07:41.091729820Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.459129762s" Jan 29 11:07:41.091962 containerd[1439]: time="2025-01-29T11:07:41.091854318Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 11:07:41.094454 containerd[1439]: time="2025-01-29T11:07:41.094326655Z" 
level=info msg="CreateContainer within sandbox \"a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:07:41.105087 containerd[1439]: time="2025-01-29T11:07:41.105027898Z" level=info msg="CreateContainer within sandbox \"a1ee046baa2c681dd724197fcccb52e7d5a13b4fdf1de02acf763d95b3a620fd\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7358f77607c99dfb2f3398bc2cc75fe5f166315b8f1ff7dc393e25e1ad50fee2\"" Jan 29 11:07:41.105844 containerd[1439]: time="2025-01-29T11:07:41.105797109Z" level=info msg="StartContainer for \"7358f77607c99dfb2f3398bc2cc75fe5f166315b8f1ff7dc393e25e1ad50fee2\"" Jan 29 11:07:41.147274 systemd[1]: Started cri-containerd-7358f77607c99dfb2f3398bc2cc75fe5f166315b8f1ff7dc393e25e1ad50fee2.scope - libcontainer container 7358f77607c99dfb2f3398bc2cc75fe5f166315b8f1ff7dc393e25e1ad50fee2. Jan 29 11:07:41.171182 containerd[1439]: time="2025-01-29T11:07:41.171125329Z" level=info msg="StartContainer for \"7358f77607c99dfb2f3398bc2cc75fe5f166315b8f1ff7dc393e25e1ad50fee2\" returns successfully" Jan 29 11:07:41.352268 kubelet[1740]: E0129 11:07:41.352147 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:42.354449 kubelet[1740]: E0129 11:07:42.352678 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:43.353416 kubelet[1740]: E0129 11:07:43.353367 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:44.354212 kubelet[1740]: E0129 11:07:44.354171 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:45.356638 kubelet[1740]: E0129 11:07:45.356589 1740 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:46.356941 kubelet[1740]: E0129 11:07:46.356884 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:47.357866 kubelet[1740]: E0129 11:07:47.357810 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:48.358115 kubelet[1740]: E0129 11:07:48.358061 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:49.358457 kubelet[1740]: E0129 11:07:49.358408 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:50.359118 kubelet[1740]: E0129 11:07:50.359057 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:51.359973 kubelet[1740]: E0129 11:07:51.359923 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:51.442533 kubelet[1740]: I0129 11:07:51.442447 1740 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.981810969 podStartE2EDuration="15.442430658s" podCreationTimestamp="2025-01-29 11:07:36 +0000 UTC" firstStartedPulling="2025-01-29 11:07:37.63219972 +0000 UTC m=+20.195316324" lastFinishedPulling="2025-01-29 11:07:41.092819409 +0000 UTC m=+23.655936013" observedRunningTime="2025-01-29 11:07:41.581298914 +0000 UTC m=+24.144415478" watchObservedRunningTime="2025-01-29 11:07:51.442430658 +0000 UTC m=+34.005547262" Jan 29 11:07:51.447595 systemd[1]: Created slice kubepods-besteffort-pod56982253_521f_4455_ade9_912e54c6eeea.slice - libcontainer container kubepods-besteffort-pod56982253_521f_4455_ade9_912e54c6eeea.slice. 
Jan 29 11:07:51.540503 kubelet[1740]: I0129 11:07:51.540458 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7f36b874-84df-44da-842e-acdb2ec20a6e\" (UniqueName: \"kubernetes.io/nfs/56982253-521f-4455-ade9-912e54c6eeea-pvc-7f36b874-84df-44da-842e-acdb2ec20a6e\") pod \"test-pod-1\" (UID: \"56982253-521f-4455-ade9-912e54c6eeea\") " pod="default/test-pod-1" Jan 29 11:07:51.540503 kubelet[1740]: I0129 11:07:51.540502 1740 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr7pw\" (UniqueName: \"kubernetes.io/projected/56982253-521f-4455-ade9-912e54c6eeea-kube-api-access-zr7pw\") pod \"test-pod-1\" (UID: \"56982253-521f-4455-ade9-912e54c6eeea\") " pod="default/test-pod-1" Jan 29 11:07:51.668107 kernel: FS-Cache: Loaded Jan 29 11:07:51.694448 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:07:51.694548 kernel: RPC: Registered udp transport module. Jan 29 11:07:51.694568 kernel: RPC: Registered tcp transport module. Jan 29 11:07:51.694583 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:07:51.694596 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 29 11:07:51.874262 kernel: NFS: Registering the id_resolver key type Jan 29 11:07:51.874363 kernel: Key type id_resolver registered Jan 29 11:07:51.874380 kernel: Key type id_legacy registered Jan 29 11:07:51.908365 nfsidmap[3222]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:07:51.915708 nfsidmap[3225]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:07:52.050105 containerd[1439]: time="2025-01-29T11:07:52.050013807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:56982253-521f-4455-ade9-912e54c6eeea,Namespace:default,Attempt:0,}" Jan 29 11:07:52.160546 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Jan 29 11:07:52.160726 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.096 [INFO][3229] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.94-k8s-test--pod--1-eth0 default 56982253-521f-4455-ade9-912e54c6eeea 1193 0 2025-01-29 11:07:37 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.94 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.097 [INFO][3229] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.121 [INFO][3241] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" HandleID="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Workload="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.133 [INFO][3241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" HandleID="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Workload="10.0.0.94-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000306400), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.94", "pod":"test-pod-1", "timestamp":"2025-01-29 11:07:52.121127966 +0000 UTC"}, Hostname:"10.0.0.94", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.134 [INFO][3241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.134 [INFO][3241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.134 [INFO][3241] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.94' Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.136 [INFO][3241] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.139 [INFO][3241] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.143 [INFO][3241] ipam/ipam.go 489: Trying affinity for 192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.144 [INFO][3241] ipam/ipam.go 155: Attempting to load block cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.147 [INFO][3241] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.24.0/26 host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.147 [INFO][3241] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.24.0/26 handle="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.148 [INFO][3241] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73 Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.152 [INFO][3241] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.24.0/26 handle="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.157 [INFO][3241] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.24.4/26] block=192.168.24.0/26 
handle="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.157 [INFO][3241] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.24.4/26] handle="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" host="10.0.0.94" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.157 [INFO][3241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.157 [INFO][3241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.24.4/26] IPv6=[] ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" HandleID="k8s-pod-network.dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Workload="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.169679 containerd[1439]: 2025-01-29 11:07:52.159 [INFO][3229] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"56982253-521f-4455-ade9-912e54c6eeea", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:52.170266 containerd[1439]: 2025-01-29 11:07:52.159 [INFO][3229] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.24.4/32] ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.170266 containerd[1439]: 2025-01-29 11:07:52.159 [INFO][3229] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.170266 containerd[1439]: 2025-01-29 11:07:52.160 [INFO][3229] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.170266 containerd[1439]: 2025-01-29 11:07:52.161 [INFO][3229] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.94-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"56982253-521f-4455-ade9-912e54c6eeea", ResourceVersion:"1193", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 7, 
37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.94", ContainerID:"dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.24.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6a:19:9e:b4:89:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:07:52.170266 containerd[1439]: 2025-01-29 11:07:52.167 [INFO][3229] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.94-k8s-test--pod--1-eth0" Jan 29 11:07:52.186134 containerd[1439]: time="2025-01-29T11:07:52.185594958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:07:52.186134 containerd[1439]: time="2025-01-29T11:07:52.186045657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:07:52.186134 containerd[1439]: time="2025-01-29T11:07:52.186061130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:52.186313 containerd[1439]: time="2025-01-29T11:07:52.186160051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:07:52.203248 systemd[1]: Started cri-containerd-dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73.scope - libcontainer container dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73. Jan 29 11:07:52.212325 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:07:52.228171 containerd[1439]: time="2025-01-29T11:07:52.228087628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:56982253-521f-4455-ade9-912e54c6eeea,Namespace:default,Attempt:0,} returns sandbox id \"dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73\"" Jan 29 11:07:52.229202 containerd[1439]: time="2025-01-29T11:07:52.229132168Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:07:52.360227 kubelet[1740]: E0129 11:07:52.360109 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:52.497780 containerd[1439]: time="2025-01-29T11:07:52.497330982Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:07:52.498065 containerd[1439]: time="2025-01-29T11:07:52.498008429Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:07:52.501117 containerd[1439]: time="2025-01-29T11:07:52.501068318Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 271.859182ms" Jan 29 11:07:52.501117 containerd[1439]: time="2025-01-29T11:07:52.501113860Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" 
returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 11:07:52.506458 containerd[1439]: time="2025-01-29T11:07:52.506418207Z" level=info msg="CreateContainer within sandbox \"dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:07:52.516820 containerd[1439]: time="2025-01-29T11:07:52.516764086Z" level=info msg="CreateContainer within sandbox \"dc6eed9c11588545e5ff4bb43db4b151daa640ff33e587509a3fad49f52fea73\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"7c78689d8b822d0cfe92d702def10af4abf35f67507504b9945bff83d372cbfe\"" Jan 29 11:07:52.517341 containerd[1439]: time="2025-01-29T11:07:52.517311666Z" level=info msg="StartContainer for \"7c78689d8b822d0cfe92d702def10af4abf35f67507504b9945bff83d372cbfe\"" Jan 29 11:07:52.549237 systemd[1]: Started cri-containerd-7c78689d8b822d0cfe92d702def10af4abf35f67507504b9945bff83d372cbfe.scope - libcontainer container 7c78689d8b822d0cfe92d702def10af4abf35f67507504b9945bff83d372cbfe. Jan 29 11:07:52.579628 containerd[1439]: time="2025-01-29T11:07:52.579445236Z" level=info msg="StartContainer for \"7c78689d8b822d0cfe92d702def10af4abf35f67507504b9945bff83d372cbfe\" returns successfully" Jan 29 11:07:53.360849 kubelet[1740]: E0129 11:07:53.360789 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:54.117585 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 11:07:54.119160 update_engine[1423]: I20250129 11:07:54.119105 1423 update_attempter.cc:509] Updating boot flags... 
Jan 29 11:07:54.146328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3214) Jan 29 11:07:54.181110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3370) Jan 29 11:07:54.361064 kubelet[1740]: E0129 11:07:54.360918 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:07:55.361822 kubelet[1740]: E0129 11:07:55.361779 1740 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"