Dec 13 13:27:09.910316 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 13:27:09.910336 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Fri Dec 13 11:56:07 -00 2024 Dec 13 13:27:09.910346 kernel: KASLR enabled Dec 13 13:27:09.910352 kernel: efi: EFI v2.7 by EDK II Dec 13 13:27:09.910365 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Dec 13 13:27:09.910371 kernel: random: crng init done Dec 13 13:27:09.910378 kernel: secureboot: Secure boot disabled Dec 13 13:27:09.910384 kernel: ACPI: Early table checksum verification disabled Dec 13 13:27:09.910389 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Dec 13 13:27:09.910397 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 13:27:09.910403 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910408 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910414 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910420 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910427 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910435 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910441 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910447 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910453 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 13:27:09.910459 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 13:27:09.910465 kernel: NUMA: Failed to initialise from firmware Dec 13 13:27:09.910471 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 13:27:09.910477 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Dec 13 13:27:09.910483 kernel: Zone ranges: Dec 13 13:27:09.910489 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 13:27:09.910496 kernel: DMA32 empty Dec 13 13:27:09.910501 kernel: Normal empty Dec 13 13:27:09.910507 kernel: Movable zone start for each node Dec 13 13:27:09.910513 kernel: Early memory node ranges Dec 13 13:27:09.910519 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Dec 13 13:27:09.910525 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Dec 13 13:27:09.910531 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Dec 13 13:27:09.910537 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Dec 13 13:27:09.910543 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Dec 13 13:27:09.910548 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 13 13:27:09.910554 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 13 13:27:09.910560 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 13 13:27:09.910567 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 13 13:27:09.910573 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 13:27:09.910579 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 13:27:09.910587 kernel: psci: 
probing for conduit method from ACPI. Dec 13 13:27:09.910594 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 13:27:09.910600 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 13:27:09.910608 kernel: psci: Trusted OS migration not required Dec 13 13:27:09.910614 kernel: psci: SMC Calling Convention v1.1 Dec 13 13:27:09.910620 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 13:27:09.910627 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 13:27:09.910633 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 13:27:09.910640 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 13:27:09.910646 kernel: Detected PIPT I-cache on CPU0 Dec 13 13:27:09.910653 kernel: CPU features: detected: GIC system register CPU interface Dec 13 13:27:09.910659 kernel: CPU features: detected: Hardware dirty bit management Dec 13 13:27:09.910666 kernel: CPU features: detected: Spectre-v4 Dec 13 13:27:09.910673 kernel: CPU features: detected: Spectre-BHB Dec 13 13:27:09.910688 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 13:27:09.910701 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 13:27:09.910707 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 13:27:09.910715 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 13:27:09.910721 kernel: alternatives: applying boot alternatives Dec 13 13:27:09.910728 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:27:09.910735 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 13:27:09.910742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 13:27:09.910748 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 13:27:09.910755 kernel: Fallback order for Node 0: 0 Dec 13 13:27:09.910762 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 13:27:09.910769 kernel: Policy zone: DMA Dec 13 13:27:09.910775 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 13:27:09.910781 kernel: software IO TLB: area num 4. Dec 13 13:27:09.910788 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Dec 13 13:27:09.910795 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8088K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved) Dec 13 13:27:09.910802 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 13:27:09.910808 kernel: trace event string verifier disabled Dec 13 13:27:09.910815 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 13:27:09.910822 kernel: rcu: RCU event tracing is enabled. Dec 13 13:27:09.910828 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 13:27:09.910835 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 13:27:09.910842 kernel: Tracing variant of Tasks RCU enabled. Dec 13 13:27:09.910849 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 13:27:09.910856 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 13:27:09.910862 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 13:27:09.910869 kernel: GICv3: 256 SPIs implemented Dec 13 13:27:09.910885 kernel: GICv3: 0 Extended SPIs implemented Dec 13 13:27:09.910893 kernel: Root IRQ handler: gic_handle_irq Dec 13 13:27:09.910899 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 13:27:09.910905 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 13:27:09.910912 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 13:27:09.910921 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 13:27:09.910931 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 13:27:09.910937 kernel: GICv3: using LPI property table @0x00000000400f0000 Dec 13 13:27:09.910944 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Dec 13 13:27:09.910950 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 13:27:09.910957 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:27:09.910963 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 13:27:09.910970 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 13:27:09.910976 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 13:27:09.910982 kernel: arm-pv: using stolen time PV Dec 13 13:27:09.910989 kernel: Console: colour dummy device 80x25 Dec 13 13:27:09.910996 kernel: ACPI: Core revision 20230628 Dec 13 13:27:09.911004 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 13:27:09.911010 kernel: pid_max: default: 32768 minimum: 301 Dec 13 13:27:09.911017 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 13:27:09.911023 kernel: landlock: Up and running. Dec 13 13:27:09.911030 kernel: SELinux: Initializing. Dec 13 13:27:09.911036 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:27:09.911043 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 13:27:09.911049 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 13:27:09.911056 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 13:27:09.911064 kernel: rcu: Hierarchical SRCU implementation. Dec 13 13:27:09.911070 kernel: rcu: Max phase no-delay instances is 400. Dec 13 13:27:09.911077 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 13:27:09.911083 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 13:27:09.911090 kernel: Remapping and enabling EFI services. Dec 13 13:27:09.911096 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 13:27:09.911103 kernel: Detected PIPT I-cache on CPU1 Dec 13 13:27:09.911109 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 13:27:09.911116 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Dec 13 13:27:09.911124 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:27:09.911130 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 13:27:09.911142 kernel: Detected PIPT I-cache on CPU2 Dec 13 13:27:09.911150 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 13:27:09.911157 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Dec 13 13:27:09.911164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:27:09.911171 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 13:27:09.911177 kernel: Detected PIPT I-cache on CPU3 Dec 13 13:27:09.911184 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 13:27:09.911193 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Dec 13 13:27:09.911200 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 13:27:09.911206 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 13:27:09.911213 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 13:27:09.911220 kernel: SMP: Total of 4 processors activated. Dec 13 13:27:09.911227 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 13:27:09.911234 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 13:27:09.911241 kernel: CPU features: detected: Common not Private translations Dec 13 13:27:09.911247 kernel: CPU features: detected: CRC32 instructions Dec 13 13:27:09.911256 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 13:27:09.911262 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 13:27:09.911269 kernel: CPU features: detected: LSE atomic instructions Dec 13 13:27:09.911276 kernel: CPU features: detected: Privileged Access Never Dec 13 13:27:09.911283 kernel: CPU features: detected: RAS Extension Support Dec 13 13:27:09.911290 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 13:27:09.911297 kernel: CPU: All CPU(s) started at EL1 Dec 13 13:27:09.911304 kernel: alternatives: applying system-wide alternatives Dec 13 13:27:09.911310 kernel: devtmpfs: initialized Dec 13 13:27:09.911319 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 13:27:09.911326 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 13:27:09.911332 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 13:27:09.911339 kernel: SMBIOS 3.0.0 present. 
Dec 13 13:27:09.911346 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Dec 13 13:27:09.911353 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 13:27:09.911365 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 13:27:09.911372 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 13:27:09.911379 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 13:27:09.911388 kernel: audit: initializing netlink subsys (disabled) Dec 13 13:27:09.911395 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Dec 13 13:27:09.911402 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 13:27:09.911409 kernel: cpuidle: using governor menu Dec 13 13:27:09.911416 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 13:27:09.911423 kernel: ASID allocator initialised with 32768 entries Dec 13 13:27:09.911430 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 13:27:09.911436 kernel: Serial: AMBA PL011 UART driver Dec 13 13:27:09.911443 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 13:27:09.911451 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 13:27:09.911458 kernel: Modules: 508880 pages in range for PLT usage Dec 13 13:27:09.911465 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 13:27:09.911472 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 13:27:09.911484 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 13:27:09.911490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 13:27:09.911497 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 13:27:09.911504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 13:27:09.911511 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 13:27:09.911519 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 13:27:09.911526 kernel: ACPI: Added _OSI(Module Device) Dec 13 13:27:09.911533 kernel: ACPI: Added _OSI(Processor Device) Dec 13 13:27:09.911540 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 13:27:09.911547 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 13:27:09.911554 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 13:27:09.911560 kernel: ACPI: Interpreter enabled Dec 13 13:27:09.911567 kernel: ACPI: Using GIC for interrupt routing Dec 13 13:27:09.911574 kernel: ACPI: MCFG table detected, 1 entries Dec 13 13:27:09.911582 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 13 13:27:09.911589 kernel: printk: console [ttyAMA0] enabled Dec 13 13:27:09.911596 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 13:27:09.911727 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 13:27:09.911798 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 13:27:09.911864 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 13:27:09.911945 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 13 13:27:09.912011 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 13 13:27:09.912021 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 13 13:27:09.912028 
kernel: PCI host bridge to bus 0000:00 Dec 13 13:27:09.912098 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 13 13:27:09.912157 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 13:27:09.912215 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 13 13:27:09.912272 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 13:27:09.912353 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Dec 13 13:27:09.912443 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 13:27:09.912510 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Dec 13 13:27:09.912573 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Dec 13 13:27:09.912636 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 13:27:09.912711 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Dec 13 13:27:09.912776 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Dec 13 13:27:09.912843 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Dec 13 13:27:09.912923 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 13 13:27:09.912981 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 13:27:09.913038 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 13 13:27:09.913047 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 13:27:09.913054 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 13:27:09.913061 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 13:27:09.913071 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 13:27:09.913078 kernel: iommu: Default domain type: Translated Dec 13 13:27:09.913085 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 13:27:09.913091 kernel: efivars: Registered efivars operations Dec 13 13:27:09.913098 kernel: vgaarb: loaded Dec 13 13:27:09.913105 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 13:27:09.913112 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 13:27:09.913119 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 13:27:09.913126 kernel: pnp: PnP ACPI init Dec 13 13:27:09.913201 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 13 13:27:09.913213 kernel: pnp: PnP ACPI: found 1 devices Dec 13 13:27:09.913220 kernel: NET: Registered PF_INET protocol family Dec 13 13:27:09.913227 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 13:27:09.913234 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 13:27:09.913241 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 13:27:09.913248 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 13:27:09.913255 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 13:27:09.913262 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 13:27:09.913271 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:27:09.913278 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 13:27:09.913285 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 13:27:09.913292 kernel: PCI: CLS 0 bytes, default 64 Dec 13 13:27:09.913299 kernel: kvm [1]: HYP mode not available 
Dec 13 13:27:09.913306 kernel: Initialise system trusted keyrings Dec 13 13:27:09.913313 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 13:27:09.913320 kernel: Key type asymmetric registered Dec 13 13:27:09.913327 kernel: Asymmetric key parser 'x509' registered Dec 13 13:27:09.913335 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 13:27:09.913342 kernel: io scheduler mq-deadline registered Dec 13 13:27:09.913349 kernel: io scheduler kyber registered Dec 13 13:27:09.913362 kernel: io scheduler bfq registered Dec 13 13:27:09.913371 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 13:27:09.913377 kernel: ACPI: button: Power Button [PWRB] Dec 13 13:27:09.913385 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 13:27:09.913454 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Dec 13 13:27:09.913464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:27:09.913473 kernel: thunder_xcv, ver 1.0 Dec 13 13:27:09.913480 kernel: thunder_bgx, ver 1.0 Dec 13 13:27:09.913487 kernel: nicpf, ver 1.0 Dec 13 13:27:09.913494 kernel: nicvf, ver 1.0 Dec 13 13:27:09.913566 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 13:27:09.913629 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T13:27:09 UTC (1734096429) Dec 13 13:27:09.913638 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 13:27:09.913645 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Dec 13 13:27:09.913654 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 13:27:09.913661 kernel: watchdog: Hard watchdog permanently disabled Dec 13 13:27:09.913668 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:27:09.913675 kernel: Segment Routing with IPv6 Dec 13 13:27:09.913682 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:27:09.913689 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:27:09.913695 kernel: Key type dns_resolver registered Dec 13 13:27:09.913702 kernel: registered taskstats version 1 Dec 13 13:27:09.913709 kernel: Loading compiled-in X.509 certificates Dec 13 13:27:09.913717 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 752b3e36c6039904ea643ccad2b3f5f3cb4ebf78' Dec 13 13:27:09.913724 kernel: Key type .fscrypt registered Dec 13 13:27:09.913731 kernel: Key type fscrypt-provisioning registered Dec 13 13:27:09.913738 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:27:09.913745 kernel: ima: Allocated hash algorithm: sha1 Dec 13 13:27:09.913752 kernel: ima: No architecture policies found Dec 13 13:27:09.913759 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 13:27:09.913766 kernel: clk: Disabling unused clocks Dec 13 13:27:09.913773 kernel: Freeing unused kernel memory: 39936K Dec 13 13:27:09.913781 kernel: Run /init as init process Dec 13 13:27:09.913787 kernel: with arguments: Dec 13 13:27:09.913794 kernel: /init Dec 13 13:27:09.913801 kernel: with environment: Dec 13 13:27:09.913808 kernel: HOME=/ Dec 13 13:27:09.913814 kernel: TERM=linux Dec 13 13:27:09.913821 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 13:27:09.913830 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:09.913840 systemd[1]: Detected virtualization kvm. Dec 13 13:27:09.913853 systemd[1]: Detected architecture arm64. Dec 13 13:27:09.913860 systemd[1]: Running in initrd. Dec 13 13:27:09.913867 systemd[1]: No hostname configured, using default hostname. Dec 13 13:27:09.913890 systemd[1]: Hostname set to . Dec 13 13:27:09.913899 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:09.913906 systemd[1]: Queued start job for default target initrd.target. Dec 13 13:27:09.913914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:09.913924 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:09.913931 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 13:27:09.913939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:09.913947 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 13:27:09.913954 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 13:27:09.913963 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 13:27:09.913972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 13:27:09.913980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:09.913987 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:09.913994 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:09.914002 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:09.914009 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:09.914017 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:09.914024 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:09.914031 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:09.914040 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 13:27:09.914047 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 13:27:09.914055 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 13:27:09.914062 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:09.914070 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:09.914077 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:09.914084 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 13:27:09.914092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:09.914100 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 13:27:09.914108 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 13:27:09.914115 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:09.914122 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:09.914130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:09.914137 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:09.914144 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:09.914152 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 13:27:09.914179 systemd-journald[240]: Collecting audit messages is disabled. Dec 13 13:27:09.914199 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 13:27:09.914207 kernel: Bridge firewalling registered Dec 13 13:27:09.914214 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 13:27:09.914222 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:09.914229 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:09.914237 systemd-journald[240]: Journal started Dec 13 13:27:09.914261 systemd-journald[240]: Runtime Journal (/run/log/journal/41d8945f79bf4c5588bd88c05c68f2e7) is 5.9M, max 47.3M, 41.4M free. Dec 13 13:27:09.891995 systemd-modules-load[241]: Inserted module 'overlay' Dec 13 13:27:09.906999 systemd-modules-load[241]: Inserted module 'br_netfilter' Dec 13 13:27:09.918739 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:09.919144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 13:27:09.928984 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:09.930534 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:09.933022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:09.935728 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:09.941926 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:09.944596 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:09.946917 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:09.956021 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 13:27:09.957071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:09.960415 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 13:27:09.966909 dracut-cmdline[279]: dracut-dracut-053 Dec 13 13:27:09.969331 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c48af8adabdaf1d8e07ceb011d2665929c607ddf2c4d40203b31334d745cc472 Dec 13 13:27:09.989197 systemd-resolved[285]: Positive Trust Anchors: Dec 13 13:27:09.989213 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:09.989244 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:09.993730 systemd-resolved[285]: Defaulting to hostname 'linux'. Dec 13 13:27:09.996447 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:09.998438 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:10.037890 kernel: SCSI subsystem initialized Dec 13 13:27:10.040900 kernel: Loading iSCSI transport class v2.0-870. Dec 13 13:27:10.047902 kernel: iscsi: registered transport (tcp) Dec 13 13:27:10.060895 kernel: iscsi: registered transport (qla4xxx) Dec 13 13:27:10.060912 kernel: QLogic iSCSI HBA Driver Dec 13 13:27:10.103322 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:10.114038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 13:27:10.129047 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 13:27:10.129101 kernel: device-mapper: uevent: version 1.0.3 Dec 13 13:27:10.130038 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 13:27:10.178916 kernel: raid6: neonx8 gen() 15719 MB/s Dec 13 13:27:10.195893 kernel: raid6: neonx4 gen() 15691 MB/s Dec 13 13:27:10.212898 kernel: raid6: neonx2 gen() 13180 MB/s Dec 13 13:27:10.229892 kernel: raid6: neonx1 gen() 10486 MB/s Dec 13 13:27:10.246902 kernel: raid6: int64x8 gen() 6780 MB/s Dec 13 13:27:10.263891 kernel: raid6: int64x4 gen() 7341 MB/s Dec 13 13:27:10.280895 kernel: raid6: int64x2 gen() 6076 MB/s Dec 13 13:27:10.297993 kernel: raid6: int64x1 gen() 5047 MB/s Dec 13 13:27:10.298009 kernel: raid6: using algorithm neonx8 gen() 15719 MB/s Dec 13 13:27:10.315973 kernel: raid6: .... xor() 11980 MB/s, rmw enabled Dec 13 13:27:10.315987 kernel: raid6: using neon recovery algorithm Dec 13 13:27:10.321390 kernel: xor: measuring software checksum speed Dec 13 13:27:10.321404 kernel: 8regs : 21641 MB/sec Dec 13 13:27:10.322053 kernel: 32regs : 21676 MB/sec Dec 13 13:27:10.325047 kernel: arm64_neon : 1698 MB/sec Dec 13 13:27:10.325064 kernel: xor: using function: 32regs (21676 MB/sec) Dec 13 13:27:10.374908 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 13:27:10.386224 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 13:27:10.397043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:10.408192 systemd-udevd[463]: Using default interface naming scheme 'v255'. Dec 13 13:27:10.411451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:10.414804 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 13:27:10.429625 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation Dec 13 13:27:10.456962 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:10.467033 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:10.511424 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:10.523042 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 13:27:10.536585 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:10.538950 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:10.540868 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:10.543078 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:10.556096 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 13:27:10.563315 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 13:27:10.570181 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 13:27:10.570295 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 13:27:10.570306 kernel: GPT:9289727 != 19775487 Dec 13 13:27:10.570315 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 13:27:10.570329 kernel: GPT:9289727 != 19775487 Dec 13 13:27:10.570338 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 13:27:10.570347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:10.572112 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:10.577110 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:10.577191 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:10.581270 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:10.582611 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:10.582680 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:10.585790 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:10.594904 kernel: BTRFS: device fsid 47b12626-f7d3-4179-9720-ca262eb4c614 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (516) Dec 13 13:27:10.596924 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (526) Dec 13 13:27:10.598118 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:10.607924 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 13:27:10.611912 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:10.619509 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Dec 13 13:27:10.623349 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 13:27:10.624532 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 13:27:10.630047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:27:10.642021 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 13:27:10.643755 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 13:27:10.649283 disk-uuid[554]: Primary Header is updated. Dec 13 13:27:10.649283 disk-uuid[554]: Secondary Entries is updated. Dec 13 13:27:10.649283 disk-uuid[554]: Secondary Header is updated. Dec 13 13:27:10.658899 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:10.659097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:11.664902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 13:27:11.666430 disk-uuid[556]: The operation has completed successfully. Dec 13 13:27:11.690529 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 13:27:11.690642 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 13:27:11.710054 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 13:27:11.712756 sh[575]: Success Dec 13 13:27:11.725318 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 13:27:11.769329 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:11.771149 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:27:11.772075 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:27:11.783143 kernel: BTRFS info (device dm-0): first mount of filesystem 47b12626-f7d3-4179-9720-ca262eb4c614 Dec 13 13:27:11.783188 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:27:11.783198 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:11.784166 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:11.785479 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:11.789185 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:11.790213 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:11.790881 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:11.794011 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:27:11.803804 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:27:11.803842 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:27:11.803858 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:11.806892 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:11.813868 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 13:27:11.815886 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:27:11.821534 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 13:27:11.828032 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:27:11.885862 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:11.896023 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:11.915420 ignition[676]: Ignition 2.20.0 Dec 13 13:27:11.915430 ignition[676]: Stage: fetch-offline Dec 13 13:27:11.916004 systemd-networkd[765]: lo: Link UP Dec 13 13:27:11.915472 ignition[676]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:11.916007 systemd-networkd[765]: lo: Gained carrier Dec 13 13:27:11.915480 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:11.916823 systemd-networkd[765]: Enumeration completed Dec 13 13:27:11.915652 ignition[676]: parsed url from cmdline: "" Dec 13 13:27:11.916966 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:11.915656 ignition[676]: no config URL provided Dec 13 13:27:11.917237 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:11.915660 ignition[676]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 13:27:11.917240 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:11.915668 ignition[676]: no config at "/usr/lib/ignition/user.ign" Dec 13 13:27:11.917999 systemd-networkd[765]: eth0: Link UP Dec 13 13:27:11.915693 ignition[676]: op(1): [started] loading QEMU firmware config module Dec 13 13:27:11.918002 systemd-networkd[765]: eth0: Gained carrier Dec 13 13:27:11.915697 ignition[676]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 13:27:11.918008 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:11.926835 ignition[676]: op(1): [finished] loading QEMU firmware config module Dec 13 13:27:11.918291 systemd[1]: Reached target network.target - Network. Dec 13 13:27:11.933916 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:27:11.940280 ignition[676]: parsing config with SHA512: 899af7a8b4df54d99f0e457cea1105115373164765ee96ddad735ae65629a9ccf77f095f308725ce3a7103c6bfbac4a34edf713d66388d66968bd3583d3ba01c Dec 13 13:27:11.943543 unknown[676]: fetched base config from "system" Dec 13 13:27:11.943553 unknown[676]: fetched user config from "qemu" Dec 13 13:27:11.943871 ignition[676]: fetch-offline: fetch-offline passed Dec 13 13:27:11.945641 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:11.943963 ignition[676]: Ignition finished successfully Dec 13 13:27:11.947088 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 13:27:11.957002 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 13:27:11.967050 ignition[772]: Ignition 2.20.0 Dec 13 13:27:11.967060 ignition[772]: Stage: kargs Dec 13 13:27:11.967200 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:11.967209 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:11.967828 ignition[772]: kargs: kargs passed Dec 13 13:27:11.970622 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 13 13:27:11.967870 ignition[772]: Ignition finished successfully Dec 13 13:27:11.981027 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 13:27:11.990566 ignition[781]: Ignition 2.20.0 Dec 13 13:27:11.990576 ignition[781]: Stage: disks Dec 13 13:27:11.990737 ignition[781]: no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:11.993379 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 13:27:11.990747 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:11.994430 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:11.991420 ignition[781]: disks: disks passed Dec 13 13:27:11.996039 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 13:27:11.991461 ignition[781]: Ignition finished successfully Dec 13 13:27:11.997948 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:11.999671 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:12.001054 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:12.013008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 13:27:12.022286 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 13:27:12.025698 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 13:27:12.027615 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 13:27:12.069885 kernel: EXT4-fs (vda9): mounted filesystem 0aa4851d-a2ba-4d04-90b3-5d00bf608ecc r/w with ordered data mode. Quota mode: none. Dec 13 13:27:12.070216 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 13:27:12.071370 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:12.083956 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:12.085599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 13:27:12.086727 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 13:27:12.086820 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 13:27:12.086846 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:12.095321 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (800) Dec 13 13:27:12.095342 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:27:12.093307 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 13:27:12.099739 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:27:12.099759 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:12.095013 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 13:27:12.101894 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:12.102959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:27:12.142907 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 13:27:12.146087 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Dec 13 13:27:12.149313 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 13:27:12.152206 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 13:27:12.219954 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:12.230956 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 13:27:12.233160 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 13:27:12.237890 kernel: BTRFS info (device vda6): last unmount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:27:12.253268 ignition[913]: INFO : Ignition 2.20.0 Dec 13 13:27:12.253268 ignition[913]: INFO : Stage: mount Dec 13 13:27:12.254732 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:12.254732 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:12.254732 ignition[913]: INFO : mount: mount passed Dec 13 13:27:12.254732 ignition[913]: INFO : Ignition finished successfully Dec 13 13:27:12.254911 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 13:27:12.257217 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 13:27:12.272965 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 13:27:12.782258 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 13:27:12.791118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 13:27:12.797898 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (927) Dec 13 13:27:12.797936 kernel: BTRFS info (device vda6): first mount of filesystem d0a3d620-8ab2-45d8-a26c-bb488ffd59f2 Dec 13 13:27:12.797946 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 13:27:12.799394 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:12.801898 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:12.802531 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 13:27:12.818252 ignition[944]: INFO : Ignition 2.20.0 Dec 13 13:27:12.818252 ignition[944]: INFO : Stage: files Dec 13 13:27:12.819830 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:12.819830 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:12.819830 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Dec 13 13:27:12.823055 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 13:27:12.823055 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 13:27:12.826161 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 13:27:12.827442 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 13:27:12.827442 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 13:27:12.826696 unknown[944]: wrote ssh authorized keys file for user: core Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:27:12.831044 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 13:27:13.094192 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Dec 13 13:27:13.102041 systemd-networkd[765]: eth0: Gained IPv6LL Dec 13 13:27:13.511207 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 13:27:13.511207 ignition[944]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Dec 13 13:27:13.514703 ignition[944]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:27:13.514703 ignition[944]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 13:27:13.514703 ignition[944]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Dec 13 13:27:13.514703 ignition[944]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Dec 13 13:27:13.536725 ignition[944]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:27:13.540541 ignition[944]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 13:27:13.542030 ignition[944]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 13:27:13.542030 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:13.542030 ignition[944]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 13:27:13.542030 ignition[944]: INFO : files: files passed Dec 13 13:27:13.542030 ignition[944]: INFO : Ignition finished successfully Dec 13 13:27:13.545353 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 13:27:13.560025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 13:27:13.561730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 13:27:13.564286 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 13:27:13.564391 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 13:27:13.569596 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Dec 13 13:27:13.572063 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:13.572063 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:13.575091 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 13:27:13.576750 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:13.578119 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 13:27:13.588056 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 13:27:13.611578 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 13:27:13.611687 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 13:27:13.613873 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 13:27:13.615706 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 13:27:13.617599 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 13:27:13.618452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 13:27:13.633746 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 13:27:13.648042 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 13:27:13.655369 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:13.656569 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:13.658556 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 13:27:13.660272 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 13:27:13.660394 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Dec 13 13:27:13.662821 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 13:27:13.664801 systemd[1]: Stopped target basic.target - Basic System. Dec 13 13:27:13.666391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 13:27:13.668017 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 13:27:13.670162 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 13:27:13.672142 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 13:27:13.673994 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 13:27:13.675904 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 13:27:13.677957 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 13:27:13.679685 systemd[1]: Stopped target swap.target - Swaps. Dec 13 13:27:13.681188 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 13:27:13.681309 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 13:27:13.683563 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:13.685555 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:13.687440 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 13:27:13.691020 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:13.692258 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 13:27:13.692388 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 13:27:13.695107 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 13:27:13.695220 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 13:27:13.697139 systemd[1]: Stopped target paths.target - Path Units. Dec 13 13:27:13.698660 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 13:27:13.698762 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:13.700738 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 13:27:13.702245 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 13:27:13.703906 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 13:27:13.703994 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 13:27:13.706128 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 13:27:13.706221 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 13:27:13.707743 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 13:27:13.707849 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 13:27:13.709537 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 13:27:13.709635 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 13:27:13.721062 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 13:27:13.721940 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 13:27:13.722070 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:13.725128 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 13:27:13.726587 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 13:27:13.726707 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:13.729711 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 13:27:13.729968 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 13:27:13.733782 ignition[998]: INFO : Ignition 2.20.0 Dec 13 13:27:13.733782 ignition[998]: INFO : Stage: umount Dec 13 13:27:13.733782 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 13:27:13.733782 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 13:27:13.737594 ignition[998]: INFO : umount: umount passed Dec 13 13:27:13.737594 ignition[998]: INFO : Ignition finished successfully Dec 13 13:27:13.736485 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 13:27:13.736575 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 13:27:13.740868 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 13:27:13.741774 systemd[1]: Stopped target network.target - Network. Dec 13 13:27:13.744841 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 13:27:13.744932 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 13:27:13.746614 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 13:27:13.746659 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 13:27:13.748477 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 13:27:13.748522 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 13:27:13.750093 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 13:27:13.750138 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 13:27:13.752282 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 13:27:13.753913 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 13:27:13.756118 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 13:27:13.756203 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 13:27:13.758037 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 13:27:13.758122 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 13:27:13.762020 systemd-networkd[765]: eth0: DHCPv6 lease lost Dec 13 13:27:13.762260 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 13:27:13.762338 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:13.763986 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 13:27:13.764085 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 13:27:13.768062 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 13:27:13.768098 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:13.786007 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 13:27:13.787048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 13:27:13.787107 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:13.789096 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 13:27:13.789138 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:13.791018 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Dec 13 13:27:13.791062 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:13.793265 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:13.797047 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 13:27:13.797125 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 13:27:13.811197 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 13:27:13.811323 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:13.813619 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 13:27:13.813693 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 13:27:13.816760 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 13:27:13.816805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:13.817952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 13:27:13.817985 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:13.819711 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 13:27:13.819756 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 13:27:13.822350 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 13:27:13.822395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 13:27:13.824916 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 13:27:13.824961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 13:27:13.827652 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 13:27:13.827693 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 13:27:13.839027 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 13:27:13.840036 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 13:27:13.840093 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:13.842108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 13:27:13.842150 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:13.846808 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 13:27:13.846912 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 13:27:13.848309 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 13:27:13.850804 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 13:27:13.859432 systemd[1]: Switching root. Dec 13 13:27:13.882058 systemd-journald[240]: Journal stopped Dec 13 13:27:14.583969 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). 
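The journald SIGTERM above is the expected hand-off at switch-root: PID 1 stops the initrd journal instance and a new one starts on the real root. A small sketch, assuming the runtime journal is later flushed to disk as logged further below, for confirming the transition after boot:
  # Boots known to the persistent journal
  journalctl --list-boots
  # Messages from the switch-root unit in the current boot
  journalctl -b -u initrd-switch-root.service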
Dec 13 13:27:14.584029 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 13:27:14.584042 kernel: SELinux: policy capability open_perms=1 Dec 13 13:27:14.584052 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 13:27:14.584062 kernel: SELinux: policy capability always_check_network=0 Dec 13 13:27:14.584075 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 13:27:14.584085 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 13:27:14.584098 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 13:27:14.584108 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 13:27:14.584117 kernel: audit: type=1403 audit(1734096434.019:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 13:27:14.584128 systemd[1]: Successfully loaded SELinux policy in 34.171ms. Dec 13 13:27:14.584145 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.017ms. Dec 13 13:27:14.584156 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 13:27:14.584167 systemd[1]: Detected virtualization kvm. Dec 13 13:27:14.584178 systemd[1]: Detected architecture arm64. Dec 13 13:27:14.584188 systemd[1]: Detected first boot. Dec 13 13:27:14.584198 systemd[1]: Initializing machine ID from VM UUID. Dec 13 13:27:14.584208 zram_generator::config[1044]: No configuration found. Dec 13 13:27:14.584220 systemd[1]: Populated /etc with preset unit settings. Dec 13 13:27:14.584230 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 13:27:14.584240 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 13:27:14.584252 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 13:27:14.584263 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 13:27:14.584273 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 13:27:14.584288 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 13:27:14.584298 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 13:27:14.584308 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 13:27:14.584322 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 13:27:14.584333 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 13:27:14.584353 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 13:27:14.584366 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 13:27:14.584377 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 13:27:14.584387 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 13:27:14.584412 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 13:27:14.584422 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
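The systemd feature string and the SELinux policy-load messages above can be cross-checked on the running host. A brief sketch, assuming journalctl was built with PCRE2 support (the +PCRE2 flag in that same feature string):
  # Same compile-time feature flags as logged by PID 1
  systemctl --version
  # Kernel-side SELinux capability lines from this boot
  journalctl -b -k -g SELinux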
Dec 13 13:27:14.584438 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 13:27:14.584450 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 13:27:14.584460 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 13:27:14.584470 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 13:27:14.584480 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 13:27:14.584491 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 13:27:14.584501 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 13:27:14.584511 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 13:27:14.584524 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 13:27:14.584534 systemd[1]: Reached target slices.target - Slice Units. Dec 13 13:27:14.584545 systemd[1]: Reached target swap.target - Swaps. Dec 13 13:27:14.584556 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 13:27:14.584566 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 13:27:14.584577 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 13:27:14.584588 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 13:27:14.584598 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 13:27:14.584608 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 13:27:14.584619 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 13:27:14.584632 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 13:27:14.584642 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 13:27:14.584652 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 13:27:14.584662 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 13:27:14.584672 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 13:27:14.584682 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 13:27:14.584693 systemd[1]: Reached target machines.target - Containers. Dec 13 13:27:14.584702 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 13:27:14.584714 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:14.584724 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 13:27:14.584735 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 13:27:14.584745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:14.584755 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:14.584766 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:14.584776 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 13:27:14.584786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 13:27:14.584798 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 13:27:14.584810 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 13:27:14.584820 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 13:27:14.584830 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 13:27:14.584840 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 13:27:14.584850 kernel: fuse: init (API version 7.39) Dec 13 13:27:14.584859 kernel: loop: module loaded Dec 13 13:27:14.584869 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 13:27:14.584898 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 13:27:14.584909 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:27:14.584921 kernel: ACPI: bus type drm_connector registered Dec 13 13:27:14.584931 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:27:14.584941 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:14.584951 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:27:14.584961 systemd[1]: Stopped verity-setup.service. Dec 13 13:27:14.584993 systemd-journald[1111]: Collecting audit messages is disabled. Dec 13 13:27:14.585015 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:27:14.585028 systemd-journald[1111]: Journal started Dec 13 13:27:14.585052 systemd-journald[1111]: Runtime Journal (/run/log/journal/41d8945f79bf4c5588bd88c05c68f2e7) is 5.9M, max 47.3M, 41.4M free. Dec 13 13:27:14.392041 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:27:14.405462 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:27:14.405798 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:27:14.587038 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:27:14.588947 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:14.589566 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:27:14.590639 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:27:14.591812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:27:14.593015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:27:14.594980 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:27:14.596396 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:14.597840 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:27:14.598009 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:27:14.599426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:14.599558 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:14.601031 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:14.601179 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:14.602607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:14.602741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
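The journald line above reports the runtime journal at 5.9M used of a 47.3M cap, and the modprobe@ units confirm that fuse, loop and drm were loaded on demand. A quick sketch for verifying both from a shell:
  # Current journal footprint (runtime plus persistent, once flushed)
  journalctl --disk-usage
  # Modules pulled in by the modprobe@ template units
  lsmod | grep -E '^(fuse|loop|drm)'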
Dec 13 13:27:14.604209 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:27:14.604357 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:27:14.605665 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:14.605808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:14.607211 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:14.609918 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:27:14.611437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:27:14.623761 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:27:14.630973 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 13:27:14.632986 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:27:14.634030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:27:14.634069 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:14.635932 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:27:14.638047 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:27:14.640172 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:27:14.641205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:14.642733 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:27:14.644743 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:27:14.645981 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:14.650046 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:27:14.651911 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:14.652707 systemd-journald[1111]: Time spent on flushing to /var/log/journal/41d8945f79bf4c5588bd88c05c68f2e7 is 17.265ms for 839 entries. Dec 13 13:27:14.652707 systemd-journald[1111]: System Journal (/var/log/journal/41d8945f79bf4c5588bd88c05c68f2e7) is 8.0M, max 195.6M, 187.6M free. Dec 13 13:27:14.685576 systemd-journald[1111]: Received client request to flush runtime journal. Dec 13 13:27:14.685632 kernel: loop0: detected capacity change from 0 to 194096 Dec 13 13:27:14.654013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:14.658986 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:27:14.662127 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:27:14.664648 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:14.666019 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:27:14.667221 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 13 13:27:14.668795 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:27:14.677280 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 13:27:14.681259 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:27:14.682722 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:27:14.686951 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:27:14.690708 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:27:14.693175 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 13:27:14.715247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:14.717323 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:27:14.718085 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:27:14.718897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:27:14.730827 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:27:14.739064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:14.756648 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 13:27:14.756667 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 13:27:14.759905 kernel: loop1: detected capacity change from 0 to 116784 Dec 13 13:27:14.763229 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:14.800908 kernel: loop2: detected capacity change from 0 to 113552 Dec 13 13:27:14.835167 kernel: loop3: detected capacity change from 0 to 194096 Dec 13 13:27:14.842904 kernel: loop4: detected capacity change from 0 to 116784 Dec 13 13:27:14.848955 kernel: loop5: detected capacity change from 0 to 113552 Dec 13 13:27:14.852277 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 13:27:14.852661 (sd-merge)[1180]: Merged extensions into '/usr'. Dec 13 13:27:14.856405 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:27:14.856424 systemd[1]: Reloading... Dec 13 13:27:14.911969 zram_generator::config[1207]: No configuration found. Dec 13 13:27:14.941580 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:27:15.006251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:15.041918 systemd[1]: Reloading finished in 185 ms. Dec 13 13:27:15.068711 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:27:15.070234 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 13:27:15.089061 systemd[1]: Starting ensure-sysext.service... Dec 13 13:27:15.090968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:15.103228 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:27:15.103245 systemd[1]: Reloading... 
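systemd-sysext reports merging the containerd-flatcar, docker-flatcar and kubernetes extension images into /usr, after which PID 1 reloads its unit set. A minimal sketch for inspecting the merge state, assuming the standard sysext tooling shipped with systemd 255:
  # Hierarchies with merged extensions and the images backing them
  systemd-sysext status
  systemd-sysext list
  # /usr should now be an overlay combining the base image and the extensions
  findmnt /usr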
Dec 13 13:27:15.116121 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:27:15.116366 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:27:15.117068 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:27:15.117295 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Dec 13 13:27:15.117354 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Dec 13 13:27:15.120029 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:15.120044 systemd-tmpfiles[1243]: Skipping /boot Dec 13 13:27:15.128779 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:15.128799 systemd-tmpfiles[1243]: Skipping /boot Dec 13 13:27:15.161917 zram_generator::config[1273]: No configuration found. Dec 13 13:27:15.244147 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:15.280074 systemd[1]: Reloading finished in 176 ms. Dec 13 13:27:15.294987 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:27:15.308269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:15.314530 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:15.316932 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:27:15.319092 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:27:15.322062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:15.325078 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:15.329128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:27:15.334712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:15.340187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:15.347162 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:15.350176 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:15.351375 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:15.356150 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:27:15.358335 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:27:15.361055 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Dec 13 13:27:15.362809 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:15.362990 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:15.364610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:15.364754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:15.368015 systemd[1]: modprobe@loop.service: Deactivated successfully. 
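The systemd-tmpfiles messages above are informational: several tmpfiles.d fragments declare the same paths (for example /var/log/journal) and the later duplicates are ignored. A short sketch for locating the competing lines, assuming the stock configuration paths:
  # Dump the merged tmpfiles configuration and find the duplicate /var/log/journal entries
  systemd-tmpfiles --cat-config | grep -n '/var/log/journal'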
Dec 13 13:27:15.368141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:15.375797 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:27:15.383945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:15.389738 augenrules[1341]: No rules Dec 13 13:27:15.396197 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:15.401174 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:15.407165 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:15.408898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:15.410577 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:27:15.411949 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:27:15.412854 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:15.415033 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:27:15.416580 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:15.416750 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:15.421315 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:27:15.423778 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:15.423967 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:15.426385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:15.426523 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:15.429277 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:15.429406 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:15.449755 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1359) Dec 13 13:27:15.449824 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1360) Dec 13 13:27:15.447241 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:27:15.458453 systemd[1]: Finished ensure-sysext.service. Dec 13 13:27:15.468919 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1360) Dec 13 13:27:15.471208 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 13:27:15.478923 systemd-resolved[1309]: Positive Trust Anchors: Dec 13 13:27:15.478941 systemd-resolved[1309]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:15.478974 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:15.480137 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:15.482867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:15.484037 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:15.485550 systemd-resolved[1309]: Defaulting to hostname 'linux'. Dec 13 13:27:15.486324 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:15.488955 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:15.490992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:15.492053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:15.494263 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:15.498070 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:27:15.500039 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:27:15.500333 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:15.501765 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:15.501927 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:15.504305 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:15.504454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:15.505807 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:15.505984 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:15.507802 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:15.507969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:15.511017 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:27:15.511253 augenrules[1382]: /sbin/augenrules: No change Dec 13 13:27:15.516732 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:15.521783 augenrules[1413]: No rules Dec 13 13:27:15.525097 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:27:15.526245 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
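The DS record logged above is the root zone DNSSEC trust anchor built into systemd-resolved, and defaulting to hostname 'linux' simply means no hostname had been supplied by DHCP or the kernel command line yet. A small sketch for reviewing the resolver state once the system is up:
  # Global and per-link DNS configuration, including trust anchor and DNSSEC status
  resolvectl status
  # Configured DNS servers per link
  resolvectl dns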
Dec 13 13:27:15.526319 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:15.526683 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:15.526921 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:15.550960 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:27:15.572158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:15.576145 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:27:15.577955 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:27:15.583868 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:27:15.587077 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:27:15.613780 systemd-networkd[1394]: lo: Link UP Dec 13 13:27:15.613790 systemd-networkd[1394]: lo: Gained carrier Dec 13 13:27:15.618046 systemd-networkd[1394]: Enumeration completed Dec 13 13:27:15.619983 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:27:15.618273 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:15.619550 systemd[1]: Reached target network.target - Network. Dec 13 13:27:15.623513 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:15.623525 systemd-networkd[1394]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:15.624635 systemd-networkd[1394]: eth0: Link UP Dec 13 13:27:15.624639 systemd-networkd[1394]: eth0: Gained carrier Dec 13 13:27:15.624653 systemd-networkd[1394]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:15.626086 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:27:15.627893 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:15.651942 systemd-networkd[1394]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 13:27:15.653157 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Dec 13 13:27:15.211076 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 13:27:15.214985 systemd-journald[1111]: Time jumped backwards, rotating. Dec 13 13:27:15.211087 systemd-resolved[1309]: Clock change detected. Flushing caches. Dec 13 13:27:15.211132 systemd-timesyncd[1395]: Initial clock synchronization to Fri 2024-12-13 13:27:15.210957 UTC. Dec 13 13:27:15.219525 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:27:15.221011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:15.222810 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:15.224164 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:27:15.225335 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:27:15.226677 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
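eth0 obtains 10.0.0.127/16 via DHCP and systemd-timesyncd then steps the clock back slightly, which is why the timestamps above briefly run backwards and the journal notes a rotation. A brief sketch for confirming both after boot:
  # Link state, addresses and the DHCP lease on eth0
  networkctl status eth0
  # NTP server, offset and polling state reported by systemd-timesyncd
  timedatectl timesync-status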
Dec 13 13:27:15.227767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:27:15.229054 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:27:15.230222 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:27:15.230254 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:15.231071 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:15.232717 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:27:15.235067 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:27:15.240897 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:27:15.243146 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:27:15.244702 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:27:15.245858 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:15.246808 systemd[1]: Reached target basic.target - Basic System. Dec 13 13:27:15.247772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:15.247805 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:15.248674 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:27:15.251156 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:27:15.250665 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:27:15.254226 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:27:15.260287 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:27:15.261092 jq[1442]: false Dec 13 13:27:15.261409 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:27:15.262495 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:27:15.264675 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:27:15.269220 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:27:15.274351 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:27:15.275698 extend-filesystems[1443]: Found loop3 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found loop4 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found loop5 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda1 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda2 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda3 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found usr Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda4 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda6 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda7 Dec 13 13:27:15.277633 extend-filesystems[1443]: Found vda9 Dec 13 13:27:15.277633 extend-filesystems[1443]: Checking size of /dev/vda9 Dec 13 13:27:15.276765 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
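extend-filesystems enumerates the loop devices and vda partitions before growing the root filesystem, while docker.socket and sshd.socket are set up for socket activation. A quick sketch for viewing the same inventory interactively:
  # Partition layout that extend-filesystems just walked
  lsblk /dev/vda
  # Socket units currently listening, including docker.socket and sshd.socket
  systemctl list-sockets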
Dec 13 13:27:15.283599 dbus-daemon[1441]: [system] SELinux support is enabled Dec 13 13:27:15.277208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:27:15.278279 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:27:15.294291 jq[1457]: true Dec 13 13:27:15.281175 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 13:27:15.288527 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:27:15.293569 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:27:15.300587 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:27:15.300849 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:27:15.301166 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:27:15.301351 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:27:15.302784 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:27:15.302946 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:27:15.316481 extend-filesystems[1443]: Resized partition /dev/vda9 Dec 13 13:27:15.318312 extend-filesystems[1471]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:27:15.321684 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:27:15.324184 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:27:15.324219 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:27:15.325809 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:27:15.325832 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 13:27:15.336484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1357) Dec 13 13:27:15.336572 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 13:27:15.336586 jq[1463]: true Dec 13 13:27:15.344488 update_engine[1456]: I20241213 13:27:15.343863 1456 main.cc:92] Flatcar Update Engine starting Dec 13 13:27:15.347262 update_engine[1456]: I20241213 13:27:15.346036 1456 update_check_scheduler.cc:74] Next update check in 2m43s Dec 13 13:27:15.347486 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:27:15.351424 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 13:27:15.351864 systemd-logind[1450]: New seat seat0. Dec 13 13:27:15.352360 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 13:27:15.354120 systemd[1]: Started systemd-logind.service - User Login Management. 
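The EXT4 messages above show /dev/vda9 being resized online from 553472 to 1864699 4 KiB blocks, roughly 2.1 GiB to 7.1 GiB, to fill its partition. A minimal sketch for verifying the result once extend-filesystems finishes just below:
  # Mounted size of the root filesystem after the online resize
  df -h /
  # Superblock block count should now read 1864699
  tune2fs -l /dev/vda9 | grep -i 'block count'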
Dec 13 13:27:15.375058 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 13:27:15.383635 extend-filesystems[1471]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:27:15.383635 extend-filesystems[1471]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 13:27:15.383635 extend-filesystems[1471]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 13:27:15.391432 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Dec 13 13:27:15.393200 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:27:15.393382 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:27:15.416154 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:27:15.420027 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:27:15.420091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:27:15.422693 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 13:27:15.536987 containerd[1464]: time="2024-12-13T13:27:15.536849988Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:27:15.558875 containerd[1464]: time="2024-12-13T13:27:15.558371108Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.560158 containerd[1464]: time="2024-12-13T13:27:15.560119188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560230948Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560264108Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560418428Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560436508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560488748Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560499348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560669228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560684188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560696268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560705148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560771068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561079 containerd[1464]: time="2024-12-13T13:27:15.560945788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561297 containerd[1464]: time="2024-12-13T13:27:15.561035868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:15.561348 containerd[1464]: time="2024-12-13T13:27:15.561331228Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:27:15.561472 containerd[1464]: time="2024-12-13T13:27:15.561453668Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:27:15.561594 containerd[1464]: time="2024-12-13T13:27:15.561576668Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:27:15.564772 containerd[1464]: time="2024-12-13T13:27:15.564746108Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:27:15.564876 containerd[1464]: time="2024-12-13T13:27:15.564861988Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 13:27:15.564939 containerd[1464]: time="2024-12-13T13:27:15.564926268Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:27:15.564994 containerd[1464]: time="2024-12-13T13:27:15.564981908Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:27:15.565044 containerd[1464]: time="2024-12-13T13:27:15.565032548Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:27:15.565290 containerd[1464]: time="2024-12-13T13:27:15.565268228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:27:15.565620 containerd[1464]: time="2024-12-13T13:27:15.565589908Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:27:15.565756 containerd[1464]: time="2024-12-13T13:27:15.565736748Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:27:15.565778 containerd[1464]: time="2024-12-13T13:27:15.565760948Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:27:15.565807 containerd[1464]: time="2024-12-13T13:27:15.565777428Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Dec 13 13:27:15.565807 containerd[1464]: time="2024-12-13T13:27:15.565792148Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565807 containerd[1464]: time="2024-12-13T13:27:15.565803908Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565856 containerd[1464]: time="2024-12-13T13:27:15.565816828Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565856 containerd[1464]: time="2024-12-13T13:27:15.565830748Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565856 containerd[1464]: time="2024-12-13T13:27:15.565844348Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565904 containerd[1464]: time="2024-12-13T13:27:15.565858788Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565904 containerd[1464]: time="2024-12-13T13:27:15.565872068Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565904 containerd[1464]: time="2024-12-13T13:27:15.565883068Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:27:15.565954 containerd[1464]: time="2024-12-13T13:27:15.565903268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.565954 containerd[1464]: time="2024-12-13T13:27:15.565917828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.565954 containerd[1464]: time="2024-12-13T13:27:15.565929828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.565954 containerd[1464]: time="2024-12-13T13:27:15.565941468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.565954 containerd[1464]: time="2024-12-13T13:27:15.565953028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.565966548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.565978588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.565991028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.566003268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.566017268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566036 containerd[1464]: time="2024-12-13T13:27:15.566029028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566040508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566073948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566090668Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566110068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566122668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566149 containerd[1464]: time="2024-12-13T13:27:15.566133108Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:27:15.566900 containerd[1464]: time="2024-12-13T13:27:15.566862868Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:27:15.566930 containerd[1464]: time="2024-12-13T13:27:15.566900828Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:27:15.566930 containerd[1464]: time="2024-12-13T13:27:15.566913948Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:27:15.566930 containerd[1464]: time="2024-12-13T13:27:15.566926428Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:27:15.566981 containerd[1464]: time="2024-12-13T13:27:15.566935468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:27:15.566981 containerd[1464]: time="2024-12-13T13:27:15.566955708Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:27:15.566981 containerd[1464]: time="2024-12-13T13:27:15.566965508Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:27:15.566981 containerd[1464]: time="2024-12-13T13:27:15.566976228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:15.567310 containerd[1464]: time="2024-12-13T13:27:15.567252708Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:27:15.567310 containerd[1464]: time="2024-12-13T13:27:15.567305428Z" level=info msg="Connect containerd service" Dec 13 13:27:15.567433 containerd[1464]: time="2024-12-13T13:27:15.567339188Z" level=info msg="using legacy CRI server" Dec 13 13:27:15.567433 containerd[1464]: time="2024-12-13T13:27:15.567347228Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:27:15.567614 containerd[1464]: time="2024-12-13T13:27:15.567581708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:27:15.568279 containerd[1464]: time="2024-12-13T13:27:15.568241908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:15.568866 
containerd[1464]: time="2024-12-13T13:27:15.568527028Z" level=info msg="Start subscribing containerd event" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568584628Z" level=info msg="Start recovering state" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568648268Z" level=info msg="Start event monitor" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568659428Z" level=info msg="Start snapshots syncer" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568673508Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568681388Z" level=info msg="Start streaming server" Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568807228Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:27:15.568866 containerd[1464]: time="2024-12-13T13:27:15.568850228Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:27:15.569023 containerd[1464]: time="2024-12-13T13:27:15.568895028Z" level=info msg="containerd successfully booted in 0.035781s" Dec 13 13:27:15.569010 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:27:15.912505 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:27:15.930823 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:27:15.941287 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:27:15.947287 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:27:15.948130 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:27:15.951073 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:27:15.963738 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 13:27:15.976351 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:27:15.978309 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 13:27:15.979562 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:27:16.690215 systemd-networkd[1394]: eth0: Gained IPv6LL Dec 13 13:27:16.692635 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:27:16.694560 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:27:16.708290 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:27:16.710770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:16.712865 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:27:16.728232 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:27:16.728402 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:27:16.730390 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:27:16.731671 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:27:17.188477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:17.190112 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:27:17.191238 systemd[1]: Startup finished in 565ms (kernel) + 4.316s (initrd) + 3.651s (userspace) = 8.533s. 
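The containerd error just above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a first boot: the CRI plugin's conf syncer watches /etc/cni/net.d (per the config dump, NetworkPluginConfDir:/etc/cni/net.d, NetworkPluginMaxConfNum:1) and Calico only drops its own conflist once calico-node is running. Purely as an illustration of what the syncer is waiting for, the hedged sketch below writes a minimal bridge conflist; the network name is a placeholder and the subnet simply mirrors the pod CIDR reported later in this log.

#!/usr/bin/env python3
# Illustrative sketch only: write a minimal CNI conflist so containerd's CRI plugin
# (NetworkPluginConfDir=/etc/cni/net.d in the config dump above) could initialize.
# The network name is a hypothetical placeholder; on this host Calico later installs
# its own config, so this is not needed in practice.
import json
import pathlib

conf_dir = pathlib.Path("/etc/cni/net.d")  # path taken from the containerd config dump
conflist = {
    "cniVersion": "0.3.1",
    "name": "demo-bridge",                 # hypothetical name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",  # mirrors the pod CIDR reported later in the log
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

conf_dir.mkdir(parents=True, exist_ok=True)
target = conf_dir / "10-demo-bridge.conflist"
target.write_text(json.dumps(conflist, indent=2))
print("wrote", target)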
Dec 13 13:27:17.192821 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:17.208517 agetty[1522]: failed to open credentials directory Dec 13 13:27:17.208612 agetty[1521]: failed to open credentials directory Dec 13 13:27:17.650162 kubelet[1545]: E1213 13:27:17.650076 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:17.651980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:17.652135 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:27:21.756635 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:27:21.757778 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:37418.service - OpenSSH per-connection server daemon (10.0.0.1:37418). Dec 13 13:27:21.818371 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 37418 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:21.820174 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:21.828248 systemd-logind[1450]: New session 1 of user core. Dec 13 13:27:21.829271 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:27:21.837283 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:27:21.846330 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:27:21.849420 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:27:21.855561 (systemd)[1563]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:27:21.934947 systemd[1563]: Queued start job for default target default.target. Dec 13 13:27:21.948406 systemd[1563]: Created slice app.slice - User Application Slice. Dec 13 13:27:21.948461 systemd[1563]: Reached target paths.target - Paths. Dec 13 13:27:21.948474 systemd[1563]: Reached target timers.target - Timers. Dec 13 13:27:21.949811 systemd[1563]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:27:21.960121 systemd[1563]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:27:21.960190 systemd[1563]: Reached target sockets.target - Sockets. Dec 13 13:27:21.960203 systemd[1563]: Reached target basic.target - Basic System. Dec 13 13:27:21.960248 systemd[1563]: Reached target default.target - Main User Target. Dec 13 13:27:21.960275 systemd[1563]: Startup finished in 99ms. Dec 13 13:27:21.960520 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:27:21.962114 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:27:22.024720 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:37426.service - OpenSSH per-connection server daemon (10.0.0.1:37426). Dec 13 13:27:22.070522 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 37426 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.071702 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.076301 systemd-logind[1450]: New session 2 of user core. 
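The kubelet crash above is the usual first-boot failure: /var/lib/kubelet/config.yaml does not exist until kubeadm (or an equivalent provisioner) writes it, so the unit exits with status 1 and is retried later. For reference only, a minimal KubeletConfiguration could be generated as in the hedged sketch below; the field values are illustrative assumptions, not this node's actual configuration.

#!/usr/bin/env python3
# Hedged sketch: write a minimal KubeletConfiguration to the path the failing unit
# expects (/var/lib/kubelet/config.yaml). Normally kubeadm generates this file;
# the values below are illustrative assumptions, not the node's real config.
import pathlib
import textwrap

config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd            # "systemd" matches the CgroupDriver reported later in the container manager dump
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
print("wrote", path)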
Dec 13 13:27:22.082222 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:27:22.133537 sshd[1576]: Connection closed by 10.0.0.1 port 37426 Dec 13 13:27:22.133848 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:22.150400 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:37426.service: Deactivated successfully. Dec 13 13:27:22.151765 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:27:22.153524 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:27:22.160327 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:37428.service - OpenSSH per-connection server daemon (10.0.0.1:37428). Dec 13 13:27:22.161272 systemd-logind[1450]: Removed session 2. Dec 13 13:27:22.199038 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 37428 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.200299 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.204134 systemd-logind[1450]: New session 3 of user core. Dec 13 13:27:22.214236 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:27:22.261167 sshd[1583]: Connection closed by 10.0.0.1 port 37428 Dec 13 13:27:22.261502 sshd-session[1581]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:22.274341 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:37428.service: Deactivated successfully. Dec 13 13:27:22.277270 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:27:22.278608 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:27:22.287393 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:37442.service - OpenSSH per-connection server daemon (10.0.0.1:37442). Dec 13 13:27:22.288435 systemd-logind[1450]: Removed session 3. Dec 13 13:27:22.325376 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 37442 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.326544 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.330111 systemd-logind[1450]: New session 4 of user core. Dec 13 13:27:22.343181 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:27:22.393819 sshd[1590]: Connection closed by 10.0.0.1 port 37442 Dec 13 13:27:22.394176 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:22.404294 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:37442.service: Deactivated successfully. Dec 13 13:27:22.405667 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:27:22.408748 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:27:22.409310 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:37454.service - OpenSSH per-connection server daemon (10.0.0.1:37454). Dec 13 13:27:22.410000 systemd-logind[1450]: Removed session 4. Dec 13 13:27:22.452019 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 37454 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.453263 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.457109 systemd-logind[1450]: New session 5 of user core. Dec 13 13:27:22.469199 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 13 13:27:22.528606 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:27:22.528910 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:22.545972 sudo[1598]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:22.550280 sshd[1597]: Connection closed by 10.0.0.1 port 37454 Dec 13 13:27:22.550749 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:22.568399 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:37454.service: Deactivated successfully. Dec 13 13:27:22.571627 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:27:22.573210 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:27:22.574948 systemd[1]: Started sshd@5-10.0.0.127:22-10.0.0.1:47720.service - OpenSSH per-connection server daemon (10.0.0.1:47720). Dec 13 13:27:22.575559 systemd-logind[1450]: Removed session 5. Dec 13 13:27:22.616600 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 47720 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.617773 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.621315 systemd-logind[1450]: New session 6 of user core. Dec 13 13:27:22.630185 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 13:27:22.680215 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:27:22.680504 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:22.683458 sudo[1607]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:22.687918 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:27:22.688210 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:22.710415 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:27:22.733079 augenrules[1629]: No rules Dec 13 13:27:22.734204 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:22.735152 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:22.736028 sudo[1606]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:22.738314 sshd[1605]: Connection closed by 10.0.0.1 port 47720 Dec 13 13:27:22.738192 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:22.744279 systemd[1]: sshd@5-10.0.0.127:22-10.0.0.1:47720.service: Deactivated successfully. Dec 13 13:27:22.745763 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:27:22.748253 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:27:22.749412 systemd[1]: Started sshd@6-10.0.0.127:22-10.0.0.1:47724.service - OpenSSH per-connection server daemon (10.0.0.1:47724). Dec 13 13:27:22.751477 systemd-logind[1450]: Removed session 6. Dec 13 13:27:22.790818 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:q9cWvSR3bBxu+L28Z4JmOHhvW5qF2BbU+1GVJNGhIf4 Dec 13 13:27:22.791905 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:27:22.796872 systemd-logind[1450]: New session 7 of user core. Dec 13 13:27:22.807288 systemd[1]: Started session-7.scope - Session 7 of User core. 
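The augenrules "No rules" message above simply reflects that the two files under /etc/audit/rules.d were removed before audit-rules was restarted, so the compiled rule set is empty. A quick way to confirm what is actually loaded in the kernel is auditctl -l; the sketch below wraps that check in Python and assumes the standard auditd layout.

#!/usr/bin/env python3
# Hedged sketch: confirm why augenrules reported "No rules" after the rules.d files
# were removed, then show what the kernel currently has loaded. Assumes the standard
# auditd layout (/etc/audit/rules.d) and that auditctl is installed and run as root.
import pathlib
import subprocess

rules_dir = pathlib.Path("/etc/audit/rules.d")
remaining = sorted(p.name for p in rules_dir.glob("*.rules")) if rules_dir.is_dir() else []
print("rule files remaining in", rules_dir, ":", remaining or "none")

# 'auditctl -l' prints the rules currently loaded in the kernel ("No rules" if empty).
result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())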
Dec 13 13:27:22.858882 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:27:22.859461 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:27:22.880329 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:27:22.894410 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:27:22.895187 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:27:23.386261 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:23.396275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:23.414451 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit session-7.scope)... Dec 13 13:27:23.414469 systemd[1]: Reloading... Dec 13 13:27:23.490089 zram_generator::config[1725]: No configuration found. Dec 13 13:27:23.663691 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:23.713902 systemd[1]: Reloading finished in 299 ms. Dec 13 13:27:23.752648 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:23.755065 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:27:23.755256 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:23.764391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:23.849098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:23.852711 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:27:23.889748 kubelet[1772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:27:23.889748 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:27:23.889748 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:27:23.890920 kubelet[1772]: I1213 13:27:23.890572 1772 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:27:25.425404 kubelet[1772]: I1213 13:27:25.424535 1772 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:27:25.425404 kubelet[1772]: I1213 13:27:25.424570 1772 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:27:25.425404 kubelet[1772]: I1213 13:27:25.424869 1772 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:27:25.459957 kubelet[1772]: I1213 13:27:25.459930 1772 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:27:25.467596 kubelet[1772]: I1213 13:27:25.467542 1772 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:27:25.468569 kubelet[1772]: I1213 13:27:25.468529 1772 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:27:25.468740 kubelet[1772]: I1213 13:27:25.468567 1772 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:27:25.468827 kubelet[1772]: I1213 13:27:25.468804 1772 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:27:25.468827 kubelet[1772]: I1213 13:27:25.468813 1772 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:27:25.469114 kubelet[1772]: I1213 13:27:25.469087 1772 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:27:25.471811 kubelet[1772]: I1213 13:27:25.471785 1772 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:27:25.471811 kubelet[1772]: I1213 13:27:25.471809 1772 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:27:25.472815 kubelet[1772]: I1213 13:27:25.472027 1772 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:27:25.472815 kubelet[1772]: I1213 13:27:25.472293 1772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:27:25.472815 kubelet[1772]: E1213 13:27:25.472450 1772 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:25.472815 kubelet[1772]: E1213 13:27:25.472599 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:25.473675 kubelet[1772]: I1213 13:27:25.473652 1772 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:27:25.474099 kubelet[1772]: I1213 13:27:25.474085 1772 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:27:25.474235 kubelet[1772]: W1213 13:27:25.474221 1772 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:27:25.475858 kubelet[1772]: I1213 13:27:25.475082 1772 server.go:1264] "Started kubelet" Dec 13 13:27:25.475858 kubelet[1772]: I1213 13:27:25.475468 1772 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:27:25.476903 kubelet[1772]: I1213 13:27:25.476845 1772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:27:25.477291 kubelet[1772]: I1213 13:27:25.477270 1772 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:27:25.477528 kubelet[1772]: I1213 13:27:25.477471 1772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:27:25.477659 kubelet[1772]: I1213 13:27:25.477604 1772 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:27:25.480901 kubelet[1772]: E1213 13:27:25.480567 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:25.480985 kubelet[1772]: I1213 13:27:25.480916 1772 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:27:25.481309 kubelet[1772]: I1213 13:27:25.481028 1772 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 13:27:25.482007 kubelet[1772]: I1213 13:27:25.481983 1772 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:27:25.482080 kubelet[1772]: W1213 13:27:25.482025 1772 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:27:25.482080 kubelet[1772]: E1213 13:27:25.482074 1772 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:27:25.482220 kubelet[1772]: W1213 13:27:25.482172 1772 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.127" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:27:25.482260 kubelet[1772]: E1213 13:27:25.482240 1772 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.127" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:27:25.482930 kubelet[1772]: E1213 13:27:25.482883 1772 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:27:25.484063 kubelet[1772]: I1213 13:27:25.483814 1772 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:27:25.484063 kubelet[1772]: I1213 13:27:25.483902 1772 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:27:25.484768 kubelet[1772]: E1213 13:27:25.484460 1772 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.127.1810bf878ece7f84 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.127,UID:10.0.0.127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.127,},FirstTimestamp:2024-12-13 13:27:25.475028868 +0000 UTC m=+1.619300201,LastTimestamp:2024-12-13 13:27:25.475028868 +0000 UTC m=+1.619300201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.127,}" Dec 13 13:27:25.485044 kubelet[1772]: I1213 13:27:25.484992 1772 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:27:25.490812 kubelet[1772]: E1213 13:27:25.490770 1772 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.127\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 13:27:25.491052 kubelet[1772]: W1213 13:27:25.491025 1772 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:27:25.491104 kubelet[1772]: E1213 13:27:25.491088 1772 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:27:25.492034 kubelet[1772]: E1213 13:27:25.491935 1772 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.127.1810bf878f462da4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.127,UID:10.0.0.127,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.127,},FirstTimestamp:2024-12-13 13:27:25.482872228 +0000 UTC m=+1.627143561,LastTimestamp:2024-12-13 13:27:25.482872228 +0000 UTC m=+1.627143561,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.127,}" Dec 13 13:27:25.498430 kubelet[1772]: I1213 13:27:25.498408 1772 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:27:25.498857 
kubelet[1772]: I1213 13:27:25.498554 1772 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:27:25.498857 kubelet[1772]: I1213 13:27:25.498575 1772 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:27:25.563252 kubelet[1772]: I1213 13:27:25.563221 1772 policy_none.go:49] "None policy: Start" Dec 13 13:27:25.564110 kubelet[1772]: I1213 13:27:25.564081 1772 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:27:25.564110 kubelet[1772]: I1213 13:27:25.564112 1772 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:27:25.569942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:27:25.580989 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:27:25.582019 kubelet[1772]: I1213 13:27:25.581986 1772 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.127" Dec 13 13:27:25.585607 kubelet[1772]: I1213 13:27:25.585573 1772 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.127" Dec 13 13:27:25.585987 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:27:25.591203 kubelet[1772]: I1213 13:27:25.591164 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:27:25.592962 kubelet[1772]: I1213 13:27:25.592916 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:27:25.593040 kubelet[1772]: I1213 13:27:25.593018 1772 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:27:25.593040 kubelet[1772]: I1213 13:27:25.593026 1772 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:27:25.593040 kubelet[1772]: I1213 13:27:25.593036 1772 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:27:25.593121 kubelet[1772]: E1213 13:27:25.593088 1772 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:27:25.593289 kubelet[1772]: I1213 13:27:25.593242 1772 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:27:25.593369 kubelet[1772]: I1213 13:27:25.593351 1772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:27:25.595961 kubelet[1772]: E1213 13:27:25.595804 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:25.596351 kubelet[1772]: E1213 13:27:25.596331 1772 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.127\" not found" Dec 13 13:27:25.696124 kubelet[1772]: E1213 13:27:25.696013 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:25.796163 kubelet[1772]: E1213 13:27:25.796121 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:25.869120 sudo[1640]: pam_unix(sudo:session): session closed for user root Dec 13 13:27:25.870279 sshd[1639]: Connection closed by 10.0.0.1 port 47724 Dec 13 13:27:25.870608 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Dec 13 13:27:25.873115 systemd[1]: sshd@6-10.0.0.127:22-10.0.0.1:47724.service: Deactivated successfully. 
Dec 13 13:27:25.874656 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:27:25.876331 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:27:25.877271 systemd-logind[1450]: Removed session 7. Dec 13 13:27:25.896527 kubelet[1772]: E1213 13:27:25.896492 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:25.997506 kubelet[1772]: E1213 13:27:25.997367 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.098273 kubelet[1772]: E1213 13:27:26.098216 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.198925 kubelet[1772]: E1213 13:27:26.198878 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.299660 kubelet[1772]: E1213 13:27:26.299561 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.400182 kubelet[1772]: E1213 13:27:26.400142 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.428433 kubelet[1772]: I1213 13:27:26.428353 1772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:27:26.428872 kubelet[1772]: W1213 13:27:26.428530 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:27:26.428872 kubelet[1772]: W1213 13:27:26.428564 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:27:26.428872 kubelet[1772]: W1213 13:27:26.428585 1772 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:27:26.472788 kubelet[1772]: E1213 13:27:26.472748 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:26.501098 kubelet[1772]: E1213 13:27:26.501034 1772 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Dec 13 13:27:26.602920 kubelet[1772]: I1213 13:27:26.602735 1772 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:27:26.603876 containerd[1464]: time="2024-12-13T13:27:26.603322388Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
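The earlier "system:anonymous cannot list ..." reflector errors stop around this point because the kubelet's TLS bootstrap completes and it reconnects with its rotated client certificate ("Certificate rotation detected, shutting down client connections to start using new credentials"). With client rotation on, the kubelet commonly keeps its current client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem; the sketch below only checks that the expected credential files exist, and the paths are typical kubeadm-style defaults rather than values read from this host.

#!/usr/bin/env python3
# Hedged sketch: after "Certificate rotation detected", the kubelet should be using a
# bootstrapped client certificate instead of anonymous credentials. The paths below
# are common defaults for a kubeadm-style setup; adjust for other provisioners.
import pathlib

candidates = [
    pathlib.Path("/var/lib/kubelet/pki/kubelet-client-current.pem"),  # rotated client cert symlink
    pathlib.Path("/etc/kubernetes/kubelet.conf"),                     # kubeconfig pointing at that cert
]
for path in candidates:
    print(f"{path}: {'present' if path.exists() else 'missing'}")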
Dec 13 13:27:26.604509 kubelet[1772]: I1213 13:27:26.603524 1772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:27:27.473663 kubelet[1772]: E1213 13:27:27.473546 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:27.474512 kubelet[1772]: I1213 13:27:27.473628 1772 apiserver.go:52] "Watching apiserver" Dec 13 13:27:27.478938 kubelet[1772]: I1213 13:27:27.478225 1772 topology_manager.go:215] "Topology Admit Handler" podUID="6460d141-485e-4722-bdeb-f495d785a82d" podNamespace="calico-system" podName="calico-node-hz8hn" Dec 13 13:27:27.478938 kubelet[1772]: I1213 13:27:27.478334 1772 topology_manager.go:215] "Topology Admit Handler" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" podNamespace="calico-system" podName="csi-node-driver-xtgnp" Dec 13 13:27:27.480724 kubelet[1772]: I1213 13:27:27.480113 1772 topology_manager.go:215] "Topology Admit Handler" podUID="2bca7f70-03f0-4786-ba23-e18d19a34260" podNamespace="kube-system" podName="kube-proxy-dbj2h" Dec 13 13:27:27.480724 kubelet[1772]: E1213 13:27:27.480497 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:27.481573 kubelet[1772]: I1213 13:27:27.481503 1772 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:27:27.489942 systemd[1]: Created slice kubepods-besteffort-pod6460d141_485e_4722_bdeb_f495d785a82d.slice - libcontainer container kubepods-besteffort-pod6460d141_485e_4722_bdeb_f495d785a82d.slice. 
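The recurring "Unable to read config path ... /etc/kubernetes/manifests" warning only means the static pod directory configured via staticPodPath does not exist yet; on a node that runs no static pods it is harmless. Creating the directory silences it, as in the sketch below (the path is taken from the log's own "Adding static pod path" line).

#!/usr/bin/env python3
# Hedged sketch: create the static pod directory the kubelet is watching so the
# periodic "Unable to read config path" warning goes away. The path comes from the
# log itself ("Adding static pod path" path="/etc/kubernetes/manifests").
import pathlib

manifests = pathlib.Path("/etc/kubernetes/manifests")
manifests.mkdir(parents=True, exist_ok=True)   # no-op if it already exists
print(manifests, "exists:", manifests.is_dir())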
Dec 13 13:27:27.494878 kubelet[1772]: I1213 13:27:27.494480 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6460d141-485e-4722-bdeb-f495d785a82d-tigera-ca-bundle\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.494878 kubelet[1772]: I1213 13:27:27.494531 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-var-lib-calico\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.494878 kubelet[1772]: I1213 13:27:27.494554 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-flexvol-driver-host\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.494878 kubelet[1772]: I1213 13:27:27.494570 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8g66\" (UniqueName: \"kubernetes.io/projected/6460d141-485e-4722-bdeb-f495d785a82d-kube-api-access-g8g66\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.494878 kubelet[1772]: I1213 13:27:27.494586 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6db58e4d-8f86-4eb5-876a-c966f0f897e6-kubelet-dir\") pod \"csi-node-driver-xtgnp\" (UID: \"6db58e4d-8f86-4eb5-876a-c966f0f897e6\") " pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:27.495185 kubelet[1772]: I1213 13:27:27.494603 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-xtables-lock\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495185 kubelet[1772]: I1213 13:27:27.494619 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v8pj\" (UniqueName: \"kubernetes.io/projected/2bca7f70-03f0-4786-ba23-e18d19a34260-kube-api-access-2v8pj\") pod \"kube-proxy-dbj2h\" (UID: \"2bca7f70-03f0-4786-ba23-e18d19a34260\") " pod="kube-system/kube-proxy-dbj2h" Dec 13 13:27:27.495185 kubelet[1772]: I1213 13:27:27.494635 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-cni-net-dir\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495185 kubelet[1772]: I1213 13:27:27.494650 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2bca7f70-03f0-4786-ba23-e18d19a34260-kube-proxy\") pod \"kube-proxy-dbj2h\" (UID: \"2bca7f70-03f0-4786-ba23-e18d19a34260\") " pod="kube-system/kube-proxy-dbj2h" Dec 13 13:27:27.495185 kubelet[1772]: I1213 
13:27:27.494664 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bca7f70-03f0-4786-ba23-e18d19a34260-xtables-lock\") pod \"kube-proxy-dbj2h\" (UID: \"2bca7f70-03f0-4786-ba23-e18d19a34260\") " pod="kube-system/kube-proxy-dbj2h" Dec 13 13:27:27.495542 kubelet[1772]: I1213 13:27:27.494679 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-policysync\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495542 kubelet[1772]: I1213 13:27:27.494703 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-cni-bin-dir\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495542 kubelet[1772]: I1213 13:27:27.494719 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6db58e4d-8f86-4eb5-876a-c966f0f897e6-registration-dir\") pod \"csi-node-driver-xtgnp\" (UID: \"6db58e4d-8f86-4eb5-876a-c966f0f897e6\") " pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:27.495542 kubelet[1772]: I1213 13:27:27.494734 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpp42\" (UniqueName: \"kubernetes.io/projected/6db58e4d-8f86-4eb5-876a-c966f0f897e6-kube-api-access-bpp42\") pod \"csi-node-driver-xtgnp\" (UID: \"6db58e4d-8f86-4eb5-876a-c966f0f897e6\") " pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:27.495542 kubelet[1772]: I1213 13:27:27.494756 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bca7f70-03f0-4786-ba23-e18d19a34260-lib-modules\") pod \"kube-proxy-dbj2h\" (UID: \"2bca7f70-03f0-4786-ba23-e18d19a34260\") " pod="kube-system/kube-proxy-dbj2h" Dec 13 13:27:27.495639 kubelet[1772]: I1213 13:27:27.494770 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-lib-modules\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495639 kubelet[1772]: I1213 13:27:27.494784 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6460d141-485e-4722-bdeb-f495d785a82d-node-certs\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495639 kubelet[1772]: I1213 13:27:27.494799 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-var-run-calico\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.495639 kubelet[1772]: I1213 13:27:27.494813 1772 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6db58e4d-8f86-4eb5-876a-c966f0f897e6-varrun\") pod \"csi-node-driver-xtgnp\" (UID: \"6db58e4d-8f86-4eb5-876a-c966f0f897e6\") " pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:27.495639 kubelet[1772]: I1213 13:27:27.494826 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6db58e4d-8f86-4eb5-876a-c966f0f897e6-socket-dir\") pod \"csi-node-driver-xtgnp\" (UID: \"6db58e4d-8f86-4eb5-876a-c966f0f897e6\") " pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:27.495728 kubelet[1772]: I1213 13:27:27.494847 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6460d141-485e-4722-bdeb-f495d785a82d-cni-log-dir\") pod \"calico-node-hz8hn\" (UID: \"6460d141-485e-4722-bdeb-f495d785a82d\") " pod="calico-system/calico-node-hz8hn" Dec 13 13:27:27.505690 systemd[1]: Created slice kubepods-besteffort-pod2bca7f70_03f0_4786_ba23_e18d19a34260.slice - libcontainer container kubepods-besteffort-pod2bca7f70_03f0_4786_ba23_e18d19a34260.slice. Dec 13 13:27:27.596928 kubelet[1772]: E1213 13:27:27.596893 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.596928 kubelet[1772]: W1213 13:27:27.596917 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.597031 kubelet[1772]: E1213 13:27:27.596941 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:27.597235 kubelet[1772]: E1213 13:27:27.597212 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.597235 kubelet[1772]: W1213 13:27:27.597230 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.597309 kubelet[1772]: E1213 13:27:27.597245 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:27.599311 kubelet[1772]: E1213 13:27:27.599291 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.599461 kubelet[1772]: W1213 13:27:27.599444 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.599545 kubelet[1772]: E1213 13:27:27.599533 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:27.601562 kubelet[1772]: E1213 13:27:27.601542 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.601562 kubelet[1772]: W1213 13:27:27.601562 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.601656 kubelet[1772]: E1213 13:27:27.601576 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:27.607414 kubelet[1772]: E1213 13:27:27.607395 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.607414 kubelet[1772]: W1213 13:27:27.607412 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.607516 kubelet[1772]: E1213 13:27:27.607431 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:27.607671 kubelet[1772]: E1213 13:27:27.607659 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.607671 kubelet[1772]: W1213 13:27:27.607670 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.607727 kubelet[1772]: E1213 13:27:27.607681 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:27:27.610577 kubelet[1772]: E1213 13:27:27.610560 1772 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:27:27.610577 kubelet[1772]: W1213 13:27:27.610575 1772 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:27:27.610649 kubelet[1772]: E1213 13:27:27.610586 1772 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:27:27.803169 kubelet[1772]: E1213 13:27:27.802849 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:27.803630 containerd[1464]: time="2024-12-13T13:27:27.803581548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hz8hn,Uid:6460d141-485e-4722-bdeb-f495d785a82d,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:27.816391 kubelet[1772]: E1213 13:27:27.816282 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:27.816997 containerd[1464]: time="2024-12-13T13:27:27.816722188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbj2h,Uid:2bca7f70-03f0-4786-ba23-e18d19a34260,Namespace:kube-system,Attempt:0,}" Dec 13 13:27:28.315613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount559377660.mount: Deactivated successfully. Dec 13 13:27:28.320801 containerd[1464]: time="2024-12-13T13:27:28.320068228Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:27:28.321208 containerd[1464]: time="2024-12-13T13:27:28.321181908Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 13:27:28.322937 containerd[1464]: time="2024-12-13T13:27:28.322884668Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:27:28.324166 containerd[1464]: time="2024-12-13T13:27:28.324071428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:27:28.326504 containerd[1464]: time="2024-12-13T13:27:28.326383908Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:27:28.329024 containerd[1464]: time="2024-12-13T13:27:28.328996988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:27:28.329987 containerd[1464]: time="2024-12-13T13:27:28.329883268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 513.08276ms" Dec 13 13:27:28.332167 containerd[1464]: time="2024-12-13T13:27:28.332140308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.46568ms" Dec 13 13:27:28.428668 containerd[1464]: time="2024-12-13T13:27:28.428565188Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:28.428668 containerd[1464]: time="2024-12-13T13:27:28.428637508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:28.429186 containerd[1464]: time="2024-12-13T13:27:28.429125108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:28.430110 containerd[1464]: time="2024-12-13T13:27:28.429790748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:28.430110 containerd[1464]: time="2024-12-13T13:27:28.429711748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:28.430110 containerd[1464]: time="2024-12-13T13:27:28.429762708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:28.430110 containerd[1464]: time="2024-12-13T13:27:28.429778228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:28.430110 containerd[1464]: time="2024-12-13T13:27:28.429843868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:28.474434 kubelet[1772]: E1213 13:27:28.474393 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:28.525222 systemd[1]: Started cri-containerd-98cc1c0fce9752731438ed69ea31bdac31138600082cf79b092de259d4e157bd.scope - libcontainer container 98cc1c0fce9752731438ed69ea31bdac31138600082cf79b092de259d4e157bd. Dec 13 13:27:28.526386 systemd[1]: Started cri-containerd-bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b.scope - libcontainer container bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b. 
Dec 13 13:27:28.548335 containerd[1464]: time="2024-12-13T13:27:28.548274308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hz8hn,Uid:6460d141-485e-4722-bdeb-f495d785a82d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\"" Dec 13 13:27:28.549580 containerd[1464]: time="2024-12-13T13:27:28.549551708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dbj2h,Uid:2bca7f70-03f0-4786-ba23-e18d19a34260,Namespace:kube-system,Attempt:0,} returns sandbox id \"98cc1c0fce9752731438ed69ea31bdac31138600082cf79b092de259d4e157bd\"" Dec 13 13:27:28.550428 kubelet[1772]: E1213 13:27:28.550169 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:28.550650 kubelet[1772]: E1213 13:27:28.550628 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:28.552535 containerd[1464]: time="2024-12-13T13:27:28.552507308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:27:29.419780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135275320.mount: Deactivated successfully. Dec 13 13:27:29.474588 kubelet[1772]: E1213 13:27:29.474531 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:29.510127 containerd[1464]: time="2024-12-13T13:27:29.510070068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:29.510667 containerd[1464]: time="2024-12-13T13:27:29.510573268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603" Dec 13 13:27:29.511503 containerd[1464]: time="2024-12-13T13:27:29.511462628Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:29.513260 containerd[1464]: time="2024-12-13T13:27:29.513224748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:29.514314 containerd[1464]: time="2024-12-13T13:27:29.514280268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 961.7346ms" Dec 13 13:27:29.514357 containerd[1464]: time="2024-12-13T13:27:29.514313828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 13:27:29.515651 containerd[1464]: time="2024-12-13T13:27:29.515616468Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 13:27:29.516624 containerd[1464]: time="2024-12-13T13:27:29.516587988Z" level=info 
msg="CreateContainer within sandbox \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:27:29.529847 containerd[1464]: time="2024-12-13T13:27:29.529702948Z" level=info msg="CreateContainer within sandbox \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1\"" Dec 13 13:27:29.530383 containerd[1464]: time="2024-12-13T13:27:29.530322748Z" level=info msg="StartContainer for \"f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1\"" Dec 13 13:27:29.559255 systemd[1]: Started cri-containerd-f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1.scope - libcontainer container f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1. Dec 13 13:27:29.583511 containerd[1464]: time="2024-12-13T13:27:29.583383788Z" level=info msg="StartContainer for \"f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1\" returns successfully" Dec 13 13:27:29.593486 kubelet[1772]: E1213 13:27:29.593430 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:29.609537 kubelet[1772]: E1213 13:27:29.609502 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:29.621181 systemd[1]: cri-containerd-f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1.scope: Deactivated successfully. Dec 13 13:27:29.639658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1-rootfs.mount: Deactivated successfully. Dec 13 13:27:29.709865 containerd[1464]: time="2024-12-13T13:27:29.709521148Z" level=info msg="shim disconnected" id=f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1 namespace=k8s.io Dec 13 13:27:29.709865 containerd[1464]: time="2024-12-13T13:27:29.709579028Z" level=warning msg="cleaning up after shim disconnected" id=f80dce2be3990051ca0a78647fb3021e2e3a4c927c9920361ba61c13fce27df1 namespace=k8s.io Dec 13 13:27:29.709865 containerd[1464]: time="2024-12-13T13:27:29.709588788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:27:30.475279 kubelet[1772]: E1213 13:27:30.475242 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:30.520941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827803663.mount: Deactivated successfully. 
Dec 13 13:27:30.616654 kubelet[1772]: E1213 13:27:30.616609 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:30.709032 containerd[1464]: time="2024-12-13T13:27:30.708984468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:30.709991 containerd[1464]: time="2024-12-13T13:27:30.709926948Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Dec 13 13:27:30.715662 containerd[1464]: time="2024-12-13T13:27:30.715620588Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:30.718221 containerd[1464]: time="2024-12-13T13:27:30.718166268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:30.718977 containerd[1464]: time="2024-12-13T13:27:30.718864668Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.20321164s" Dec 13 13:27:30.718977 containerd[1464]: time="2024-12-13T13:27:30.718894828Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 13:27:30.720205 containerd[1464]: time="2024-12-13T13:27:30.720179268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:27:30.720942 containerd[1464]: time="2024-12-13T13:27:30.720916468Z" level=info msg="CreateContainer within sandbox \"98cc1c0fce9752731438ed69ea31bdac31138600082cf79b092de259d4e157bd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:27:30.733271 containerd[1464]: time="2024-12-13T13:27:30.733175788Z" level=info msg="CreateContainer within sandbox \"98cc1c0fce9752731438ed69ea31bdac31138600082cf79b092de259d4e157bd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"437b9b7f9c53280453f7b6137b9db16393f6542d54ac9c5d33a80cce41a8a168\"" Dec 13 13:27:30.733773 containerd[1464]: time="2024-12-13T13:27:30.733731468Z" level=info msg="StartContainer for \"437b9b7f9c53280453f7b6137b9db16393f6542d54ac9c5d33a80cce41a8a168\"" Dec 13 13:27:30.765220 systemd[1]: Started cri-containerd-437b9b7f9c53280453f7b6137b9db16393f6542d54ac9c5d33a80cce41a8a168.scope - libcontainer container 437b9b7f9c53280453f7b6137b9db16393f6542d54ac9c5d33a80cce41a8a168. 
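The kubelet warning that keeps recurring here ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") means the host resolv.conf lists more nameservers than kubelet will pass through to pods: kubelet caps the list at three, keeps the first three entries, and logs the rest as omitted. A rough sketch of that truncation over a resolv.conf-style file (an illustration, not kubelet's code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Pods normally get at most three nameservers; anything beyond that is
    // dropped, which is what the "Nameserver limits exceeded" warning reports.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("omitting %v\n", servers[maxNameservers:])
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }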
Dec 13 13:27:30.792739 containerd[1464]: time="2024-12-13T13:27:30.792694668Z" level=info msg="StartContainer for \"437b9b7f9c53280453f7b6137b9db16393f6542d54ac9c5d33a80cce41a8a168\" returns successfully" Dec 13 13:27:31.475520 kubelet[1772]: E1213 13:27:31.475473 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:31.595025 kubelet[1772]: E1213 13:27:31.594966 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:31.618513 kubelet[1772]: E1213 13:27:31.618455 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:31.631799 kubelet[1772]: I1213 13:27:31.631737 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dbj2h" podStartSLOduration=4.463562548 podStartE2EDuration="6.631720668s" podCreationTimestamp="2024-12-13 13:27:25 +0000 UTC" firstStartedPulling="2024-12-13 13:27:28.551520268 +0000 UTC m=+4.695791601" lastFinishedPulling="2024-12-13 13:27:30.719678388 +0000 UTC m=+6.863949721" observedRunningTime="2024-12-13 13:27:31.631473428 +0000 UTC m=+7.775744761" watchObservedRunningTime="2024-12-13 13:27:31.631720668 +0000 UTC m=+7.775992001" Dec 13 13:27:32.476612 kubelet[1772]: E1213 13:27:32.476561 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:32.582907 containerd[1464]: time="2024-12-13T13:27:32.582860108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.583802 containerd[1464]: time="2024-12-13T13:27:32.583641668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 13:27:32.584511 containerd[1464]: time="2024-12-13T13:27:32.584445908Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.586579 containerd[1464]: time="2024-12-13T13:27:32.586521028Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:32.587241 containerd[1464]: time="2024-12-13T13:27:32.587215948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.8670072s" Dec 13 13:27:32.587307 containerd[1464]: time="2024-12-13T13:27:32.587245828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 13:27:32.589148 containerd[1464]: time="2024-12-13T13:27:32.589109828Z" level=info msg="CreateContainer within sandbox 
\"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:27:32.599768 containerd[1464]: time="2024-12-13T13:27:32.599725188Z" level=info msg="CreateContainer within sandbox \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8\"" Dec 13 13:27:32.600248 containerd[1464]: time="2024-12-13T13:27:32.600217668Z" level=info msg="StartContainer for \"03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8\"" Dec 13 13:27:32.624122 kubelet[1772]: E1213 13:27:32.623753 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:32.633257 systemd[1]: Started cri-containerd-03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8.scope - libcontainer container 03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8. Dec 13 13:27:32.660378 containerd[1464]: time="2024-12-13T13:27:32.658775148Z" level=info msg="StartContainer for \"03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8\" returns successfully" Dec 13 13:27:33.116319 containerd[1464]: time="2024-12-13T13:27:33.116251828Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:33.118252 systemd[1]: cri-containerd-03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8.scope: Deactivated successfully. Dec 13 13:27:33.135273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8-rootfs.mount: Deactivated successfully. Dec 13 13:27:33.144863 kubelet[1772]: I1213 13:27:33.144703 1772 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:27:33.304430 containerd[1464]: time="2024-12-13T13:27:33.304349708Z" level=info msg="shim disconnected" id=03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8 namespace=k8s.io Dec 13 13:27:33.304430 containerd[1464]: time="2024-12-13T13:27:33.304405948Z" level=warning msg="cleaning up after shim disconnected" id=03a69d309f26d3f9929da85a04f5702886e78688ab7225e21d54eb3c387333f8 namespace=k8s.io Dec 13 13:27:33.304430 containerd[1464]: time="2024-12-13T13:27:33.304426828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:27:33.477450 kubelet[1772]: E1213 13:27:33.477376 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:33.599486 systemd[1]: Created slice kubepods-besteffort-pod6db58e4d_8f86_4eb5_876a_c966f0f897e6.slice - libcontainer container kubepods-besteffort-pod6db58e4d_8f86_4eb5_876a_c966f0f897e6.slice. 
Dec 13 13:27:33.601258 containerd[1464]: time="2024-12-13T13:27:33.601223148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:0,}" Dec 13 13:27:33.628341 kubelet[1772]: E1213 13:27:33.627833 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:33.629178 containerd[1464]: time="2024-12-13T13:27:33.629139988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:27:33.748444 containerd[1464]: time="2024-12-13T13:27:33.748327668Z" level=error msg="Failed to destroy network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:33.748780 containerd[1464]: time="2024-12-13T13:27:33.748743988Z" level=error msg="encountered an error cleaning up failed sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:33.748909 containerd[1464]: time="2024-12-13T13:27:33.748820508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:33.749220 kubelet[1772]: E1213 13:27:33.749162 1772 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:33.749288 kubelet[1772]: E1213 13:27:33.749245 1772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:33.749288 kubelet[1772]: E1213 13:27:33.749266 1772 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:33.749341 kubelet[1772]: E1213 13:27:33.749315 1772 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:33.750140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae-shm.mount: Deactivated successfully. Dec 13 13:27:34.478182 kubelet[1772]: E1213 13:27:34.478127 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:34.629591 kubelet[1772]: I1213 13:27:34.629549 1772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae" Dec 13 13:27:34.630523 containerd[1464]: time="2024-12-13T13:27:34.630486508Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\"" Dec 13 13:27:34.630829 containerd[1464]: time="2024-12-13T13:27:34.630646188Z" level=info msg="Ensure that sandbox d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae in task-service has been cleanup successfully" Dec 13 13:27:34.630829 containerd[1464]: time="2024-12-13T13:27:34.630802428Z" level=info msg="TearDown network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" successfully" Dec 13 13:27:34.630829 containerd[1464]: time="2024-12-13T13:27:34.630814628Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" returns successfully" Dec 13 13:27:34.631401 containerd[1464]: time="2024-12-13T13:27:34.631376308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:1,}" Dec 13 13:27:34.631994 systemd[1]: run-netns-cni\x2def37cb5c\x2d75a7\x2d9860\x2dafc2\x2d3d24c7ed1728.mount: Deactivated successfully. Dec 13 13:27:34.997943 containerd[1464]: time="2024-12-13T13:27:34.997894308Z" level=error msg="Failed to destroy network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:34.999498 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b-shm.mount: Deactivated successfully. 
Dec 13 13:27:34.999734 containerd[1464]: time="2024-12-13T13:27:34.999566628Z" level=error msg="encountered an error cleaning up failed sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:34.999734 containerd[1464]: time="2024-12-13T13:27:34.999627428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.000526 kubelet[1772]: E1213 13:27:34.999819 1772 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.000526 kubelet[1772]: E1213 13:27:34.999872 1772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:35.000526 kubelet[1772]: E1213 13:27:34.999891 1772 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:35.000630 kubelet[1772]: E1213 13:27:34.999929 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:35.478557 kubelet[1772]: E1213 13:27:35.478492 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:35.633233 kubelet[1772]: I1213 13:27:35.632506 1772 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b" Dec 13 13:27:35.633369 containerd[1464]: time="2024-12-13T13:27:35.633134348Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\"" Dec 13 13:27:35.633369 containerd[1464]: time="2024-12-13T13:27:35.633341308Z" level=info msg="Ensure that sandbox 23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b in task-service has been cleanup successfully" Dec 13 13:27:35.633669 containerd[1464]: time="2024-12-13T13:27:35.633545948Z" level=info msg="TearDown network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" successfully" Dec 13 13:27:35.633669 containerd[1464]: time="2024-12-13T13:27:35.633560908Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" returns successfully" Dec 13 13:27:35.634039 containerd[1464]: time="2024-12-13T13:27:35.633882188Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\"" Dec 13 13:27:35.634039 containerd[1464]: time="2024-12-13T13:27:35.633962788Z" level=info msg="TearDown network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" successfully" Dec 13 13:27:35.634039 containerd[1464]: time="2024-12-13T13:27:35.633973348Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" returns successfully" Dec 13 13:27:35.634685 containerd[1464]: time="2024-12-13T13:27:35.634369068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:2,}" Dec 13 13:27:35.635294 systemd[1]: run-netns-cni\x2d77e0b69e\x2df30c\x2d0000\x2df9f0\x2d50adbb3cbd9a.mount: Deactivated successfully. Dec 13 13:27:35.691916 containerd[1464]: time="2024-12-13T13:27:35.691867588Z" level=error msg="Failed to destroy network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.693326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3-shm.mount: Deactivated successfully. 
Dec 13 13:27:35.694163 containerd[1464]: time="2024-12-13T13:27:35.694126708Z" level=error msg="encountered an error cleaning up failed sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.694212 containerd[1464]: time="2024-12-13T13:27:35.694196508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.694432 kubelet[1772]: E1213 13:27:35.694397 1772 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:35.694548 kubelet[1772]: E1213 13:27:35.694464 1772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:35.694548 kubelet[1772]: E1213 13:27:35.694485 1772 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:35.694548 kubelet[1772]: E1213 13:27:35.694531 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:36.479739 kubelet[1772]: E1213 13:27:36.479697 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:36.564603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201319278.mount: Deactivated successfully. 
Dec 13 13:27:36.636018 kubelet[1772]: I1213 13:27:36.635992 1772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3" Dec 13 13:27:36.636598 containerd[1464]: time="2024-12-13T13:27:36.636563228Z" level=info msg="StopPodSandbox for \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\"" Dec 13 13:27:36.637152 containerd[1464]: time="2024-12-13T13:27:36.637009428Z" level=info msg="Ensure that sandbox 799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3 in task-service has been cleanup successfully" Dec 13 13:27:36.637346 containerd[1464]: time="2024-12-13T13:27:36.637256868Z" level=info msg="TearDown network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\" successfully" Dec 13 13:27:36.637346 containerd[1464]: time="2024-12-13T13:27:36.637276428Z" level=info msg="StopPodSandbox for \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\" returns successfully" Dec 13 13:27:36.637573 containerd[1464]: time="2024-12-13T13:27:36.637539028Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\"" Dec 13 13:27:36.637643 containerd[1464]: time="2024-12-13T13:27:36.637628148Z" level=info msg="TearDown network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" successfully" Dec 13 13:27:36.637667 containerd[1464]: time="2024-12-13T13:27:36.637643348Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" returns successfully" Dec 13 13:27:36.638386 containerd[1464]: time="2024-12-13T13:27:36.638356508Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\"" Dec 13 13:27:36.638478 containerd[1464]: time="2024-12-13T13:27:36.638438508Z" level=info msg="TearDown network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" successfully" Dec 13 13:27:36.638478 containerd[1464]: time="2024-12-13T13:27:36.638462068Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" returns successfully" Dec 13 13:27:36.638743 systemd[1]: run-netns-cni\x2d58107365\x2d2ad8\x2d56e1\x2dfbb7\x2d5f3e322810ca.mount: Deactivated successfully. 
Dec 13 13:27:36.639059 containerd[1464]: time="2024-12-13T13:27:36.638949588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:3,}" Dec 13 13:27:36.735903 containerd[1464]: time="2024-12-13T13:27:36.735789228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:36.737160 containerd[1464]: time="2024-12-13T13:27:36.737095388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 13:27:36.739725 containerd[1464]: time="2024-12-13T13:27:36.739671228Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:36.742370 containerd[1464]: time="2024-12-13T13:27:36.742330148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:36.742988 containerd[1464]: time="2024-12-13T13:27:36.742947508Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.113764s" Dec 13 13:27:36.742988 containerd[1464]: time="2024-12-13T13:27:36.742979748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 13:27:36.752392 containerd[1464]: time="2024-12-13T13:27:36.752156948Z" level=info msg="CreateContainer within sandbox \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:27:36.765655 containerd[1464]: time="2024-12-13T13:27:36.765603548Z" level=info msg="CreateContainer within sandbox \"bdf58bfc18a6b7b33b98ce11b2043e23a0b7bae39536abca379c2027362ab76b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"baec1fd3b7c4efbd5fbd823f2231ba2915df75f174a7e4fee379efe344b715ba\"" Dec 13 13:27:36.767043 containerd[1464]: time="2024-12-13T13:27:36.766247228Z" level=info msg="StartContainer for \"baec1fd3b7c4efbd5fbd823f2231ba2915df75f174a7e4fee379efe344b715ba\"" Dec 13 13:27:36.792582 containerd[1464]: time="2024-12-13T13:27:36.792532948Z" level=error msg="Failed to destroy network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:36.792879 containerd[1464]: time="2024-12-13T13:27:36.792851188Z" level=error msg="encountered an error cleaning up failed sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:36.792933 containerd[1464]: 
time="2024-12-13T13:27:36.792915068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:36.793622 kubelet[1772]: E1213 13:27:36.793128 1772 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:27:36.793225 systemd[1]: Started cri-containerd-baec1fd3b7c4efbd5fbd823f2231ba2915df75f174a7e4fee379efe344b715ba.scope - libcontainer container baec1fd3b7c4efbd5fbd823f2231ba2915df75f174a7e4fee379efe344b715ba. Dec 13 13:27:36.793945 kubelet[1772]: E1213 13:27:36.793799 1772 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:36.793945 kubelet[1772]: E1213 13:27:36.793832 1772 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xtgnp" Dec 13 13:27:36.793945 kubelet[1772]: E1213 13:27:36.793884 1772 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xtgnp_calico-system(6db58e4d-8f86-4eb5-876a-c966f0f897e6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xtgnp" podUID="6db58e4d-8f86-4eb5-876a-c966f0f897e6" Dec 13 13:27:36.827950 containerd[1464]: time="2024-12-13T13:27:36.827907668Z" level=info msg="StartContainer for \"baec1fd3b7c4efbd5fbd823f2231ba2915df75f174a7e4fee379efe344b715ba\" returns successfully" Dec 13 13:27:36.975075 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:27:36.975161 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 13:27:37.265344 kubelet[1772]: I1213 13:27:37.265300 1772 topology_manager.go:215] "Topology Admit Handler" podUID="a12b9a0d-9155-4fa4-b369-8eeade2e9035" podNamespace="default" podName="nginx-deployment-85f456d6dd-lqtnc" Dec 13 13:27:37.277699 systemd[1]: Created slice kubepods-besteffort-poda12b9a0d_9155_4fa4_b369_8eeade2e9035.slice - libcontainer container kubepods-besteffort-poda12b9a0d_9155_4fa4_b369_8eeade2e9035.slice. Dec 13 13:27:37.351800 kubelet[1772]: I1213 13:27:37.351757 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hhm\" (UniqueName: \"kubernetes.io/projected/a12b9a0d-9155-4fa4-b369-8eeade2e9035-kube-api-access-49hhm\") pod \"nginx-deployment-85f456d6dd-lqtnc\" (UID: \"a12b9a0d-9155-4fa4-b369-8eeade2e9035\") " pod="default/nginx-deployment-85f456d6dd-lqtnc" Dec 13 13:27:37.480580 kubelet[1772]: E1213 13:27:37.480538 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:37.581070 containerd[1464]: time="2024-12-13T13:27:37.580633428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-lqtnc,Uid:a12b9a0d-9155-4fa4-b369-8eeade2e9035,Namespace:default,Attempt:0,}" Dec 13 13:27:37.641679 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28-shm.mount: Deactivated successfully. Dec 13 13:27:37.644585 kubelet[1772]: E1213 13:27:37.644088 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:37.648775 kubelet[1772]: I1213 13:27:37.648750 1772 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28" Dec 13 13:27:37.649274 containerd[1464]: time="2024-12-13T13:27:37.649242468Z" level=info msg="StopPodSandbox for \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\"" Dec 13 13:27:37.649572 containerd[1464]: time="2024-12-13T13:27:37.649400308Z" level=info msg="Ensure that sandbox 9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28 in task-service has been cleanup successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.649992388Z" level=info msg="TearDown network for sandbox \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\" successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650015388Z" level=info msg="StopPodSandbox for \"9b6c0ae3639281c52aba91ac25f6f3605d9355ee387e509ff0c605fa0081ec28\" returns successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650311988Z" level=info msg="StopPodSandbox for \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\"" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650397548Z" level=info msg="TearDown network for sandbox \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\" successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650410148Z" level=info msg="StopPodSandbox for \"799ce9e975edfc551013b491cb3242ddd519d82ea15e902998da165d8c3b02d3\" returns successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650728708Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\"" Dec 13 13:27:37.651575 
containerd[1464]: time="2024-12-13T13:27:37.650804948Z" level=info msg="TearDown network for sandbox \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.650813828Z" level=info msg="StopPodSandbox for \"23d5ad943eeb44b640c89f2f4ab8c313a31b16da83efabc7f4f8ad312d6f593b\" returns successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.651060948Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\"" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.651127748Z" level=info msg="TearDown network for sandbox \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.651135948Z" level=info msg="StopPodSandbox for \"d121ff8dc9be5b6a4709169d66a9b6d247e5c740153385f0cd8c3d89d66c7eae\" returns successfully" Dec 13 13:27:37.651575 containerd[1464]: time="2024-12-13T13:27:37.651512868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:4,}" Dec 13 13:27:37.651008 systemd[1]: run-netns-cni\x2dd07d822a\x2d09a5\x2d2fea\x2dc03e\x2d101043dadb8b.mount: Deactivated successfully. Dec 13 13:27:37.778846 systemd-networkd[1394]: cali99e65d2c042: Link UP Dec 13 13:27:37.779316 systemd-networkd[1394]: cali99e65d2c042: Gained carrier Dec 13 13:27:37.788249 kubelet[1772]: I1213 13:27:37.788190 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hz8hn" podStartSLOduration=4.595333108 podStartE2EDuration="12.788154428s" podCreationTimestamp="2024-12-13 13:27:25 +0000 UTC" firstStartedPulling="2024-12-13 13:27:28.551308628 +0000 UTC m=+4.695579961" lastFinishedPulling="2024-12-13 13:27:36.744129948 +0000 UTC m=+12.888401281" observedRunningTime="2024-12-13 13:27:37.661925308 +0000 UTC m=+13.806196641" watchObservedRunningTime="2024-12-13 13:27:37.788154428 +0000 UTC m=+13.932425761" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.620 [INFO][2452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.637 [INFO][2452] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0 nginx-deployment-85f456d6dd- default a12b9a0d-9155-4fa4-b369-8eeade2e9035 984 0 2024-12-13 13:27:37 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.127 nginx-deployment-85f456d6dd-lqtnc eth0 default [] [] [kns.default ksa.default.default] cali99e65d2c042 [] []}} ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.637 [INFO][2452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.720 [INFO][2468] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" HandleID="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Workload="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.736 [INFO][2468] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" HandleID="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Workload="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000529e30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.127", "pod":"nginx-deployment-85f456d6dd-lqtnc", "timestamp":"2024-12-13 13:27:37.720412868 +0000 UTC"}, Hostname:"10.0.0.127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.736 [INFO][2468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.736 [INFO][2468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.736 [INFO][2468] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.127' Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.739 [INFO][2468] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.743 [INFO][2468] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.748 [INFO][2468] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.750 [INFO][2468] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.752 [INFO][2468] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.754 [INFO][2468] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.757 [INFO][2468] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.762 [INFO][2468] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.766 [INFO][2468] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.1/26] block=192.168.54.0/26 handle="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.767 [INFO][2468] ipam/ipam.go 847: 
Auto-assigned 1 out of 1 IPv4s: [192.168.54.1/26] handle="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" host="10.0.0.127" Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.767 [INFO][2468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:37.790280 containerd[1464]: 2024-12-13 13:27:37.767 [INFO][2468] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.1/26] IPv6=[] ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" HandleID="k8s-pod-network.653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Workload="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.768 [INFO][2452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"a12b9a0d-9155-4fa4-b369-8eeade2e9035", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-lqtnc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali99e65d2c042", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.768 [INFO][2452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.1/32] ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.768 [INFO][2452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali99e65d2c042 ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.779 [INFO][2452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.779 [INFO][2452] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"a12b9a0d-9155-4fa4-b369-8eeade2e9035", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b", Pod:"nginx-deployment-85f456d6dd-lqtnc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali99e65d2c042", MAC:"56:53:5c:09:fa:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:37.791101 containerd[1464]: 2024-12-13 13:27:37.788 [INFO][2452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b" Namespace="default" Pod="nginx-deployment-85f456d6dd-lqtnc" WorkloadEndpoint="10.0.0.127-k8s-nginx--deployment--85f456d6dd--lqtnc-eth0" Dec 13 13:27:37.805321 systemd-networkd[1394]: cali58da364586e: Link UP Dec 13 13:27:37.805880 systemd-networkd[1394]: cali58da364586e: Gained carrier Dec 13 13:27:37.810560 containerd[1464]: time="2024-12-13T13:27:37.810474828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:37.810560 containerd[1464]: time="2024-12-13T13:27:37.810523228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:37.810560 containerd[1464]: time="2024-12-13T13:27:37.810535068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:37.810715 containerd[1464]: time="2024-12-13T13:27:37.810602948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.698 [INFO][2489] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.715 [INFO][2489] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.127-k8s-csi--node--driver--xtgnp-eth0 csi-node-driver- calico-system 6db58e4d-8f86-4eb5-876a-c966f0f897e6 733 0 2024-12-13 13:27:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.127 csi-node-driver-xtgnp eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali58da364586e [] []}} ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.715 [INFO][2489] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.741 [INFO][2513] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" HandleID="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Workload="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.756 [INFO][2513] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" HandleID="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Workload="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d2e20), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.127", "pod":"csi-node-driver-xtgnp", "timestamp":"2024-12-13 13:27:37.741938868 +0000 UTC"}, Hostname:"10.0.0.127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.756 [INFO][2513] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.767 [INFO][2513] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
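Two CNI invocations ([2468] for the nginx pod and [2513] for the csi-node-driver pod) each log "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" above, i.e. concurrent pod setups are serialized before they touch the address block. The sketch below only illustrates that serialization pattern with an in-process mutex and hypothetical names; Calico's actual lock is host-wide and shared across plugin processes, not a Go mutex.

```go
// Illustrative only: a toy serialization of concurrent IP assignments,
// loosely mirroring the "host-wide IPAM lock" messages above. Calico's
// real lock is node-wide (shared across CNI plugin invocations); the
// types and names here are hypothetical.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type toyIPAM struct {
	mu   sync.Mutex // stands in for the host-wide IPAM lock
	next netip.Addr // next free address in the block
}

// assign hands out the next address; holding the lock prevents two
// concurrent CNI ADDs from claiming the same IP.
func (a *toyIPAM) assign(pod string) netip.Addr {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip := a.next
	a.next = a.next.Next()
	fmt.Printf("assigned %s to %s\n", ip, pod)
	return ip
}

func main() {
	ipam := &toyIPAM{next: netip.MustParseAddr("192.168.54.1")}
	var wg sync.WaitGroup
	for _, pod := range []string{"nginx-deployment-85f456d6dd-lqtnc", "csi-node-driver-xtgnp"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); ipam.assign(p) }(pod)
	}
	wg.Wait()
}
```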
Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.767 [INFO][2513] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.127' Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.769 [INFO][2513] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.773 [INFO][2513] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.780 [INFO][2513] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.782 [INFO][2513] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.789 [INFO][2513] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.789 [INFO][2513] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.791 [INFO][2513] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.796 [INFO][2513] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.802 [INFO][2513] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.2/26] block=192.168.54.0/26 handle="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.802 [INFO][2513] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.2/26] handle="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" host="10.0.0.127" Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.802 [INFO][2513] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
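The IPAM lines above assign single addresses out of the node's affinity block 192.168.54.0/26. The block size is plain CIDR arithmetic, sketched below with the standard library; the "64 addresses" figure is not a claim about Calico internals, just 2^(32-26).

```go
// A minimal sketch: how many pod addresses a /26 IPAM block such as
// 192.168.54.0/26 (seen in the messages above) can hold.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.54.0/26")
	size := 1 << (32 - block.Bits()) // 2^(32-26) = 64 addresses
	fmt.Printf("block %s: %d addresses, first %s\n", block, size, block.Addr())
	// The endpoints in this log received 192.168.54.1, .2, .3 and .4 from
	// this block, each recorded as a /32 IPNetwork on its endpoint.
}
```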
Dec 13 13:27:37.817419 containerd[1464]: 2024-12-13 13:27:37.802 [INFO][2513] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.2/26] IPv6=[] ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" HandleID="k8s-pod-network.5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Workload="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.804 [INFO][2489] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-csi--node--driver--xtgnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6db58e4d-8f86-4eb5-876a-c966f0f897e6", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"", Pod:"csi-node-driver-xtgnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58da364586e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.804 [INFO][2489] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.2/32] ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.804 [INFO][2489] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58da364586e ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.805 [INFO][2489] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.805 [INFO][2489] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" 
WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-csi--node--driver--xtgnp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6db58e4d-8f86-4eb5-876a-c966f0f897e6", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c", Pod:"csi-node-driver-xtgnp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58da364586e", MAC:"f6:a0:5c:11:cf:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:37.817907 containerd[1464]: 2024-12-13 13:27:37.816 [INFO][2489] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c" Namespace="calico-system" Pod="csi-node-driver-xtgnp" WorkloadEndpoint="10.0.0.127-k8s-csi--node--driver--xtgnp-eth0" Dec 13 13:27:37.827236 systemd[1]: Started cri-containerd-653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b.scope - libcontainer container 653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b. Dec 13 13:27:37.838112 containerd[1464]: time="2024-12-13T13:27:37.835978188Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:37.838112 containerd[1464]: time="2024-12-13T13:27:37.836401148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:37.838112 containerd[1464]: time="2024-12-13T13:27:37.836414388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:37.838112 containerd[1464]: time="2024-12-13T13:27:37.836509988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:37.839685 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:37.859633 systemd[1]: Started cri-containerd-5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c.scope - libcontainer container 5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c. 
Dec 13 13:27:37.869939 containerd[1464]: time="2024-12-13T13:27:37.869880468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-lqtnc,Uid:a12b9a0d-9155-4fa4-b369-8eeade2e9035,Namespace:default,Attempt:0,} returns sandbox id \"653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b\"" Dec 13 13:27:37.870298 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:37.872606 containerd[1464]: time="2024-12-13T13:27:37.872521268Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:27:37.880752 containerd[1464]: time="2024-12-13T13:27:37.880708348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xtgnp,Uid:6db58e4d-8f86-4eb5-876a-c966f0f897e6,Namespace:calico-system,Attempt:4,} returns sandbox id \"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c\"" Dec 13 13:27:38.423430 kernel: bpftool[2751]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:27:38.480743 kubelet[1772]: E1213 13:27:38.480669 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:38.573091 systemd-networkd[1394]: vxlan.calico: Link UP Dec 13 13:27:38.573101 systemd-networkd[1394]: vxlan.calico: Gained carrier Dec 13 13:27:38.656321 kubelet[1772]: E1213 13:27:38.656288 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:27:38.898211 systemd-networkd[1394]: cali58da364586e: Gained IPv6LL Dec 13 13:27:39.480812 kubelet[1772]: E1213 13:27:39.480770 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:39.603173 systemd-networkd[1394]: cali99e65d2c042: Gained IPv6LL Dec 13 13:27:39.942349 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount231462003.mount: Deactivated successfully. 
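The kubelet "Nameserver limits exceeded" warning above means the node's resolv.conf listed more nameservers than the (glibc-derived) limit of three, so the extras were dropped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 were applied. The sketch below applies the same cap to a hypothetical resolv.conf; it is illustrative only and is not kubelet's implementation.

```go
// Illustrative only: cap a resolv.conf nameserver list at three entries,
// mirroring the behaviour implied by the kubelet warning above.
package main

import (
	"fmt"
	"strings"
)

const maxNameservers = 3

func keptNameservers(resolvConf string) []string {
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with four entries; only the first three
	// survive, matching the "applied nameserver line" in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(keptNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```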
Dec 13 13:27:40.481540 kubelet[1772]: E1213 13:27:40.481485 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:40.562727 systemd-networkd[1394]: vxlan.calico: Gained IPv6LL Dec 13 13:27:40.850459 containerd[1464]: time="2024-12-13T13:27:40.850320828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:40.851177 containerd[1464]: time="2024-12-13T13:27:40.850887868Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 13:27:40.864025 containerd[1464]: time="2024-12-13T13:27:40.863534588Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:40.866715 containerd[1464]: time="2024-12-13T13:27:40.866372908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:40.868083 containerd[1464]: time="2024-12-13T13:27:40.868031708Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 2.9954694s" Dec 13 13:27:40.868083 containerd[1464]: time="2024-12-13T13:27:40.868080748Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 13:27:40.869506 containerd[1464]: time="2024-12-13T13:27:40.869434708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:27:40.871182 containerd[1464]: time="2024-12-13T13:27:40.870414388Z" level=info msg="CreateContainer within sandbox \"653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:27:40.881336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3641501277.mount: Deactivated successfully. Dec 13 13:27:40.885607 containerd[1464]: time="2024-12-13T13:27:40.885570428Z" level=info msg="CreateContainer within sandbox \"653af73e646812e09997034aad3b9e112b03fd4520725bff9eafbafdf036c19b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6778bcec22b3509d17a07ce90db11b21e63a0310f97c2c07e6890eed4866593b\"" Dec 13 13:27:40.886367 containerd[1464]: time="2024-12-13T13:27:40.886272268Z" level=info msg="StartContainer for \"6778bcec22b3509d17a07ce90db11b21e63a0310f97c2c07e6890eed4866593b\"" Dec 13 13:27:40.973255 systemd[1]: Started cri-containerd-6778bcec22b3509d17a07ce90db11b21e63a0310f97c2c07e6890eed4866593b.scope - libcontainer container 6778bcec22b3509d17a07ce90db11b21e63a0310f97c2c07e6890eed4866593b. 
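The pull record above reports 67,696,939 bytes read for ghcr.io/flatcar/nginx:latest over roughly 2.995 s, which works out to about 21.6 MiB/s. The sketch below is just that arithmetic on the logged values; "bytes read" is the registry transfer reported by containerd, not the unpacked size on disk.

```go
// Rough pull throughput computed from the logged values above; plain
// arithmetic, no claims about containerd internals.
package main

import "fmt"

func main() {
	const bytesRead = 67696939 // "bytes read" from the stop-pulling record
	const seconds = 2.9954694  // pull duration from the Pulled-image record
	mib := float64(bytesRead) / (1024 * 1024)
	fmt.Printf("%.1f MiB in %.2fs ≈ %.1f MiB/s\n", mib, seconds, mib/seconds)
}
```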
Dec 13 13:27:40.996076 containerd[1464]: time="2024-12-13T13:27:40.995968788Z" level=info msg="StartContainer for \"6778bcec22b3509d17a07ce90db11b21e63a0310f97c2c07e6890eed4866593b\" returns successfully" Dec 13 13:27:41.482557 kubelet[1772]: E1213 13:27:41.482515 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:41.670567 kubelet[1772]: I1213 13:27:41.670451 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-lqtnc" podStartSLOduration=1.673800988 podStartE2EDuration="4.670425788s" podCreationTimestamp="2024-12-13 13:27:37 +0000 UTC" firstStartedPulling="2024-12-13 13:27:37.872111868 +0000 UTC m=+14.016383161" lastFinishedPulling="2024-12-13 13:27:40.868736628 +0000 UTC m=+17.013007961" observedRunningTime="2024-12-13 13:27:41.670282068 +0000 UTC m=+17.814553361" watchObservedRunningTime="2024-12-13 13:27:41.670425788 +0000 UTC m=+17.814697121" Dec 13 13:27:42.090017 containerd[1464]: time="2024-12-13T13:27:42.089969228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.090875 containerd[1464]: time="2024-12-13T13:27:42.090722108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 13:27:42.091547 containerd[1464]: time="2024-12-13T13:27:42.091516388Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.099379 containerd[1464]: time="2024-12-13T13:27:42.099343988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.100877 containerd[1464]: time="2024-12-13T13:27:42.100089188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.23062332s" Dec 13 13:27:42.100877 containerd[1464]: time="2024-12-13T13:27:42.100117708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 13:27:42.112667 containerd[1464]: time="2024-12-13T13:27:42.112633828Z" level=info msg="CreateContainer within sandbox \"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:27:42.132515 containerd[1464]: time="2024-12-13T13:27:42.132471588Z" level=info msg="CreateContainer within sandbox \"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"40251b22a22a3dd7d6f4e863269b374232dc2172145add46c95b8977bfdedbc7\"" Dec 13 13:27:42.134479 containerd[1464]: time="2024-12-13T13:27:42.133111788Z" level=info msg="StartContainer for \"40251b22a22a3dd7d6f4e863269b374232dc2172145add46c95b8977bfdedbc7\"" Dec 13 13:27:42.165986 systemd[1]: Started cri-containerd-40251b22a22a3dd7d6f4e863269b374232dc2172145add46c95b8977bfdedbc7.scope - libcontainer 
container 40251b22a22a3dd7d6f4e863269b374232dc2172145add46c95b8977bfdedbc7. Dec 13 13:27:42.196165 containerd[1464]: time="2024-12-13T13:27:42.196123468Z" level=info msg="StartContainer for \"40251b22a22a3dd7d6f4e863269b374232dc2172145add46c95b8977bfdedbc7\" returns successfully" Dec 13 13:27:42.197679 containerd[1464]: time="2024-12-13T13:27:42.197605508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:27:42.483451 kubelet[1772]: E1213 13:27:42.483373 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:42.988202 containerd[1464]: time="2024-12-13T13:27:42.988144748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.988827 containerd[1464]: time="2024-12-13T13:27:42.988781988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 13:27:42.989621 containerd[1464]: time="2024-12-13T13:27:42.989572348Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.991445 containerd[1464]: time="2024-12-13T13:27:42.991410908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:42.992301 containerd[1464]: time="2024-12-13T13:27:42.992017428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 794.37632ms" Dec 13 13:27:42.992301 containerd[1464]: time="2024-12-13T13:27:42.992063988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 13:27:42.996209 containerd[1464]: time="2024-12-13T13:27:42.996160108Z" level=info msg="CreateContainer within sandbox \"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:27:43.011245 containerd[1464]: time="2024-12-13T13:27:43.011170388Z" level=info msg="CreateContainer within sandbox \"5913778f049f46b63b60486fa0b347b35ef02c165209be549c64a82d18ff396c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3c17b47fad5d311c9e95bad658c668c9f73fdb429efaa3424a4d7bbee9841c8d\"" Dec 13 13:27:43.011961 containerd[1464]: time="2024-12-13T13:27:43.011915628Z" level=info msg="StartContainer for \"3c17b47fad5d311c9e95bad658c668c9f73fdb429efaa3424a4d7bbee9841c8d\"" Dec 13 13:27:43.033302 systemd[1]: Started cri-containerd-3c17b47fad5d311c9e95bad658c668c9f73fdb429efaa3424a4d7bbee9841c8d.scope - libcontainer container 3c17b47fad5d311c9e95bad658c668c9f73fdb429efaa3424a4d7bbee9841c8d. 
Dec 13 13:27:43.065151 containerd[1464]: time="2024-12-13T13:27:43.065060868Z" level=info msg="StartContainer for \"3c17b47fad5d311c9e95bad658c668c9f73fdb429efaa3424a4d7bbee9841c8d\" returns successfully" Dec 13 13:27:43.484509 kubelet[1772]: E1213 13:27:43.484442 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:43.610907 kubelet[1772]: I1213 13:27:43.610863 1772 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:27:43.610907 kubelet[1772]: I1213 13:27:43.610900 1772 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:27:44.484782 kubelet[1772]: E1213 13:27:44.484741 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:44.711863 kubelet[1772]: I1213 13:27:44.711810 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xtgnp" podStartSLOduration=14.601229988 podStartE2EDuration="19.711792148s" podCreationTimestamp="2024-12-13 13:27:25 +0000 UTC" firstStartedPulling="2024-12-13 13:27:37.882210508 +0000 UTC m=+14.026481801" lastFinishedPulling="2024-12-13 13:27:42.992772628 +0000 UTC m=+19.137043961" observedRunningTime="2024-12-13 13:27:43.686806468 +0000 UTC m=+19.831077801" watchObservedRunningTime="2024-12-13 13:27:44.711792148 +0000 UTC m=+20.856063481" Dec 13 13:27:44.712063 kubelet[1772]: I1213 13:27:44.712012 1772 topology_manager.go:215] "Topology Admit Handler" podUID="f02c12d4-65a8-432c-9251-e03b8f840cf9" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 13:27:44.717587 systemd[1]: Created slice kubepods-besteffort-podf02c12d4_65a8_432c_9251_e03b8f840cf9.slice - libcontainer container kubepods-besteffort-podf02c12d4_65a8_432c_9251_e03b8f840cf9.slice. 
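The pod_startup_latency_tracker entry for csi-node-driver-xtgnp above reports two durations. Its own timestamps are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), to within a microsecond. The sketch below recomputes both from the logged timestamps; treat the relationship as a reading of this log rather than a definition taken from kubelet documentation.

```go
// Recompute the two durations in the csi-node-driver-xtgnp
// pod_startup_latency_tracker entry above from its own timestamps.
package main

import (
	"fmt"
	"time"
)

// layout matches the "2024-12-13 13:27:42.992772628 +0000 UTC" form used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 13:27:25 +0000 UTC")
	firstPull := mustParse("2024-12-13 13:27:37.882210508 +0000 UTC")
	lastPull := mustParse("2024-12-13 13:27:42.992772628 +0000 UTC")
	watchObserved := mustParse("2024-12-13 13:27:44.711792148 +0000 UTC")

	e2e := watchObserved.Sub(created)    // logged as podStartE2EDuration=19.711792148s
	slo := e2e - lastPull.Sub(firstPull) // logged as podStartSLOduration≈14.601229988s
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```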
Dec 13 13:27:44.791325 kubelet[1772]: I1213 13:27:44.791197 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/f02c12d4-65a8-432c-9251-e03b8f840cf9-data\") pod \"nfs-server-provisioner-0\" (UID: \"f02c12d4-65a8-432c-9251-e03b8f840cf9\") " pod="default/nfs-server-provisioner-0" Dec 13 13:27:44.791325 kubelet[1772]: I1213 13:27:44.791236 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srg9g\" (UniqueName: \"kubernetes.io/projected/f02c12d4-65a8-432c-9251-e03b8f840cf9-kube-api-access-srg9g\") pod \"nfs-server-provisioner-0\" (UID: \"f02c12d4-65a8-432c-9251-e03b8f840cf9\") " pod="default/nfs-server-provisioner-0" Dec 13 13:27:45.021079 containerd[1464]: time="2024-12-13T13:27:45.021002348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f02c12d4-65a8-432c-9251-e03b8f840cf9,Namespace:default,Attempt:0,}" Dec 13 13:27:45.149374 systemd-networkd[1394]: cali60e51b789ff: Link UP Dec 13 13:27:45.150145 systemd-networkd[1394]: cali60e51b789ff: Gained carrier Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.068 [INFO][3027] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.127-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default f02c12d4-65a8-432c-9251-e03b8f840cf9 1069 0 2024-12-13 13:27:44 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.127 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.068 [INFO][3027] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.098 [INFO][3040] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" HandleID="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Workload="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.108 [INFO][3040] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" HandleID="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" 
Workload="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000306ad0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.127", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 13:27:45.098574988 +0000 UTC"}, Hostname:"10.0.0.127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.109 [INFO][3040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.109 [INFO][3040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.109 [INFO][3040] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.127' Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.110 [INFO][3040] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.114 [INFO][3040] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.118 [INFO][3040] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.120 [INFO][3040] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.122 [INFO][3040] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.122 [INFO][3040] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.123 [INFO][3040] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.127 [INFO][3040] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.132 [INFO][3040] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.3/26] block=192.168.54.0/26 handle="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.132 [INFO][3040] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.3/26] handle="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" host="10.0.0.127" Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.132 [INFO][3040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:27:45.160205 containerd[1464]: 2024-12-13 13:27:45.132 [INFO][3040] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.3/26] IPv6=[] ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" HandleID="k8s-pod-network.47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Workload="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160714 containerd[1464]: 2024-12-13 13:27:45.134 [INFO][3027] cni-plugin/k8s.go 386: Populated endpoint ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f02c12d4-65a8-432c-9251-e03b8f840cf9", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:45.160714 containerd[1464]: 2024-12-13 13:27:45.135 [INFO][3027] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.3/32] ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160714 containerd[1464]: 2024-12-13 13:27:45.135 [INFO][3027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160714 containerd[1464]: 2024-12-13 13:27:45.149 [INFO][3027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.160849 containerd[1464]: 2024-12-13 13:27:45.150 [INFO][3027] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"f02c12d4-65a8-432c-9251-e03b8f840cf9", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"b6:68:ac:71:97:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:45.160849 containerd[1464]: 2024-12-13 13:27:45.157 [INFO][3027] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.127-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:27:45.185623 containerd[1464]: time="2024-12-13T13:27:45.182823508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:45.185623 containerd[1464]: time="2024-12-13T13:27:45.182875268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:45.185623 containerd[1464]: time="2024-12-13T13:27:45.182886228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:45.185623 containerd[1464]: time="2024-12-13T13:27:45.182951708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:45.204197 systemd[1]: Started cri-containerd-47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d.scope - libcontainer container 47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d. 
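Because the nfs-server-provisioner endpoint above is printed as a Go struct dump, its Ports appear as hex literals (Port:0x801 and so on). Decoding them recovers the usual NFS service ports declared by the chart, each with a matching UDP entry; the sketch below is plain hex-to-decimal conversion, nothing Calico-specific.

```go
// Decode the hex port values from the nfs-server-provisioner endpoint
// dump above into their familiar decimal NFS ports.
package main

import "fmt"

func main() {
	ports := []struct {
		name string
		hex  uint16
	}{
		{"nfs", 0x801},       // 2049
		{"nlockmgr", 0x8023}, // 32803
		{"mountd", 0x4e50},   // 20048
		{"rquotad", 0x36b},   // 875
		{"rpcbind", 0x6f},    // 111
		{"statd", 0x296},     // 662
	}
	for _, p := range ports {
		// Each name also has a "<name>-udp" entry with the same port number.
		fmt.Printf("%-9s %d/tcp and /udp\n", p.name, p.hex)
	}
}
```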
Dec 13 13:27:45.217198 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:45.286294 containerd[1464]: time="2024-12-13T13:27:45.286233908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:f02c12d4-65a8-432c-9251-e03b8f840cf9,Namespace:default,Attempt:0,} returns sandbox id \"47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d\"" Dec 13 13:27:45.287611 containerd[1464]: time="2024-12-13T13:27:45.287583708Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:27:45.472580 kubelet[1772]: E1213 13:27:45.472472 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:45.485021 kubelet[1772]: E1213 13:27:45.484982 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:46.486119 kubelet[1772]: E1213 13:27:46.486069 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:46.642318 systemd-networkd[1394]: cali60e51b789ff: Gained IPv6LL Dec 13 13:27:47.033970 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938605184.mount: Deactivated successfully. Dec 13 13:27:47.486465 kubelet[1772]: E1213 13:27:47.486426 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:48.291433 containerd[1464]: time="2024-12-13T13:27:48.291373780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:48.292621 containerd[1464]: time="2024-12-13T13:27:48.292582146Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Dec 13 13:27:48.293429 containerd[1464]: time="2024-12-13T13:27:48.293366709Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:48.297470 containerd[1464]: time="2024-12-13T13:27:48.296424924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:48.297470 containerd[1464]: time="2024-12-13T13:27:48.297327289Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.009713581s" Dec 13 13:27:48.297470 containerd[1464]: time="2024-12-13T13:27:48.297355649Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 13:27:48.307371 containerd[1464]: time="2024-12-13T13:27:48.307325497Z" level=info msg="CreateContainer within sandbox \"47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:27:48.317271 containerd[1464]: time="2024-12-13T13:27:48.317228226Z" level=info msg="CreateContainer within sandbox \"47a3ce12ee4f0b870d49929b8b9c43360d59375db243d57fca5f4549d711213d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"738c4843a57c3e4b0c9c4c3b4dfc81fe693d2476e78cb9f69888b7e81ef352fe\"" Dec 13 13:27:48.318432 containerd[1464]: time="2024-12-13T13:27:48.318392591Z" level=info msg="StartContainer for \"738c4843a57c3e4b0c9c4c3b4dfc81fe693d2476e78cb9f69888b7e81ef352fe\"" Dec 13 13:27:48.352277 systemd[1]: Started cri-containerd-738c4843a57c3e4b0c9c4c3b4dfc81fe693d2476e78cb9f69888b7e81ef352fe.scope - libcontainer container 738c4843a57c3e4b0c9c4c3b4dfc81fe693d2476e78cb9f69888b7e81ef352fe. Dec 13 13:27:48.428089 containerd[1464]: time="2024-12-13T13:27:48.428016445Z" level=info msg="StartContainer for \"738c4843a57c3e4b0c9c4c3b4dfc81fe693d2476e78cb9f69888b7e81ef352fe\" returns successfully" Dec 13 13:27:48.486846 kubelet[1772]: E1213 13:27:48.486777 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:49.487761 kubelet[1772]: E1213 13:27:49.487707 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:50.488244 kubelet[1772]: E1213 13:27:50.488190 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:51.488779 kubelet[1772]: E1213 13:27:51.488733 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:52.489183 kubelet[1772]: E1213 13:27:52.489101 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:53.489655 kubelet[1772]: E1213 13:27:53.489597 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:54.490131 kubelet[1772]: E1213 13:27:54.490088 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:55.490839 kubelet[1772]: E1213 13:27:55.490771 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:56.491477 kubelet[1772]: E1213 13:27:56.491420 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:57.491862 kubelet[1772]: E1213 13:27:57.491815 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:57.907085 kubelet[1772]: I1213 13:27:57.906894 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.895899927 podStartE2EDuration="13.906874352s" podCreationTimestamp="2024-12-13 13:27:44 +0000 UTC" firstStartedPulling="2024-12-13 13:27:45.287349748 +0000 UTC m=+21.431621081" lastFinishedPulling="2024-12-13 13:27:48.298324173 +0000 UTC m=+24.442595506" observedRunningTime="2024-12-13 13:27:48.692245733 +0000 UTC m=+24.836517146" watchObservedRunningTime="2024-12-13 13:27:57.906874352 +0000 UTC m=+34.051145685" Dec 13 13:27:57.907321 kubelet[1772]: I1213 13:27:57.907300 1772 topology_manager.go:215] "Topology Admit Handler" 
podUID="185ae3c2-21ad-4c60-96da-aa465520380a" podNamespace="default" podName="test-pod-1" Dec 13 13:27:57.914598 systemd[1]: Created slice kubepods-besteffort-pod185ae3c2_21ad_4c60_96da_aa465520380a.slice - libcontainer container kubepods-besteffort-pod185ae3c2_21ad_4c60_96da_aa465520380a.slice. Dec 13 13:27:58.071094 kubelet[1772]: I1213 13:27:58.070825 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3e49077c-e206-46a4-b15f-6c31309cf374\" (UniqueName: \"kubernetes.io/nfs/185ae3c2-21ad-4c60-96da-aa465520380a-pvc-3e49077c-e206-46a4-b15f-6c31309cf374\") pod \"test-pod-1\" (UID: \"185ae3c2-21ad-4c60-96da-aa465520380a\") " pod="default/test-pod-1" Dec 13 13:27:58.071094 kubelet[1772]: I1213 13:27:58.070867 1772 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cj7c\" (UniqueName: \"kubernetes.io/projected/185ae3c2-21ad-4c60-96da-aa465520380a-kube-api-access-4cj7c\") pod \"test-pod-1\" (UID: \"185ae3c2-21ad-4c60-96da-aa465520380a\") " pod="default/test-pod-1" Dec 13 13:27:58.198082 kernel: FS-Cache: Loaded Dec 13 13:27:58.223285 kernel: RPC: Registered named UNIX socket transport module. Dec 13 13:27:58.223429 kernel: RPC: Registered udp transport module. Dec 13 13:27:58.223449 kernel: RPC: Registered tcp transport module. Dec 13 13:27:58.223464 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:27:58.224556 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 13:27:58.378438 kernel: NFS: Registering the id_resolver key type Dec 13 13:27:58.378557 kernel: Key type id_resolver registered Dec 13 13:27:58.379075 kernel: Key type id_legacy registered Dec 13 13:27:58.406468 nfsidmap[3238]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 13:27:58.412741 nfsidmap[3241]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 13:27:58.493623 kubelet[1772]: E1213 13:27:58.493521 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:58.556327 containerd[1464]: time="2024-12-13T13:27:58.556282108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:185ae3c2-21ad-4c60-96da-aa465520380a,Namespace:default,Attempt:0,}" Dec 13 13:27:58.664356 systemd-networkd[1394]: cali5ec59c6bf6e: Link UP Dec 13 13:27:58.665191 systemd-networkd[1394]: cali5ec59c6bf6e: Gained carrier Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.601 [INFO][3247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.127-k8s-test--pod--1-eth0 default 185ae3c2-21ad-4c60-96da-aa465520380a 1128 0 2024-12-13 13:27:45 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.127 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.601 [INFO][3247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" 
WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.625 [INFO][3259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" HandleID="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Workload="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.636 [INFO][3259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" HandleID="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Workload="10.0.0.127-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000431490), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.127", "pod":"test-pod-1", "timestamp":"2024-12-13 13:27:58.625084644 +0000 UTC"}, Hostname:"10.0.0.127", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.636 [INFO][3259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.636 [INFO][3259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.636 [INFO][3259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.127' Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.637 [INFO][3259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.642 [INFO][3259] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.646 [INFO][3259] ipam/ipam.go 489: Trying affinity for 192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.648 [INFO][3259] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.650 [INFO][3259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.0/26 host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.650 [INFO][3259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.0/26 handle="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.652 [INFO][3259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.655 [INFO][3259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.0/26 handle="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.660 [INFO][3259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.4/26] block=192.168.54.0/26 handle="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" host="10.0.0.127" Dec 13 13:27:58.674717 
containerd[1464]: 2024-12-13 13:27:58.660 [INFO][3259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.4/26] handle="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" host="10.0.0.127" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.660 [INFO][3259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.660 [INFO][3259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.4/26] IPv6=[] ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" HandleID="k8s-pod-network.90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Workload="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.674717 containerd[1464]: 2024-12-13 13:27:58.662 [INFO][3247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"185ae3c2-21ad-4c60-96da-aa465520380a", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:58.675355 containerd[1464]: 2024-12-13 13:27:58.662 [INFO][3247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.4/32] ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.675355 containerd[1464]: 2024-12-13 13:27:58.662 [INFO][3247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.675355 containerd[1464]: 2024-12-13 13:27:58.665 [INFO][3247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.675355 containerd[1464]: 2024-12-13 13:27:58.665 [INFO][3247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.127-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"185ae3c2-21ad-4c60-96da-aa465520380a", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 27, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.127", ContainerID:"90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"96:37:6e:1f:e0:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:27:58.675355 containerd[1464]: 2024-12-13 13:27:58.673 [INFO][3247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.127-k8s-test--pod--1-eth0" Dec 13 13:27:58.692529 containerd[1464]: time="2024-12-13T13:27:58.692176055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:27:58.692529 containerd[1464]: time="2024-12-13T13:27:58.692289136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:27:58.692529 containerd[1464]: time="2024-12-13T13:27:58.692311896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:58.692909 containerd[1464]: time="2024-12-13T13:27:58.692800257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:27:58.711285 systemd[1]: Started cri-containerd-90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d.scope - libcontainer container 90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d. 
Dec 13 13:27:58.721032 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:27:58.736621 containerd[1464]: time="2024-12-13T13:27:58.736554409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:185ae3c2-21ad-4c60-96da-aa465520380a,Namespace:default,Attempt:0,} returns sandbox id \"90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d\"" Dec 13 13:27:58.738037 containerd[1464]: time="2024-12-13T13:27:58.738012973Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:27:58.967880 containerd[1464]: time="2024-12-13T13:27:58.967825760Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:27:58.968382 containerd[1464]: time="2024-12-13T13:27:58.968346121Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:27:58.971605 containerd[1464]: time="2024-12-13T13:27:58.971567689Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 233.523956ms" Dec 13 13:27:58.971807 containerd[1464]: time="2024-12-13T13:27:58.971700690Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 13:27:58.974855 containerd[1464]: time="2024-12-13T13:27:58.974704097Z" level=info msg="CreateContainer within sandbox \"90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:27:59.006422 containerd[1464]: time="2024-12-13T13:27:59.006359737Z" level=info msg="CreateContainer within sandbox \"90525aec119e1605e850ced3e4334dab5242b623b0ab481cf95417daa4fa3c9d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b341b5e5e9fd450f1a8d0ddc8ade6c170e23e06023b31558e8bfc7b8c32ab5d1\"" Dec 13 13:27:59.007593 containerd[1464]: time="2024-12-13T13:27:59.006772178Z" level=info msg="StartContainer for \"b341b5e5e9fd450f1a8d0ddc8ade6c170e23e06023b31558e8bfc7b8c32ab5d1\"" Dec 13 13:27:59.034195 systemd[1]: Started cri-containerd-b341b5e5e9fd450f1a8d0ddc8ade6c170e23e06023b31558e8bfc7b8c32ab5d1.scope - libcontainer container b341b5e5e9fd450f1a8d0ddc8ade6c170e23e06023b31558e8bfc7b8c32ab5d1. 
Dec 13 13:27:59.058818 containerd[1464]: time="2024-12-13T13:27:59.058717463Z" level=info msg="StartContainer for \"b341b5e5e9fd450f1a8d0ddc8ade6c170e23e06023b31558e8bfc7b8c32ab5d1\" returns successfully" Dec 13 13:27:59.494475 kubelet[1772]: E1213 13:27:59.494429 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:27:59.719917 kubelet[1772]: I1213 13:27:59.719858 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.485078608 podStartE2EDuration="14.719841727s" podCreationTimestamp="2024-12-13 13:27:45 +0000 UTC" firstStartedPulling="2024-12-13 13:27:58.737630452 +0000 UTC m=+34.881901785" lastFinishedPulling="2024-12-13 13:27:58.972393571 +0000 UTC m=+35.116664904" observedRunningTime="2024-12-13 13:27:59.717480721 +0000 UTC m=+35.861752054" watchObservedRunningTime="2024-12-13 13:27:59.719841727 +0000 UTC m=+35.864113060" Dec 13 13:28:00.494924 kubelet[1772]: E1213 13:28:00.494882 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:00.660118 update_engine[1456]: I20241213 13:28:00.659880 1456 update_attempter.cc:509] Updating boot flags... Dec 13 13:28:00.690749 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3237) Dec 13 13:28:00.721757 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3224) Dec 13 13:28:00.722724 systemd-networkd[1394]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 13:28:00.748088 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3224) Dec 13 13:28:01.495839 kubelet[1772]: E1213 13:28:01.495789 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:02.496859 kubelet[1772]: E1213 13:28:02.496810 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"