Jan 13 20:23:00.924600 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:23:00.924622 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:23:00.924632 kernel: KASLR enabled
Jan 13 20:23:00.924638 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:23:00.924643 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:23:00.924649 kernel: random: crng init done
Jan 13 20:23:00.924656 kernel: secureboot: Secure boot disabled
Jan 13 20:23:00.924662 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:23:00.924668 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:23:00.924675 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:23:00.924681 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924687 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924693 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924698 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924706 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924713 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924719 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924725 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924732 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:23:00.924738 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:23:00.924744 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:23:00.924750 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:23:00.924756 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 20:23:00.924762 kernel: Zone ranges:
Jan 13 20:23:00.924768 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:23:00.924776 kernel: DMA32 empty
Jan 13 20:23:00.924782 kernel: Normal empty
Jan 13 20:23:00.924788 kernel: Movable zone start for each node
Jan 13 20:23:00.924794 kernel: Early memory node ranges
Jan 13 20:23:00.924800 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:23:00.924806 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:23:00.924812 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:23:00.924818 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:23:00.924824 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:23:00.924830 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:23:00.924836 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:23:00.924842 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:23:00.924850 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:23:00.924856 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:23:00.924862 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:23:00.924871 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:23:00.924878 kernel: psci: Trusted OS migration not required
Jan 13 20:23:00.924885 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:23:00.924893 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:23:00.924899 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:23:00.924906 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:23:00.924913 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:23:00.924919 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:23:00.924926 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:23:00.924933 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:23:00.924939 kernel: CPU features: detected: Spectre-v4
Jan 13 20:23:00.924945 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:23:00.924952 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:23:00.924960 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:23:00.924966 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:23:00.924973 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:23:00.924979 kernel: alternatives: applying boot alternatives
Jan 13 20:23:00.924987 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:23:00.924994 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:23:00.925000 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:23:00.925007 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:23:00.925014 kernel: Fallback order for Node 0: 0
Jan 13 20:23:00.925020 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:23:00.925027 kernel: Policy zone: DMA
Jan 13 20:23:00.925034 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:23:00.925041 kernel: software IO TLB: area num 4.
Jan 13 20:23:00.925048 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:23:00.925055 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 13 20:23:00.925061 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:23:00.925068 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:23:00.925075 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:23:00.925082 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:23:00.925101 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:23:00.925107 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:23:00.925114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:23:00.925121 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:23:00.925129 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:23:00.925136 kernel: GICv3: 256 SPIs implemented
Jan 13 20:23:00.925142 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:23:00.925149 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:23:00.925155 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:23:00.925162 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:23:00.925168 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:23:00.925175 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:23:00.925182 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:23:00.925188 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:23:00.925195 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:23:00.925203 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:23:00.925210 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:23:00.925217 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:23:00.925224 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:23:00.925231 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:23:00.925237 kernel: arm-pv: using stolen time PV
Jan 13 20:23:00.925244 kernel: Console: colour dummy device 80x25
Jan 13 20:23:00.925251 kernel: ACPI: Core revision 20230628
Jan 13 20:23:00.925265 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:23:00.925273 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:23:00.925282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:23:00.925289 kernel: landlock: Up and running.
Jan 13 20:23:00.925295 kernel: SELinux: Initializing.
Jan 13 20:23:00.925302 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:23:00.925309 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:23:00.925316 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:23:00.925323 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:23:00.925330 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:23:00.925337 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:23:00.925345 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:23:00.925352 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:23:00.925358 kernel: Remapping and enabling EFI services.
Jan 13 20:23:00.925365 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:23:00.925372 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:23:00.925378 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:23:00.925385 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:23:00.925392 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:23:00.925399 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:23:00.925405 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:23:00.925413 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:23:00.925420 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:23:00.925431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:23:00.925444 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:23:00.925454 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:23:00.925462 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:23:00.925470 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:23:00.925477 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:23:00.925485 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:23:00.925493 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:23:00.925500 kernel: SMP: Total of 4 processors activated.
Jan 13 20:23:00.925507 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:23:00.925515 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:23:00.925522 kernel: CPU features: detected: Common not Private translations
Jan 13 20:23:00.925529 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:23:00.925536 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:23:00.925543 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:23:00.925551 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:23:00.925558 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:23:00.925565 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:23:00.925572 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:23:00.925579 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:23:00.925587 kernel: alternatives: applying system-wide alternatives
Jan 13 20:23:00.925594 kernel: devtmpfs: initialized
Jan 13 20:23:00.925601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:23:00.925608 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:23:00.925616 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:23:00.925623 kernel: SMBIOS 3.0.0 present.
Jan 13 20:23:00.925631 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:23:00.925638 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:23:00.925645 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:23:00.925652 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:23:00.925659 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:23:00.925666 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:23:00.925673 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:23:00.925681 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:23:00.925689 kernel: cpuidle: using governor menu
Jan 13 20:23:00.925696 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:23:00.925703 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:23:00.925710 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:23:00.925717 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:23:00.925724 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:23:00.925731 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:23:00.925738 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:23:00.925746 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:23:00.925754 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:23:00.925761 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:23:00.925768 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:23:00.925775 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:23:00.925782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:23:00.925789 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:23:00.925796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:23:00.925803 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:23:00.925812 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:23:00.925819 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:23:00.925825 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:23:00.925832 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:23:00.925839 kernel: ACPI: Interpreter enabled
Jan 13 20:23:00.925846 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:23:00.925853 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:23:00.925861 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:23:00.925868 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:23:00.925875 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:23:00.926009 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:23:00.926092 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:23:00.926193 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:23:00.926269 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:23:00.926336 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:23:00.926346 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:23:00.926357 kernel: PCI host bridge to bus 0000:00
Jan 13 20:23:00.926437 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:23:00.926497 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:23:00.926574 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:23:00.926631 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:23:00.926712 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:23:00.926795 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:23:00.926866 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:23:00.926933 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:23:00.926998 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:23:00.927063 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:23:00.927142 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:23:00.927208 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:23:00.927287 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:23:00.927365 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:23:00.927422 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:23:00.927432 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:23:00.927439 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:23:00.927446 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:23:00.927453 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:23:00.927461 kernel: iommu: Default domain type: Translated
Jan 13 20:23:00.927468 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:23:00.927476 kernel: efivars: Registered efivars operations
Jan 13 20:23:00.927484 kernel: vgaarb: loaded
Jan 13 20:23:00.927491 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:23:00.927498 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:23:00.927505 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:23:00.927512 kernel: pnp: PnP ACPI init
Jan 13 20:23:00.927582 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:23:00.927593 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:23:00.927602 kernel: NET: Registered PF_INET protocol family
Jan 13 20:23:00.927609 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:23:00.927616 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:23:00.927623 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:23:00.927630 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:23:00.927638 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:23:00.927645 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:23:00.927653 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:23:00.927660 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:23:00.927668 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:23:00.927676 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:23:00.927683 kernel: kvm [1]: HYP mode not available
Jan 13 20:23:00.927690 kernel: Initialise system trusted keyrings
Jan 13 20:23:00.927697 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:23:00.927704 kernel: Key type asymmetric registered
Jan 13 20:23:00.927711 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:23:00.927718 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:23:00.927725 kernel: io scheduler mq-deadline registered
Jan 13 20:23:00.927734 kernel: io scheduler kyber registered
Jan 13 20:23:00.927741 kernel: io scheduler bfq registered
Jan 13 20:23:00.927748 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:23:00.927756 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:23:00.927763 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:23:00.927827 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:23:00.927837 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:23:00.927844 kernel: thunder_xcv, ver 1.0
Jan 13 20:23:00.927851 kernel: thunder_bgx, ver 1.0
Jan 13 20:23:00.927860 kernel: nicpf, ver 1.0
Jan 13 20:23:00.927867 kernel: nicvf, ver 1.0
Jan 13 20:23:00.927938 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:23:00.927999 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:23:00 UTC (1736799780)
Jan 13 20:23:00.928009 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:23:00.928016 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:23:00.928023 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:23:00.928031 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:23:00.928039 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:23:00.928046 kernel: Segment Routing with IPv6
Jan 13 20:23:00.928053 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:23:00.928060 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:23:00.928067 kernel: Key type dns_resolver registered
Jan 13 20:23:00.928074 kernel: registered taskstats version 1
Jan 13 20:23:00.928081 kernel: Loading compiled-in X.509 certificates
Jan 13 20:23:00.931133 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:23:00.931143 kernel: Key type .fscrypt registered
Jan 13 20:23:00.931150 kernel: Key type fscrypt-provisioning registered
Jan 13 20:23:00.931162 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:23:00.931169 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:23:00.931177 kernel: ima: No architecture policies found
Jan 13 20:23:00.931184 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:23:00.931191 kernel: clk: Disabling unused clocks
Jan 13 20:23:00.931198 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:23:00.931206 kernel: Run /init as init process
Jan 13 20:23:00.931213 kernel: with arguments:
Jan 13 20:23:00.931222 kernel: /init
Jan 13 20:23:00.931229 kernel: with environment:
Jan 13 20:23:00.931235 kernel: HOME=/
Jan 13 20:23:00.931243 kernel: TERM=linux
Jan 13 20:23:00.931249 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:23:00.931269 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:23:00.931279 systemd[1]: Detected virtualization kvm.
Jan 13 20:23:00.931287 systemd[1]: Detected architecture arm64.
Jan 13 20:23:00.931297 systemd[1]: Running in initrd.
Jan 13 20:23:00.931307 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:23:00.931314 systemd[1]: Hostname set to .
Jan 13 20:23:00.931322 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:23:00.931330 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:23:00.931338 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:23:00.931346 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:23:00.931354 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:23:00.931364 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:23:00.931371 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:23:00.931379 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:23:00.931389 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:23:00.931397 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:23:00.931405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:23:00.931413 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:23:00.931423 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:23:00.931431 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:23:00.931438 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:23:00.931446 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:23:00.931454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:23:00.931461 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:23:00.931469 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:23:00.931477 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:23:00.931486 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:23:00.931494 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:23:00.931502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:23:00.931510 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:23:00.931517 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:23:00.931525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:23:00.931533 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:23:00.931540 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:23:00.931548 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:23:00.931557 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:23:00.931565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:23:00.931573 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:23:00.931581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:23:00.931589 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:23:00.931597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:23:00.931629 systemd-journald[240]: Collecting audit messages is disabled.
Jan 13 20:23:00.931649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:23:00.931659 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:23:00.931666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:23:00.931675 systemd-journald[240]: Journal started
Jan 13 20:23:00.931693 systemd-journald[240]: Runtime Journal (/run/log/journal/e508cb541fd9484d8c4002892aaa6b38) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:23:00.916618 systemd-modules-load[241]: Inserted module 'overlay'
Jan 13 20:23:00.934335 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:23:00.934362 kernel: Bridge firewalling registered
Jan 13 20:23:00.934039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:23:00.934296 systemd-modules-load[241]: Inserted module 'br_netfilter'
Jan 13 20:23:00.935328 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:23:00.937487 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:23:00.941080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:23:00.943314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:23:00.948116 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:23:00.953402 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:23:00.955345 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:23:00.957436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:23:00.970284 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:23:00.972333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:23:00.980811 dracut-cmdline[279]: dracut-dracut-053
Jan 13 20:23:00.983411 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:23:01.002844 systemd-resolved[281]: Positive Trust Anchors:
Jan 13 20:23:01.002918 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:23:01.002950 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:23:01.007622 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jan 13 20:23:01.008595 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:23:01.010233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:23:01.053116 kernel: SCSI subsystem initialized
Jan 13 20:23:01.058103 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:23:01.065105 kernel: iscsi: registered transport (tcp)
Jan 13 20:23:01.078113 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:23:01.078159 kernel: QLogic iSCSI HBA Driver
Jan 13 20:23:01.120652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:23:01.138269 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:23:01.154339 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:23:01.154392 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:23:01.155631 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:23:01.206130 kernel: raid6: neonx8 gen() 15750 MB/s
Jan 13 20:23:01.222106 kernel: raid6: neonx4 gen() 15396 MB/s
Jan 13 20:23:01.239112 kernel: raid6: neonx2 gen() 13171 MB/s
Jan 13 20:23:01.256109 kernel: raid6: neonx1 gen() 10356 MB/s
Jan 13 20:23:01.273103 kernel: raid6: int64x8 gen() 6892 MB/s
Jan 13 20:23:01.290106 kernel: raid6: int64x4 gen() 7291 MB/s
Jan 13 20:23:01.307106 kernel: raid6: int64x2 gen() 6102 MB/s
Jan 13 20:23:01.324109 kernel: raid6: int64x1 gen() 4996 MB/s
Jan 13 20:23:01.324127 kernel: raid6: using algorithm neonx8 gen() 15750 MB/s
Jan 13 20:23:01.341113 kernel: raid6: .... xor() 11830 MB/s, rmw enabled
Jan 13 20:23:01.341133 kernel: raid6: using neon recovery algorithm
Jan 13 20:23:01.346210 kernel: xor: measuring software checksum speed
Jan 13 20:23:01.346225 kernel: 8regs : 19778 MB/sec
Jan 13 20:23:01.347225 kernel: 32regs : 19035 MB/sec
Jan 13 20:23:01.347249 kernel: arm64_neon : 26981 MB/sec
Jan 13 20:23:01.347276 kernel: xor: using function: arm64_neon (26981 MB/sec)
Jan 13 20:23:01.403118 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:23:01.415978 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:23:01.427274 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:23:01.440153 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jan 13 20:23:01.443884 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:23:01.453286 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:23:01.465553 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jan 13 20:23:01.495579 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:23:01.513293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:23:01.555340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:23:01.567528 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 20:23:01.578732 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 20:23:01.580936 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:23:01.582513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:23:01.584370 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:23:01.592274 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 20:23:01.602585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:23:01.613124 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 13 20:23:01.624027 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 20:23:01.624155 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 20:23:01.624167 kernel: GPT:9289727 != 19775487 Jan 13 20:23:01.624177 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 20:23:01.624186 kernel: GPT:9289727 != 19775487 Jan 13 20:23:01.624202 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 20:23:01.624213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:23:01.625495 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:23:01.625622 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:23:01.629363 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:23:01.630303 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 20:23:01.630441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:23:01.634023 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:23:01.642108 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (512) Jan 13 20:23:01.644121 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (526) Jan 13 20:23:01.647468 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:23:01.661071 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 20:23:01.663080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:23:01.668919 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 20:23:01.676697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 20:23:01.678185 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 20:23:01.683314 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:23:01.693242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 20:23:01.695303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 20:23:01.700681 disk-uuid[552]: Primary Header is updated. Jan 13 20:23:01.700681 disk-uuid[552]: Secondary Entries is updated. Jan 13 20:23:01.700681 disk-uuid[552]: Secondary Header is updated. Jan 13 20:23:01.707112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:23:01.715491 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 13 20:23:02.714152 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 20:23:02.719673 disk-uuid[553]: The operation has completed successfully. Jan 13 20:23:02.743351 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 20:23:02.743458 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 20:23:02.764313 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 20:23:02.767273 sh[573]: Success Jan 13 20:23:02.786054 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 13 20:23:02.812906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 20:23:02.827672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 20:23:02.829711 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 13 20:23:02.839581 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78 Jan 13 20:23:02.839629 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:23:02.839640 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 20:23:02.840480 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 20:23:02.841565 kernel: BTRFS info (device dm-0): using free space tree Jan 13 20:23:02.844879 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 20:23:02.846155 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 20:23:02.851255 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 20:23:02.852671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 13 20:23:02.860992 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:23:02.861040 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:23:02.861057 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:23:02.864120 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:23:02.871748 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 20:23:02.873257 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:23:02.879795 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 20:23:02.887293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 13 20:23:02.979694 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:23:02.980848 ignition[664]: Ignition 2.20.0 Jan 13 20:23:02.980854 ignition[664]: Stage: fetch-offline Jan 13 20:23:02.980887 ignition[664]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:02.980895 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:02.981063 ignition[664]: parsed url from cmdline: "" Jan 13 20:23:02.981066 ignition[664]: no config URL provided Jan 13 20:23:02.981071 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 20:23:02.981078 ignition[664]: no config at "/usr/lib/ignition/user.ign" Jan 13 20:23:02.981132 ignition[664]: op(1): [started] loading QEMU firmware config module Jan 13 20:23:02.986313 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 20:23:02.981137 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 20:23:02.990234 ignition[664]: op(1): [finished] loading QEMU firmware config module Jan 13 20:23:03.004007 ignition[664]: parsing config with SHA512: 27853e5b4528bdb2ceb30ed4e325914cdeb1f53fbb5e93d0ac30447828f58b526566c33f699d228fc693761cf54d79980ec2bc10905711132118245999b0dbc9 Jan 13 20:23:03.008931 unknown[664]: fetched base config from "system" Jan 13 20:23:03.008941 unknown[664]: fetched user config from "qemu" Jan 13 20:23:03.009232 ignition[664]: fetch-offline: fetch-offline passed Jan 13 20:23:03.009316 ignition[664]: Ignition finished successfully Jan 13 20:23:03.012037 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:23:03.016368 systemd-networkd[771]: lo: Link UP Jan 13 20:23:03.016377 systemd-networkd[771]: lo: Gained carrier Jan 13 20:23:03.017409 systemd-networkd[771]: Enumeration completed Jan 13 20:23:03.017956 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:23:03.018826 systemd[1]: Reached target network.target - Network. Jan 13 20:23:03.019938 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 20:23:03.020572 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:23:03.020575 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:23:03.021475 systemd-networkd[771]: eth0: Link UP Jan 13 20:23:03.021478 systemd-networkd[771]: eth0: Gained carrier Jan 13 20:23:03.021485 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:23:03.028323 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 20:23:03.038160 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:23:03.039009 ignition[774]: Ignition 2.20.0 Jan 13 20:23:03.039016 ignition[774]: Stage: kargs Jan 13 20:23:03.039181 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:03.039190 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:03.042307 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 20:23:03.039945 ignition[774]: kargs: kargs passed Jan 13 20:23:03.039985 ignition[774]: Ignition finished successfully Jan 13 20:23:03.054285 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 20:23:03.063893 ignition[784]: Ignition 2.20.0 Jan 13 20:23:03.063904 ignition[784]: Stage: disks Jan 13 20:23:03.064069 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:03.066535 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 20:23:03.064079 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:03.067477 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 20:23:03.064786 ignition[784]: disks: disks passed Jan 13 20:23:03.068681 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 20:23:03.064833 ignition[784]: Ignition finished successfully Jan 13 20:23:03.070147 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:23:03.071416 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:23:03.072468 systemd[1]: Reached target basic.target - Basic System. Jan 13 20:23:03.082259 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 20:23:03.092529 systemd-fsck[795]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 20:23:03.096864 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 13 20:23:03.107173 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 13 20:23:03.151113 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none. Jan 13 20:23:03.151695 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 20:23:03.152866 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 20:23:03.163189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 20:23:03.164803 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 20:23:03.165927 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 20:23:03.165972 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 20:23:03.172201 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (803) Jan 13 20:23:03.172224 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:23:03.172235 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:23:03.165994 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:23:03.174956 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:23:03.174975 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:23:03.172852 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 20:23:03.176485 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 13 20:23:03.178385 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 13 20:23:03.219644 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 20:23:03.223983 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory Jan 13 20:23:03.231717 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 20:23:03.237237 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 20:23:03.308446 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 20:23:03.317240 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 20:23:03.318616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 20:23:03.323113 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:23:03.337742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 20:23:03.342641 ignition[916]: INFO : Ignition 2.20.0 Jan 13 20:23:03.342641 ignition[916]: INFO : Stage: mount Jan 13 20:23:03.343904 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:03.343904 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:03.343904 ignition[916]: INFO : mount: mount passed Jan 13 20:23:03.343904 ignition[916]: INFO : Ignition finished successfully Jan 13 20:23:03.345386 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 20:23:03.357229 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 20:23:03.838727 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 20:23:03.848308 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 20:23:03.854554 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (930) Jan 13 20:23:03.854596 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6 Jan 13 20:23:03.855278 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 13 20:23:03.855292 kernel: BTRFS info (device vda6): using free space tree Jan 13 20:23:03.858101 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 20:23:03.858877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 20:23:03.875370 ignition[947]: INFO : Ignition 2.20.0 Jan 13 20:23:03.875370 ignition[947]: INFO : Stage: files Jan 13 20:23:03.876609 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:03.876609 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:03.876609 ignition[947]: DEBUG : files: compiled without relabeling support, skipping Jan 13 20:23:03.879038 ignition[947]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 20:23:03.879038 ignition[947]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 20:23:03.879038 ignition[947]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 20:23:03.879038 ignition[947]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 20:23:03.883032 ignition[947]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:23:03.883032 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 13 20:23:03.879364 unknown[947]: wrote ssh authorized keys file for user: core Jan 13 20:23:04.162380 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 13 20:23:04.270214 systemd-networkd[771]: eth0: Gained IPv6LL Jan 13 20:23:04.405477 ignition[947]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 13 20:23:04.405477 ignition[947]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 13 20:23:04.408218 ignition[947]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:23:04.408218 ignition[947]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 13 20:23:04.408218 ignition[947]: INFO : files: op(7): [finished] 
processing unit "coreos-metadata.service" Jan 13 20:23:04.408218 ignition[947]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 13 20:23:04.452930 ignition[947]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:23:04.457032 ignition[947]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 13 20:23:04.458186 ignition[947]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 13 20:23:04.458186 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:23:04.458186 ignition[947]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 13 20:23:04.458186 ignition[947]: INFO : files: files passed Jan 13 20:23:04.458186 ignition[947]: INFO : Ignition finished successfully Jan 13 20:23:04.459619 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 13 20:23:04.469275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 13 20:23:04.470740 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 13 20:23:04.474504 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 13 20:23:04.474612 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 13 20:23:04.479411 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory Jan 13 20:23:04.482742 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:23:04.482742 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:23:04.485172 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 13 20:23:04.486209 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:23:04.487276 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 13 20:23:04.503272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 13 20:23:04.521588 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 13 20:23:04.522293 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 13 20:23:04.523433 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 13 20:23:04.524794 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 13 20:23:04.526049 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 13 20:23:04.526734 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 13 20:23:04.542163 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:23:04.554330 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 13 20:23:04.561919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:23:04.562865 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:23:04.564306 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 13 20:23:04.565550 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 13 20:23:04.565666 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 13 20:23:04.567499 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 13 20:23:04.568884 systemd[1]: Stopped target basic.target - Basic System. Jan 13 20:23:04.570051 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 13 20:23:04.571287 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 20:23:04.572675 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 13 20:23:04.574045 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 13 20:23:04.575531 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 13 20:23:04.576947 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 13 20:23:04.578360 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 13 20:23:04.579589 systemd[1]: Stopped target swap.target - Swaps. Jan 13 20:23:04.580691 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 13 20:23:04.580805 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 13 20:23:04.582525 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:23:04.583945 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:23:04.585324 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 13 20:23:04.586748 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:23:04.587660 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 13 20:23:04.587766 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 13 20:23:04.589744 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 13 20:23:04.589855 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 20:23:04.591371 systemd[1]: Stopped target paths.target - Path Units. Jan 13 20:23:04.592584 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 13 20:23:04.598178 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:23:04.599113 systemd[1]: Stopped target slices.target - Slice Units. Jan 13 20:23:04.600638 systemd[1]: Stopped target sockets.target - Socket Units. Jan 13 20:23:04.601766 systemd[1]: iscsid.socket: Deactivated successfully. Jan 13 20:23:04.601849 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 13 20:23:04.602928 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 13 20:23:04.603003 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 13 20:23:04.604110 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 13 20:23:04.604213 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 13 20:23:04.605482 systemd[1]: ignition-files.service: Deactivated successfully. Jan 13 20:23:04.605577 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 13 20:23:04.619256 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 13 20:23:04.620592 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 13 20:23:04.621235 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 13 20:23:04.621353 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:23:04.622646 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 13 20:23:04.622735 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 20:23:04.627031 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Jan 13 20:23:04.627144 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 13 20:23:04.631581 ignition[1001]: INFO : Ignition 2.20.0 Jan 13 20:23:04.631581 ignition[1001]: INFO : Stage: umount Jan 13 20:23:04.633505 ignition[1001]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 20:23:04.633505 ignition[1001]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 20:23:04.633505 ignition[1001]: INFO : umount: umount passed Jan 13 20:23:04.633505 ignition[1001]: INFO : Ignition finished successfully Jan 13 20:23:04.634756 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 13 20:23:04.635311 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 13 20:23:04.635421 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 13 20:23:04.638001 systemd[1]: Stopped target network.target - Network. Jan 13 20:23:04.638726 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 13 20:23:04.638795 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 13 20:23:04.640092 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 13 20:23:04.640136 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 13 20:23:04.641378 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 13 20:23:04.641419 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 13 20:23:04.642651 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 13 20:23:04.642691 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 13 20:23:04.644130 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 13 20:23:04.645513 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 13 20:23:04.646954 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 13 20:23:04.647059 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 13 20:23:04.648455 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 13 20:23:04.648538 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 13 20:23:04.655185 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 13 20:23:04.655989 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 13 20:23:04.656107 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 13 20:23:04.657858 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 13 20:23:04.659131 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 13 20:23:04.661364 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 13 20:23:04.661405 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 13 20:23:04.669286 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 13 20:23:04.669982 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 13 20:23:04.670035 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 20:23:04.671525 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:23:04.671564 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:04.672950 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 13 20:23:04.672991 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 13 20:23:04.674548 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 13 20:23:04.674587 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:23:04.676160 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:23:04.684962 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 13 20:23:04.685080 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 13 20:23:04.689516 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 13 20:23:04.689644 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:23:04.691457 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 13 20:23:04.691518 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 13 20:23:04.692662 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 13 20:23:04.692699 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:23:04.693915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 13 20:23:04.693955 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 13 20:23:04.695914 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 13 20:23:04.695956 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 13 20:23:04.697812 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 20:23:04.697854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 20:23:04.711228 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 13 20:23:04.712057 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 13 20:23:04.712124 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 20:23:04.713711 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 13 20:23:04.713754 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:23:04.715316 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 13 20:23:04.715360 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:23:04.716966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 13 20:23:04.717010 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:23:04.719123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:23:04.719206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:23:04.720891 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:23:04.723969 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:23:04.733655 systemd[1]: Switching root. Jan 13 20:23:04.756501 systemd-journald[240]: Journal stopped Jan 13 20:23:05.404396 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). Jan 13 20:23:05.404451 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:23:05.404465 kernel: SELinux: policy capability open_perms=1 Jan 13 20:23:05.404475 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:23:05.404484 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:23:05.404493 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:23:05.404503 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:23:05.404514 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:23:05.404524 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:23:05.404537 kernel: audit: type=1403 audit(1736799784.877:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:23:05.404549 systemd[1]: Successfully loaded SELinux policy in 31.489ms. Jan 13 20:23:05.404572 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.952ms. 
Jan 13 20:23:05.404583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:23:05.404595 systemd[1]: Detected virtualization kvm.
Jan 13 20:23:05.404605 systemd[1]: Detected architecture arm64.
Jan 13 20:23:05.404615 systemd[1]: Detected first boot.
Jan 13 20:23:05.404626 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:23:05.404643 zram_generator::config[1046]: No configuration found.
Jan 13 20:23:05.404655 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:23:05.404665 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:23:05.404676 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:23:05.404687 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:23:05.404706 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:23:05.404718 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:23:05.404728 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:23:05.404739 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:23:05.404749 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:23:05.404760 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:23:05.404770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:23:05.404781 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:23:05.404792 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:23:05.404804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:23:05.404815 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:23:05.404825 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:23:05.404836 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:23:05.404847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:23:05.404858 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:23:05.404872 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:23:05.404882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:23:05.404894 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:23:05.404907 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:23:05.404918 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:23:05.404928 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:23:05.404939 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:23:05.404949 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:23:05.404960 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:23:05.404971 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:23:05.404981 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:23:05.404993 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:23:05.405004 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:23:05.405015 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:23:05.405026 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:23:05.405036 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:23:05.405047 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:23:05.405057 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:23:05.405067 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:23:05.405078 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:23:05.405099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:23:05.405112 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:23:05.405122 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:23:05.405134 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:23:05.405145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:23:05.405156 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:23:05.405166 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:23:05.405181 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:23:05.405193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:23:05.405203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:23:05.405214 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:23:05.405224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:23:05.405235 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:23:05.405252 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:23:05.405264 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:23:05.405274 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:23:05.405287 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:23:05.405297 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:23:05.405307 kernel: fuse: init (API version 7.39)
Jan 13 20:23:05.405317 kernel: loop: module loaded
Jan 13 20:23:05.405327 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:23:05.405337 kernel: ACPI: bus type drm_connector registered
Jan 13 20:23:05.405347 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:23:05.405362 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:23:05.405374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:23:05.405384 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:23:05.405399 systemd[1]: Stopped verity-setup.service.
Jan 13 20:23:05.405410 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:23:05.405421 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:23:05.405431 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:23:05.405441 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:23:05.405472 systemd-journald[1117]: Collecting audit messages is disabled.
Jan 13 20:23:05.405495 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:23:05.405505 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:23:05.405516 systemd-journald[1117]: Journal started
Jan 13 20:23:05.405537 systemd-journald[1117]: Runtime Journal (/run/log/journal/e508cb541fd9484d8c4002892aaa6b38) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:23:05.228511 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:23:05.248050 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:23:05.248398 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 20:23:05.407558 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:23:05.409149 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:23:05.410389 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:23:05.411637 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:23:05.411893 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:23:05.413351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:23:05.413591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:23:05.416439 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:23:05.416583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:23:05.417626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:23:05.417746 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:23:05.419282 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:23:05.421172 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:23:05.422297 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:23:05.422471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:23:05.423523 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:23:05.424604 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:23:05.425938 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:23:05.437818 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:23:05.453207 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:23:05.455117 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:23:05.455927 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:23:05.455966 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:23:05.457668 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:23:05.459694 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:23:05.461566 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:23:05.462497 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:23:05.464192 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:23:05.465900 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:23:05.466928 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:23:05.469342 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:23:05.470330 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:23:05.474340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:23:05.477315 systemd-journald[1117]: Time spent on flushing to /var/log/journal/e508cb541fd9484d8c4002892aaa6b38 is 26.369ms for 839 entries.
Jan 13 20:23:05.477315 systemd-journald[1117]: System Journal (/var/log/journal/e508cb541fd9484d8c4002892aaa6b38) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:23:05.530583 systemd-journald[1117]: Received client request to flush runtime journal.
Jan 13 20:23:05.530637 kernel: loop0: detected capacity change from 0 to 113536
Jan 13 20:23:05.530654 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:23:05.479417 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:23:05.481602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:23:05.486632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:23:05.488392 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:23:05.489419 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:23:05.490798 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:23:05.492380 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:23:05.495838 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:23:05.507356 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:23:05.510624 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:23:05.524162 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:23:05.534254 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:23:05.535342 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 20:23:05.535798 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 13 20:23:05.535816 systemd-tmpfiles[1158]: ACLs are not supported, ignoring.
Jan 13 20:23:05.538134 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:23:05.540450 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:23:05.541872 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:23:05.557378 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:23:05.564123 kernel: loop1: detected capacity change from 0 to 194096
Jan 13 20:23:05.580037 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:23:05.587321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:23:05.595106 kernel: loop2: detected capacity change from 0 to 116808
Jan 13 20:23:05.603935 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 20:23:05.603952 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 20:23:05.608697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:23:05.625141 kernel: loop3: detected capacity change from 0 to 113536
Jan 13 20:23:05.630105 kernel: loop4: detected capacity change from 0 to 194096
Jan 13 20:23:05.637107 kernel: loop5: detected capacity change from 0 to 116808
Jan 13 20:23:05.640697 (sd-merge)[1184]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:23:05.641152 (sd-merge)[1184]: Merged extensions into '/usr'.
Jan 13 20:23:05.644745 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:23:05.644857 systemd[1]: Reloading...
Jan 13 20:23:05.713158 zram_generator::config[1216]: No configuration found.
Jan 13 20:23:05.783939 ldconfig[1152]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:23:05.808015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:23:05.843177 systemd[1]: Reloading finished in 197 ms.
Jan 13 20:23:05.871576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:23:05.873026 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 20:23:05.884231 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:23:05.886531 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:23:05.897195 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:23:05.897207 systemd[1]: Reloading...
Jan 13 20:23:05.911826 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:23:05.912078 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:23:05.912822 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:23:05.913045 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 13 20:23:05.913128 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Jan 13 20:23:05.923468 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:23:05.923481 systemd-tmpfiles[1245]: Skipping /boot
Jan 13 20:23:05.933023 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:23:05.933037 systemd-tmpfiles[1245]: Skipping /boot
Jan 13 20:23:05.957160 zram_generator::config[1278]: No configuration found.
Jan 13 20:23:06.031936 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:23:06.067700 systemd[1]: Reloading finished in 170 ms.
Jan 13 20:23:06.088790 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:23:06.103498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:23:06.110310 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:23:06.112574 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:23:06.114445 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:23:06.118365 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:23:06.125310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:23:06.128622 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:23:06.132711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:23:06.139841 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:23:06.141745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:23:06.147326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:23:06.148323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:23:06.152351 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:23:06.154537 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Jan 13 20:23:06.154950 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:23:06.156442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:23:06.157163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:23:06.158542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:23:06.159476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:23:06.160917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:23:06.161063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:23:06.170803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:23:06.183720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:23:06.186035 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:23:06.192612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:23:06.194011 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:23:06.195857 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:23:06.197890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:23:06.200101 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:23:06.201870 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:23:06.204120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:23:06.205720 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:23:06.207558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:23:06.208779 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:23:06.210315 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:23:06.211734 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:23:06.211852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:23:06.212853 augenrules[1363]: No rules
Jan 13 20:23:06.213652 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:23:06.213887 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:23:06.228364 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:23:06.237119 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:23:06.245223 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1345)
Jan 13 20:23:06.246195 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:23:06.261382 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:23:06.262160 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:23:06.266259 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:23:06.269273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:23:06.272378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:23:06.275236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:23:06.277218 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:23:06.279253 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:23:06.284197 augenrules[1383]: /sbin/augenrules: No change
Jan 13 20:23:06.284276 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:23:06.285092 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:23:06.285546 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:23:06.289138 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:23:06.290268 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:23:06.290399 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:23:06.291409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:23:06.291520 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:23:06.295494 augenrules[1411]: No rules
Jan 13 20:23:06.298867 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:23:06.299636 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:23:06.302468 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:23:06.302613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:23:06.312991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:23:06.318534 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:23:06.319993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:23:06.320058 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:23:06.334589 systemd-resolved[1311]: Positive Trust Anchors:
Jan 13 20:23:06.334667 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:23:06.334702 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:23:06.343965 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Jan 13 20:23:06.348330 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:23:06.350995 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:23:06.352177 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:23:06.368567 systemd-networkd[1401]: lo: Link UP
Jan 13 20:23:06.368583 systemd-networkd[1401]: lo: Gained carrier
Jan 13 20:23:06.369504 systemd-networkd[1401]: Enumeration completed
Jan 13 20:23:06.371741 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:23:06.371748 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:23:06.372552 systemd-networkd[1401]: eth0: Link UP
Jan 13 20:23:06.372561 systemd-networkd[1401]: eth0: Gained carrier
Jan 13 20:23:06.372573 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:23:06.380603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:23:06.381753 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:23:06.383138 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:23:06.384462 systemd[1]: Reached target network.target - Network.
Jan 13 20:23:06.385885 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:23:06.387954 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:23:06.393213 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:23:06.395206 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:23:06.395431 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:23:06.396336 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Jan 13 20:23:06.397271 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:23:06.397324 systemd-timesyncd[1403]: Initial clock synchronization to Mon 2025-01-13 20:23:06.171454 UTC.
Jan 13 20:23:06.424117 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:23:06.438926 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:23:06.469585 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:23:06.470724 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:23:06.471580 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:23:06.472456 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:23:06.473363 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:23:06.474407 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:23:06.475295 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:23:06.476193 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:23:06.477048 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:23:06.477082 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:23:06.477729 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:23:06.479206 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:23:06.481458 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:23:06.489075 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:23:06.490942 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:23:06.492236 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:23:06.493071 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:23:06.493762 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:23:06.494480 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:23:06.494512 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:23:06.495360 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:23:06.496987 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:23:06.499025 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:23:06.499977 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:23:06.502596 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:23:06.506207 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:23:06.508689 jq[1443]: false
Jan 13 20:23:06.509059 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:23:06.510799 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:23:06.513417 dbus-daemon[1442]: [system] SELinux support is enabled
Jan 13 20:23:06.513440 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:23:06.519274 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:23:06.526219 extend-filesystems[1444]: Found loop3
Jan 13 20:23:06.526219 extend-filesystems[1444]: Found loop4
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found loop5
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda1
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda2
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda3
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found usr
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda4
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda6
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda7
Jan 13 20:23:06.527517 extend-filesystems[1444]: Found vda9
Jan 13 20:23:06.527517 extend-filesystems[1444]: Checking size of /dev/vda9
Jan 13 20:23:06.527755 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:23:06.528183 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:23:06.529314 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:23:06.533961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:23:06.535222 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:23:06.539952 jq[1460]: true
Jan 13 20:23:06.540438 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:23:06.543483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:23:06.545115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:23:06.545503 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:23:06.545682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:23:06.547384 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:23:06.547595 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:23:06.550204 extend-filesystems[1444]: Resized partition /dev/vda9
Jan 13 20:23:06.559608 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:23:06.559678 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:23:06.561338 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:23:06.561356 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:23:06.563665 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:23:06.563987 jq[1466]: true
Jan 13 20:23:06.566188 extend-filesystems[1465]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:23:06.570418 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:23:06.570462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1362)
Jan 13 20:23:06.576765 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:23:06.582780 systemd-logind[1449]: New seat seat0.
Jan 13 20:23:06.584261 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:23:06.591112 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:23:06.604271 extend-filesystems[1465]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:23:06.604271 extend-filesystems[1465]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:23:06.604271 extend-filesystems[1465]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:23:06.606954 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Jan 13 20:23:06.606807 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:23:06.607696 update_engine[1456]: I20250113 20:23:06.607491 1456 main.cc:92] Flatcar Update Engine starting
Jan 13 20:23:06.607012 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:23:06.609755 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:23:06.610115 update_engine[1456]: I20250113 20:23:06.609764 1456 update_check_scheduler.cc:74] Next update check in 11m25s
Jan 13 20:23:06.618448 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:23:06.650961 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:23:06.652370 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:23:06.653898 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:23:06.664197 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:23:06.760847 containerd[1467]: time="2025-01-13T20:23:06.758732040Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:23:06.783472 containerd[1467]: time="2025-01-13T20:23:06.783427040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.784789440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.784823160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.784838880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.784972920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.784989560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785038560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785050280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785217440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785232280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785253440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785772 containerd[1467]: time="2025-01-13T20:23:06.785265400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785345640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785526680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785614720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785626800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785693280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:23:06.785998 containerd[1467]: time="2025-01-13T20:23:06.785731000Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:23:06.788950 containerd[1467]: time="2025-01-13T20:23:06.788924280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:23:06.789069 containerd[1467]: time="2025-01-13T20:23:06.789053080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:23:06.789156 containerd[1467]: time="2025-01-13T20:23:06.789143480Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:23:06.789210 containerd[1467]: time="2025-01-13T20:23:06.789198880Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:23:06.789279 containerd[1467]: time="2025-01-13T20:23:06.789266440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:23:06.789480 containerd[1467]: time="2025-01-13T20:23:06.789459320Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:23:06.789758 containerd[1467]: time="2025-01-13T20:23:06.789740000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:23:06.789916 containerd[1467]: time="2025-01-13T20:23:06.789897640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:23:06.789980 containerd[1467]: time="2025-01-13T20:23:06.789967440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:23:06.790042 containerd[1467]: time="2025-01-13T20:23:06.790029400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:23:06.790109 containerd[1467]: time="2025-01-13T20:23:06.790082160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790159 containerd[1467]: time="2025-01-13T20:23:06.790148440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790207 containerd[1467]: time="2025-01-13T20:23:06.790196160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790265 containerd[1467]: time="2025-01-13T20:23:06.790252920Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790319 containerd[1467]: time="2025-01-13T20:23:06.790306560Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790387 containerd[1467]: time="2025-01-13T20:23:06.790374000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790437 containerd[1467]: time="2025-01-13T20:23:06.790425640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790491 containerd[1467]: time="2025-01-13T20:23:06.790478800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:23:06.790546 containerd[1467]: time="2025-01-13T20:23:06.790535640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790598 containerd[1467]: time="2025-01-13T20:23:06.790586880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790656 containerd[1467]: time="2025-01-13T20:23:06.790644320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790715 containerd[1467]: time="2025-01-13T20:23:06.790701960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790767 containerd[1467]: time="2025-01-13T20:23:06.790755120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790830 containerd[1467]: time="2025-01-13T20:23:06.790817280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790881 containerd[1467]: time="2025-01-13T20:23:06.790869040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790943 containerd[1467]: time="2025-01-13T20:23:06.790930320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.790995 containerd[1467]: time="2025-01-13T20:23:06.790983600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791051 containerd[1467]: time="2025-01-13T20:23:06.791039440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791122 containerd[1467]: time="2025-01-13T20:23:06.791109880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791178 containerd[1467]: time="2025-01-13T20:23:06.791165440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791231 containerd[1467]: time="2025-01-13T20:23:06.791219080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791312 containerd[1467]: time="2025-01-13T20:23:06.791297480Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:23:06.791375 containerd[1467]: time="2025-01-13T20:23:06.791362680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791434 containerd[1467]: time="2025-01-13T20:23:06.791421800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791481 containerd[1467]: time="2025-01-13T20:23:06.791470720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:23:06.791710 containerd[1467]: time="2025-01-13T20:23:06.791693600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:23:06.791779 containerd[1467]: time="2025-01-13T20:23:06.791764800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:23:06.791824 containerd[1467]: time="2025-01-13T20:23:06.791813200Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:23:06.791885 containerd[1467]: time="2025-01-13T20:23:06.791871560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:23:06.791930 containerd[1467]: time="2025-01-13T20:23:06.791919480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.791979 containerd[1467]: time="2025-01-13T20:23:06.791967800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:23:06.792027 containerd[1467]: time="2025-01-13T20:23:06.792016040Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:23:06.792079 containerd[1467]: time="2025-01-13T20:23:06.792067400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:23:06.792536 containerd[1467]: time="2025-01-13T20:23:06.792462520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:23:06.792717 containerd[1467]: time="2025-01-13T20:23:06.792700360Z" level=info msg="Connect containerd service"
Jan 13 20:23:06.792796 containerd[1467]: time="2025-01-13T20:23:06.792783600Z" level=info msg="using legacy CRI server"
Jan 13 20:23:06.792843 containerd[1467]: time="2025-01-13T20:23:06.792831440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:23:06.793137 containerd[1467]: time="2025-01-13T20:23:06.793120920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:23:06.794213 containerd[1467]: time="2025-01-13T20:23:06.794137120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:23:06.794479 containerd[1467]: time="2025-01-13T20:23:06.794440160Z" level=info msg="Start subscribing containerd event"
Jan 13 20:23:06.794534 containerd[1467]: time="2025-01-13T20:23:06.794506640Z" level=info msg="Start recovering state"
Jan 13 20:23:06.794604 containerd[1467]: time="2025-01-13T20:23:06.794590520Z" level=info msg="Start event monitor"
Jan 13 20:23:06.794604 containerd[1467]: time="2025-01-13T20:23:06.794607920Z" level=info msg="Start snapshots syncer"
Jan 13 20:23:06.794659 containerd[1467]: time="2025-01-13T20:23:06.794618160Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:23:06.794659 containerd[1467]: time="2025-01-13T20:23:06.794629000Z" level=info msg="Start streaming server"
Jan 13 20:23:06.794861 containerd[1467]: time="2025-01-13T20:23:06.794743680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:23:06.794861 containerd[1467]: time="2025-01-13T20:23:06.794784920Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:23:06.794933 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:23:06.796440 containerd[1467]: time="2025-01-13T20:23:06.795992440Z" level=info msg="containerd successfully booted in 0.038639s"
Jan 13 20:23:06.980907 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:23:07.000362 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:23:07.008399 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:23:07.013468 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:23:07.015127 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:23:07.017350 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:23:07.030434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:23:07.032828 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:23:07.034710 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:23:07.035724 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:23:08.174196 systemd-networkd[1401]: eth0: Gained IPv6LL
Jan 13 20:23:08.176788 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:23:08.178454 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:23:08.190370 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:23:08.192641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:08.194481 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:23:08.216907 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:23:08.220273 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:23:08.220530 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:23:08.221810 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:23:08.671554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:08.672755 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:23:08.673642 systemd[1]: Startup finished in 527ms (kernel) + 4.173s (initrd) + 3.830s (userspace) = 8.531s.
Jan 13 20:23:08.675869 (kubelet)[1547]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:09.118820 kubelet[1547]: E0113 20:23:09.118712 1547 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:09.121319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:09.121463 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:13.033937 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:23:13.035094 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:53772.service - OpenSSH per-connection server daemon (10.0.0.1:53772).
Jan 13 20:23:13.096873 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 53772 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.098318 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.108991 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:23:13.119324 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:23:13.120842 systemd-logind[1449]: New session 1 of user core.
Jan 13 20:23:13.130120 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:23:13.132321 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:23:13.138137 (systemd)[1566]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:23:13.206611 systemd[1566]: Queued start job for default target default.target.
Jan 13 20:23:13.213988 systemd[1566]: Created slice app.slice - User Application Slice.
Jan 13 20:23:13.214031 systemd[1566]: Reached target paths.target - Paths.
Jan 13 20:23:13.214042 systemd[1566]: Reached target timers.target - Timers.
Jan 13 20:23:13.215162 systemd[1566]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:23:13.224053 systemd[1566]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:23:13.224148 systemd[1566]: Reached target sockets.target - Sockets.
Jan 13 20:23:13.224160 systemd[1566]: Reached target basic.target - Basic System.
Jan 13 20:23:13.224196 systemd[1566]: Reached target default.target - Main User Target.
Jan 13 20:23:13.224221 systemd[1566]: Startup finished in 81ms.
Jan 13 20:23:13.224518 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:23:13.225777 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:23:13.291565 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:53786.service - OpenSSH per-connection server daemon (10.0.0.1:53786).
Jan 13 20:23:13.336337 sshd[1577]: Accepted publickey for core from 10.0.0.1 port 53786 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.337525 sshd-session[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.342244 systemd-logind[1449]: New session 2 of user core.
Jan 13 20:23:13.348255 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:23:13.398333 sshd[1579]: Connection closed by 10.0.0.1 port 53786
Jan 13 20:23:13.398956 sshd-session[1577]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:13.409371 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:53786.service: Deactivated successfully.
Jan 13 20:23:13.410649 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:23:13.413361 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:23:13.414352 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:53800.service - OpenSSH per-connection server daemon (10.0.0.1:53800).
Jan 13 20:23:13.415151 systemd-logind[1449]: Removed session 2.
Jan 13 20:23:13.461466 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 53800 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.462777 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.467516 systemd-logind[1449]: New session 3 of user core.
Jan 13 20:23:13.474225 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:23:13.522733 sshd[1586]: Connection closed by 10.0.0.1 port 53800
Jan 13 20:23:13.522617 sshd-session[1584]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:13.532286 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:53800.service: Deactivated successfully.
Jan 13 20:23:13.533627 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:23:13.534793 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:23:13.535910 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:53806.service - OpenSSH per-connection server daemon (10.0.0.1:53806).
Jan 13 20:23:13.537407 systemd-logind[1449]: Removed session 3.
Jan 13 20:23:13.582177 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53806 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.583427 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.587038 systemd-logind[1449]: New session 4 of user core.
Jan 13 20:23:13.598240 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:23:13.652672 sshd[1593]: Connection closed by 10.0.0.1 port 53806
Jan 13 20:23:13.652180 sshd-session[1591]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:13.661335 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:53806.service: Deactivated successfully.
Jan 13 20:23:13.663304 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:23:13.666167 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:23:13.675317 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:53812.service - OpenSSH per-connection server daemon (10.0.0.1:53812).
Jan 13 20:23:13.677775 systemd-logind[1449]: Removed session 4.
Jan 13 20:23:13.718855 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 53812 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.720049 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.723887 systemd-logind[1449]: New session 5 of user core.
Jan 13 20:23:13.736225 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:23:13.803001 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:23:13.803636 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:13.827966 sudo[1601]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:13.829591 sshd[1600]: Connection closed by 10.0.0.1 port 53812
Jan 13 20:23:13.830311 sshd-session[1598]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:13.842313 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:53812.service: Deactivated successfully.
Jan 13 20:23:13.844429 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:23:13.845690 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:23:13.846942 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:53820.service - OpenSSH per-connection server daemon (10.0.0.1:53820).
Jan 13 20:23:13.847674 systemd-logind[1449]: Removed session 5.
Jan 13 20:23:13.896275 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53820 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:13.897537 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:13.901744 systemd-logind[1449]: New session 6 of user core.
Jan 13 20:23:13.917267 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:23:13.970751 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:23:13.971020 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:13.974006 sudo[1610]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:13.978277 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:23:13.978537 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:13.999408 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:23:14.021519 augenrules[1632]: No rules
Jan 13 20:23:14.022144 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:23:14.022309 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:23:14.023237 sudo[1609]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:14.024823 sshd[1608]: Connection closed by 10.0.0.1 port 53820
Jan 13 20:23:14.024718 sshd-session[1606]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:14.038353 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:53820.service: Deactivated successfully.
Jan 13 20:23:14.041355 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:23:14.043281 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:23:14.043959 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:53836.service - OpenSSH per-connection server daemon (10.0.0.1:53836). Jan 13 20:23:14.044986 systemd-logind[1449]: Removed session 6. Jan 13 20:23:14.088689 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 53836 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo Jan 13 20:23:14.089813 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 20:23:14.094185 systemd-logind[1449]: New session 7 of user core. Jan 13 20:23:14.104264 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 20:23:14.154215 sudo[1643]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 20:23:14.154759 sudo[1643]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 20:23:14.170357 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 20:23:14.184106 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 20:23:14.184282 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 20:23:14.685514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:14.694305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:14.714344 systemd[1]: Reloading requested from client PID 1692 ('systemctl') (unit session-7.scope)... Jan 13 20:23:14.714358 systemd[1]: Reloading... Jan 13 20:23:14.791116 zram_generator::config[1733]: No configuration found. Jan 13 20:23:14.974346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:23:15.025554 systemd[1]: Reloading finished in 310 ms. 
Jan 13 20:23:15.072498 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 13 20:23:15.072556 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 13 20:23:15.072787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:15.076544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 20:23:15.172164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 20:23:15.176393 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 20:23:15.218525 kubelet[1776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 20:23:15.218525 kubelet[1776]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 20:23:15.218525 kubelet[1776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 20:23:15.219522 kubelet[1776]: I0113 20:23:15.219465 1776 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 20:23:16.477324 kubelet[1776]: I0113 20:23:16.477274 1776 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 20:23:16.477324 kubelet[1776]: I0113 20:23:16.477310 1776 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 20:23:16.477757 kubelet[1776]: I0113 20:23:16.477523 1776 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 20:23:16.521415 kubelet[1776]: I0113 20:23:16.521291 1776 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 20:23:16.542032 kubelet[1776]: I0113 20:23:16.541992 1776 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 20:23:16.546151 kubelet[1776]: I0113 20:23:16.545676 1776 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 20:23:16.546151 kubelet[1776]: I0113 20:23:16.545732 1776 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 20:23:16.546151 kubelet[1776]: I0113 20:23:16.545969 1776 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 20:23:16.546151 kubelet[1776]: I0113 20:23:16.545979 1776 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 20:23:16.546414 kubelet[1776]: I0113 20:23:16.546240 1776 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:23:16.547950 kubelet[1776]: I0113 20:23:16.547912 1776 kubelet.go:400] "Attempting to sync node with 
API server" Jan 13 20:23:16.547950 kubelet[1776]: I0113 20:23:16.547941 1776 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 20:23:16.548255 kubelet[1776]: I0113 20:23:16.548176 1776 kubelet.go:312] "Adding apiserver pod source" Jan 13 20:23:16.548351 kubelet[1776]: I0113 20:23:16.548319 1776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 20:23:16.548527 kubelet[1776]: E0113 20:23:16.548500 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:16.548682 kubelet[1776]: E0113 20:23:16.548549 1776 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:16.549591 kubelet[1776]: I0113 20:23:16.549488 1776 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 13 20:23:16.549843 kubelet[1776]: I0113 20:23:16.549824 1776 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 20:23:16.549944 kubelet[1776]: W0113 20:23:16.549922 1776 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 20:23:16.550834 kubelet[1776]: I0113 20:23:16.550812 1776 server.go:1264] "Started kubelet" Jan 13 20:23:16.551359 kubelet[1776]: I0113 20:23:16.551332 1776 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 20:23:16.552634 kubelet[1776]: I0113 20:23:16.552543 1776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 20:23:16.556228 kubelet[1776]: I0113 20:23:16.552990 1776 server.go:455] "Adding debug handlers to kubelet server" Jan 13 20:23:16.556228 kubelet[1776]: I0113 20:23:16.551393 1776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 20:23:16.556228 kubelet[1776]: I0113 20:23:16.553879 1776 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 20:23:16.556228 kubelet[1776]: E0113 20:23:16.555273 1776 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" Jan 13 20:23:16.556228 kubelet[1776]: I0113 20:23:16.555414 1776 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 20:23:16.556228 kubelet[1776]: I0113 20:23:16.555508 1776 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 20:23:16.559404 kubelet[1776]: I0113 20:23:16.558613 1776 reconciler.go:26] "Reconciler: start to sync state" Jan 13 20:23:16.561350 kubelet[1776]: I0113 20:23:16.561152 1776 factory.go:221] Registration of the systemd container factory successfully Jan 13 20:23:16.561350 kubelet[1776]: I0113 20:23:16.561237 1776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 20:23:16.562780 kubelet[1776]: E0113 20:23:16.562709 1776 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 20:23:16.564081 kubelet[1776]: E0113 20:23:16.563531 1776 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.113\" not found" node="10.0.0.113" Jan 13 20:23:16.564320 kubelet[1776]: I0113 20:23:16.564297 1776 factory.go:221] Registration of the containerd container factory successfully Jan 13 20:23:16.573488 kubelet[1776]: I0113 20:23:16.573259 1776 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 20:23:16.573488 kubelet[1776]: I0113 20:23:16.573277 1776 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 20:23:16.573488 kubelet[1776]: I0113 20:23:16.573296 1776 state_mem.go:36] "Initialized new in-memory state store" Jan 13 20:23:16.639909 kubelet[1776]: I0113 20:23:16.639876 1776 policy_none.go:49] "None policy: Start" Jan 13 20:23:16.641250 kubelet[1776]: I0113 20:23:16.641190 1776 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 20:23:16.641250 kubelet[1776]: I0113 20:23:16.641218 1776 state_mem.go:35] "Initializing new in-memory state store" Jan 13 20:23:16.648944 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 20:23:16.657260 kubelet[1776]: I0113 20:23:16.656783 1776 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.113" Jan 13 20:23:16.660634 kubelet[1776]: I0113 20:23:16.660603 1776 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.113" Jan 13 20:23:16.660965 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 13 20:23:16.661528 kubelet[1776]: I0113 20:23:16.661386 1776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 20:23:16.663737 kubelet[1776]: I0113 20:23:16.663164 1776 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 20:23:16.663737 kubelet[1776]: I0113 20:23:16.663260 1776 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 20:23:16.663737 kubelet[1776]: I0113 20:23:16.663279 1776 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 20:23:16.663737 kubelet[1776]: E0113 20:23:16.663322 1776 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 20:23:16.666048 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 20:23:16.682672 kubelet[1776]: I0113 20:23:16.682469 1776 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 20:23:16.683168 kubelet[1776]: I0113 20:23:16.682686 1776 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 20:23:16.684046 kubelet[1776]: I0113 20:23:16.683478 1776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 20:23:16.684046 kubelet[1776]: I0113 20:23:16.683705 1776 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 20:23:16.684461 containerd[1467]: time="2025-01-13T20:23:16.684325611Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 20:23:16.685178 kubelet[1776]: I0113 20:23:16.684661 1776 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 20:23:16.701925 sudo[1643]: pam_unix(sudo:session): session closed for user root Jan 13 20:23:16.703232 sshd[1642]: Connection closed by 10.0.0.1 port 53836 Jan 13 20:23:16.703660 sshd-session[1640]: pam_unix(sshd:session): session closed for user core Jan 13 20:23:16.706505 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:53836.service: Deactivated successfully. 
Jan 13 20:23:16.708300 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 20:23:16.709631 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Jan 13 20:23:16.712648 systemd-logind[1449]: Removed session 7. Jan 13 20:23:17.479736 kubelet[1776]: I0113 20:23:17.479696 1776 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 20:23:17.480333 kubelet[1776]: W0113 20:23:17.479851 1776 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:23:17.480333 kubelet[1776]: W0113 20:23:17.479923 1776 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:23:17.480333 kubelet[1776]: W0113 20:23:17.479930 1776 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 20:23:17.548807 kubelet[1776]: I0113 20:23:17.548766 1776 apiserver.go:52] "Watching apiserver" Jan 13 20:23:17.548807 kubelet[1776]: E0113 20:23:17.548781 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:17.553768 kubelet[1776]: I0113 20:23:17.553722 1776 topology_manager.go:215] "Topology Admit Handler" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" podNamespace="kube-system" podName="cilium-kqbk2" Jan 13 20:23:17.553925 kubelet[1776]: I0113 20:23:17.553898 1776 topology_manager.go:215] "Topology Admit Handler" podUID="b9921655-4cb9-4a65-9a89-ef33887dc2e0" 
podNamespace="kube-system" podName="kube-proxy-h9lc6" Jan 13 20:23:17.556192 kubelet[1776]: I0113 20:23:17.556157 1776 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 20:23:17.560524 systemd[1]: Created slice kubepods-besteffort-podb9921655_4cb9_4a65_9a89_ef33887dc2e0.slice - libcontainer container kubepods-besteffort-podb9921655_4cb9_4a65_9a89_ef33887dc2e0.slice. Jan 13 20:23:17.563285 kubelet[1776]: I0113 20:23:17.563181 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec3b1fa-7471-4eee-af7b-54c09770e896-clustermesh-secrets\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563285 kubelet[1776]: I0113 20:23:17.563215 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-kernel\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563285 kubelet[1776]: I0113 20:23:17.563233 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-hubble-tls\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563285 kubelet[1776]: I0113 20:23:17.563260 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52t8\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-kube-api-access-h52t8\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563285 kubelet[1776]: I0113 
20:23:17.563279 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b9921655-4cb9-4a65-9a89-ef33887dc2e0-xtables-lock\") pod \"kube-proxy-h9lc6\" (UID: \"b9921655-4cb9-4a65-9a89-ef33887dc2e0\") " pod="kube-system/kube-proxy-h9lc6" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563295 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-run\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563310 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-hostproc\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563329 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-etc-cni-netd\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563348 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7pq8\" (UniqueName: \"kubernetes.io/projected/b9921655-4cb9-4a65-9a89-ef33887dc2e0-kube-api-access-x7pq8\") pod \"kube-proxy-h9lc6\" (UID: \"b9921655-4cb9-4a65-9a89-ef33887dc2e0\") " pod="kube-system/kube-proxy-h9lc6" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563388 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-xtables-lock\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563454 kubelet[1776]: I0113 20:23:17.563424 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b9921655-4cb9-4a65-9a89-ef33887dc2e0-kube-proxy\") pod \"kube-proxy-h9lc6\" (UID: \"b9921655-4cb9-4a65-9a89-ef33887dc2e0\") " pod="kube-system/kube-proxy-h9lc6" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563445 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-cgroup\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563467 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cni-path\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563487 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-lib-modules\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563504 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-bpf-maps\") pod \"cilium-kqbk2\" (UID: 
\"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563524 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-config-path\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563768 kubelet[1776]: I0113 20:23:17.563557 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-net\") pod \"cilium-kqbk2\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " pod="kube-system/cilium-kqbk2" Jan 13 20:23:17.563897 kubelet[1776]: I0113 20:23:17.563584 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b9921655-4cb9-4a65-9a89-ef33887dc2e0-lib-modules\") pod \"kube-proxy-h9lc6\" (UID: \"b9921655-4cb9-4a65-9a89-ef33887dc2e0\") " pod="kube-system/kube-proxy-h9lc6" Jan 13 20:23:17.576268 systemd[1]: Created slice kubepods-burstable-pod7ec3b1fa_7471_4eee_af7b_54c09770e896.slice - libcontainer container kubepods-burstable-pod7ec3b1fa_7471_4eee_af7b_54c09770e896.slice. 
Jan 13 20:23:17.874634 kubelet[1776]: E0113 20:23:17.874503 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:17.875440 containerd[1467]: time="2025-01-13T20:23:17.875380412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9lc6,Uid:b9921655-4cb9-4a65-9a89-ef33887dc2e0,Namespace:kube-system,Attempt:0,}" Jan 13 20:23:17.888758 kubelet[1776]: E0113 20:23:17.888709 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:17.889285 containerd[1467]: time="2025-01-13T20:23:17.889181402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqbk2,Uid:7ec3b1fa-7471-4eee-af7b-54c09770e896,Namespace:kube-system,Attempt:0,}" Jan 13 20:23:18.429456 containerd[1467]: time="2025-01-13T20:23:18.429409661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:23:18.430566 containerd[1467]: time="2025-01-13T20:23:18.430528824Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:23:18.431378 containerd[1467]: time="2025-01-13T20:23:18.431288259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 20:23:18.432248 containerd[1467]: time="2025-01-13T20:23:18.432184202Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:23:18.432864 containerd[1467]: 
time="2025-01-13T20:23:18.432836697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 20:23:18.435223 containerd[1467]: time="2025-01-13T20:23:18.435166664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 20:23:18.436921 containerd[1467]: time="2025-01-13T20:23:18.436535873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.271206ms" Jan 13 20:23:18.436921 containerd[1467]: time="2025-01-13T20:23:18.436893296Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 561.419496ms" Jan 13 20:23:18.548994 kubelet[1776]: E0113 20:23:18.548901 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:18.551642 containerd[1467]: time="2025-01-13T20:23:18.551543130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:18.551642 containerd[1467]: time="2025-01-13T20:23:18.551613510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:18.551642 containerd[1467]: time="2025-01-13T20:23:18.551624240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:18.551819 containerd[1467]: time="2025-01-13T20:23:18.551698276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:18.551819 containerd[1467]: time="2025-01-13T20:23:18.551741592Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:18.551819 containerd[1467]: time="2025-01-13T20:23:18.551787095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:18.551819 containerd[1467]: time="2025-01-13T20:23:18.551797944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:18.551892 containerd[1467]: time="2025-01-13T20:23:18.551854256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:18.667258 systemd[1]: Started cri-containerd-5ff77488e3d153ea4e4789925d3b27511e45966fc9a0c73f1566cdfcdbff9a64.scope - libcontainer container 5ff77488e3d153ea4e4789925d3b27511e45966fc9a0c73f1566cdfcdbff9a64. Jan 13 20:23:18.669336 systemd[1]: Started cri-containerd-785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef.scope - libcontainer container 785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef. Jan 13 20:23:18.673989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount458762717.mount: Deactivated successfully. 
Jan 13 20:23:18.690303 containerd[1467]: time="2025-01-13T20:23:18.689690468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-h9lc6,Uid:b9921655-4cb9-4a65-9a89-ef33887dc2e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ff77488e3d153ea4e4789925d3b27511e45966fc9a0c73f1566cdfcdbff9a64\"" Jan 13 20:23:18.691206 containerd[1467]: time="2025-01-13T20:23:18.691178182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kqbk2,Uid:7ec3b1fa-7471-4eee-af7b-54c09770e896,Namespace:kube-system,Attempt:0,} returns sandbox id \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\"" Jan 13 20:23:18.692005 kubelet[1776]: E0113 20:23:18.691979 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:18.692218 kubelet[1776]: E0113 20:23:18.692196 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:18.693368 containerd[1467]: time="2025-01-13T20:23:18.693334246Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 20:23:19.549461 kubelet[1776]: E0113 20:23:19.549405 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:19.619035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174258067.mount: Deactivated successfully. 
Jan 13 20:23:19.808430 containerd[1467]: time="2025-01-13T20:23:19.808317122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:19.809410 containerd[1467]: time="2025-01-13T20:23:19.809204410Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Jan 13 20:23:19.810734 containerd[1467]: time="2025-01-13T20:23:19.810675204Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:19.812577 containerd[1467]: time="2025-01-13T20:23:19.812529644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:19.813458 containerd[1467]: time="2025-01-13T20:23:19.813323268Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.119952217s" Jan 13 20:23:19.813458 containerd[1467]: time="2025-01-13T20:23:19.813359660Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 20:23:19.814595 containerd[1467]: time="2025-01-13T20:23:19.814569943Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:23:19.815886 containerd[1467]: time="2025-01-13T20:23:19.815850066Z" level=info msg="CreateContainer within sandbox 
\"5ff77488e3d153ea4e4789925d3b27511e45966fc9a0c73f1566cdfcdbff9a64\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:23:19.829793 containerd[1467]: time="2025-01-13T20:23:19.829752365Z" level=info msg="CreateContainer within sandbox \"5ff77488e3d153ea4e4789925d3b27511e45966fc9a0c73f1566cdfcdbff9a64\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"31e1d71f2ed5452664bb27176ccfe75a0ecb1a7f76ce7492c3a1e1c7e9f8abf3\"" Jan 13 20:23:19.830573 containerd[1467]: time="2025-01-13T20:23:19.830521370Z" level=info msg="StartContainer for \"31e1d71f2ed5452664bb27176ccfe75a0ecb1a7f76ce7492c3a1e1c7e9f8abf3\"" Jan 13 20:23:19.855249 systemd[1]: Started cri-containerd-31e1d71f2ed5452664bb27176ccfe75a0ecb1a7f76ce7492c3a1e1c7e9f8abf3.scope - libcontainer container 31e1d71f2ed5452664bb27176ccfe75a0ecb1a7f76ce7492c3a1e1c7e9f8abf3. Jan 13 20:23:19.879708 containerd[1467]: time="2025-01-13T20:23:19.879598899Z" level=info msg="StartContainer for \"31e1d71f2ed5452664bb27176ccfe75a0ecb1a7f76ce7492c3a1e1c7e9f8abf3\" returns successfully" Jan 13 20:23:20.549714 kubelet[1776]: E0113 20:23:20.549655 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:20.676566 kubelet[1776]: E0113 20:23:20.676533 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:20.685697 kubelet[1776]: I0113 20:23:20.685589 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-h9lc6" podStartSLOduration=3.5640457899999998 podStartE2EDuration="4.685574568s" podCreationTimestamp="2025-01-13 20:23:16 +0000 UTC" firstStartedPulling="2025-01-13 20:23:18.692893011 +0000 UTC m=+3.513411238" lastFinishedPulling="2025-01-13 20:23:19.814421789 +0000 UTC m=+4.634940016" observedRunningTime="2025-01-13 20:23:20.684901134 
+0000 UTC m=+5.505419361" watchObservedRunningTime="2025-01-13 20:23:20.685574568 +0000 UTC m=+5.506092795" Jan 13 20:23:21.550565 kubelet[1776]: E0113 20:23:21.550527 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:21.678948 kubelet[1776]: E0113 20:23:21.678561 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:22.551616 kubelet[1776]: E0113 20:23:22.551567 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:23.552570 kubelet[1776]: E0113 20:23:23.552519 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:24.553016 kubelet[1776]: E0113 20:23:24.552982 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:25.553800 kubelet[1776]: E0113 20:23:25.553754 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:26.554038 kubelet[1776]: E0113 20:23:26.553983 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:27.554195 kubelet[1776]: E0113 20:23:27.554144 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:28.554540 kubelet[1776]: E0113 20:23:28.554499 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:29.554793 kubelet[1776]: E0113 20:23:29.554731 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:30.555439 
kubelet[1776]: E0113 20:23:30.555403 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:30.910260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount890749137.mount: Deactivated successfully. Jan 13 20:23:31.556198 kubelet[1776]: E0113 20:23:31.556158 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:32.147254 containerd[1467]: time="2025-01-13T20:23:32.147197529Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:32.147848 containerd[1467]: time="2025-01-13T20:23:32.147811993Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650914" Jan 13 20:23:32.148657 containerd[1467]: time="2025-01-13T20:23:32.148624179Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:32.150208 containerd[1467]: time="2025-01-13T20:23:32.150167751Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.335564556s" Jan 13 20:23:32.150245 containerd[1467]: time="2025-01-13T20:23:32.150209429Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:23:32.152325 containerd[1467]: time="2025-01-13T20:23:32.152282111Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:23:32.165578 containerd[1467]: time="2025-01-13T20:23:32.165521837Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\"" Jan 13 20:23:32.166093 containerd[1467]: time="2025-01-13T20:23:32.166022175Z" level=info msg="StartContainer for \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\"" Jan 13 20:23:32.184519 systemd[1]: run-containerd-runc-k8s.io-21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263-runc.axHqBd.mount: Deactivated successfully. Jan 13 20:23:32.194278 systemd[1]: Started cri-containerd-21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263.scope - libcontainer container 21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263. Jan 13 20:23:32.217059 containerd[1467]: time="2025-01-13T20:23:32.216944320Z" level=info msg="StartContainer for \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\" returns successfully" Jan 13 20:23:32.321454 systemd[1]: cri-containerd-21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263.scope: Deactivated successfully. 
Jan 13 20:23:32.467746 containerd[1467]: time="2025-01-13T20:23:32.467618592Z" level=info msg="shim disconnected" id=21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263 namespace=k8s.io Jan 13 20:23:32.467746 containerd[1467]: time="2025-01-13T20:23:32.467672738Z" level=warning msg="cleaning up after shim disconnected" id=21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263 namespace=k8s.io Jan 13 20:23:32.467746 containerd[1467]: time="2025-01-13T20:23:32.467682448Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:32.556524 kubelet[1776]: E0113 20:23:32.556491 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:32.693197 kubelet[1776]: E0113 20:23:32.693120 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:32.695061 containerd[1467]: time="2025-01-13T20:23:32.694952225Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:23:32.707212 containerd[1467]: time="2025-01-13T20:23:32.707106839Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\"" Jan 13 20:23:32.707731 containerd[1467]: time="2025-01-13T20:23:32.707556588Z" level=info msg="StartContainer for \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\"" Jan 13 20:23:32.732320 systemd[1]: Started cri-containerd-b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21.scope - libcontainer container b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21. 
Jan 13 20:23:32.750931 containerd[1467]: time="2025-01-13T20:23:32.750872919Z" level=info msg="StartContainer for \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\" returns successfully" Jan 13 20:23:32.773678 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:23:32.773886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:32.773943 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:32.778393 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:32.778571 systemd[1]: cri-containerd-b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21.scope: Deactivated successfully. Jan 13 20:23:32.796556 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:32.797630 containerd[1467]: time="2025-01-13T20:23:32.797565384Z" level=info msg="shim disconnected" id=b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21 namespace=k8s.io Jan 13 20:23:32.797630 containerd[1467]: time="2025-01-13T20:23:32.797628681Z" level=warning msg="cleaning up after shim disconnected" id=b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21 namespace=k8s.io Jan 13 20:23:32.797758 containerd[1467]: time="2025-01-13T20:23:32.797639350Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:33.161366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:33.557109 kubelet[1776]: E0113 20:23:33.556986 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:33.695822 kubelet[1776]: E0113 20:23:33.695624 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:33.697304 containerd[1467]: time="2025-01-13T20:23:33.697269659Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:23:33.724987 containerd[1467]: time="2025-01-13T20:23:33.724898544Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\"" Jan 13 20:23:33.726108 containerd[1467]: time="2025-01-13T20:23:33.725352705Z" level=info msg="StartContainer for \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\"" Jan 13 20:23:33.757325 systemd[1]: Started cri-containerd-323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d.scope - libcontainer container 323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d. Jan 13 20:23:33.786432 containerd[1467]: time="2025-01-13T20:23:33.786381213Z" level=info msg="StartContainer for \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\" returns successfully" Jan 13 20:23:33.794985 systemd[1]: cri-containerd-323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d.scope: Deactivated successfully. 
Jan 13 20:23:33.819321 containerd[1467]: time="2025-01-13T20:23:33.819163577Z" level=info msg="shim disconnected" id=323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d namespace=k8s.io Jan 13 20:23:33.819321 containerd[1467]: time="2025-01-13T20:23:33.819241189Z" level=warning msg="cleaning up after shim disconnected" id=323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d namespace=k8s.io Jan 13 20:23:33.819321 containerd[1467]: time="2025-01-13T20:23:33.819254297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:34.160982 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d-rootfs.mount: Deactivated successfully. Jan 13 20:23:34.557553 kubelet[1776]: E0113 20:23:34.557439 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:34.699294 kubelet[1776]: E0113 20:23:34.699128 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:34.701104 containerd[1467]: time="2025-01-13T20:23:34.701052486Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:23:34.720834 containerd[1467]: time="2025-01-13T20:23:34.720720592Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\"" Jan 13 20:23:34.721506 containerd[1467]: time="2025-01-13T20:23:34.721473934Z" level=info msg="StartContainer for \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\"" Jan 13 20:23:34.748265 systemd[1]: 
Started cri-containerd-f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7.scope - libcontainer container f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7. Jan 13 20:23:34.767069 systemd[1]: cri-containerd-f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7.scope: Deactivated successfully. Jan 13 20:23:34.770108 containerd[1467]: time="2025-01-13T20:23:34.769956047Z" level=info msg="StartContainer for \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\" returns successfully" Jan 13 20:23:34.790721 containerd[1467]: time="2025-01-13T20:23:34.790661477Z" level=info msg="shim disconnected" id=f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7 namespace=k8s.io Jan 13 20:23:34.790721 containerd[1467]: time="2025-01-13T20:23:34.790722670Z" level=warning msg="cleaning up after shim disconnected" id=f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7 namespace=k8s.io Jan 13 20:23:34.790949 containerd[1467]: time="2025-01-13T20:23:34.790731063Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:35.161012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:35.558433 kubelet[1776]: E0113 20:23:35.558319 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:35.703550 kubelet[1776]: E0113 20:23:35.703012 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:35.705576 containerd[1467]: time="2025-01-13T20:23:35.705539257Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:23:35.723720 containerd[1467]: time="2025-01-13T20:23:35.723674320Z" level=info msg="CreateContainer within sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\"" Jan 13 20:23:35.724181 containerd[1467]: time="2025-01-13T20:23:35.724131014Z" level=info msg="StartContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\"" Jan 13 20:23:35.750309 systemd[1]: Started cri-containerd-51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894.scope - libcontainer container 51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894. 
Jan 13 20:23:35.776112 containerd[1467]: time="2025-01-13T20:23:35.776048794Z" level=info msg="StartContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" returns successfully" Jan 13 20:23:35.840137 kubelet[1776]: I0113 20:23:35.839518 1776 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 20:23:36.354116 kernel: Initializing XFRM netlink socket Jan 13 20:23:36.549185 kubelet[1776]: E0113 20:23:36.549133 1776 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:36.559271 kubelet[1776]: E0113 20:23:36.559238 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:36.708743 kubelet[1776]: E0113 20:23:36.708420 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:36.726556 kubelet[1776]: I0113 20:23:36.726487 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kqbk2" podStartSLOduration=7.268332773 podStartE2EDuration="20.726464795s" podCreationTimestamp="2025-01-13 20:23:16 +0000 UTC" firstStartedPulling="2025-01-13 20:23:18.692918484 +0000 UTC m=+3.513436711" lastFinishedPulling="2025-01-13 20:23:32.151050506 +0000 UTC m=+16.971568733" observedRunningTime="2025-01-13 20:23:36.725563645 +0000 UTC m=+21.546081872" watchObservedRunningTime="2025-01-13 20:23:36.726464795 +0000 UTC m=+21.546983022" Jan 13 20:23:37.560274 kubelet[1776]: E0113 20:23:37.560221 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:37.710061 kubelet[1776]: E0113 20:23:37.710027 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:37.933226 kubelet[1776]: I0113 20:23:37.933033 1776 topology_manager.go:215] "Topology Admit Handler" podUID="0a0e0797-6690-42ff-a033-ede618f2a8f4" podNamespace="default" podName="nginx-deployment-85f456d6dd-kq8td" Jan 13 20:23:37.938922 systemd[1]: Created slice kubepods-besteffort-pod0a0e0797_6690_42ff_a033_ede618f2a8f4.slice - libcontainer container kubepods-besteffort-pod0a0e0797_6690_42ff_a033_ede618f2a8f4.slice. Jan 13 20:23:37.971626 systemd-networkd[1401]: cilium_host: Link UP Jan 13 20:23:37.971750 systemd-networkd[1401]: cilium_net: Link UP Jan 13 20:23:37.971871 systemd-networkd[1401]: cilium_net: Gained carrier Jan 13 20:23:37.971997 systemd-networkd[1401]: cilium_host: Gained carrier Jan 13 20:23:37.994046 kubelet[1776]: I0113 20:23:37.993959 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7kt5\" (UniqueName: \"kubernetes.io/projected/0a0e0797-6690-42ff-a033-ede618f2a8f4-kube-api-access-k7kt5\") pod \"nginx-deployment-85f456d6dd-kq8td\" (UID: \"0a0e0797-6690-42ff-a033-ede618f2a8f4\") " pod="default/nginx-deployment-85f456d6dd-kq8td" Jan 13 20:23:38.054057 systemd-networkd[1401]: cilium_vxlan: Link UP Jan 13 20:23:38.054064 systemd-networkd[1401]: cilium_vxlan: Gained carrier Jan 13 20:23:38.206219 systemd-networkd[1401]: cilium_net: Gained IPv6LL Jan 13 20:23:38.241886 containerd[1467]: time="2025-01-13T20:23:38.241829394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kq8td,Uid:0a0e0797-6690-42ff-a033-ede618f2a8f4,Namespace:default,Attempt:0,}" Jan 13 20:23:38.318208 systemd-networkd[1401]: cilium_host: Gained IPv6LL Jan 13 20:23:38.384118 kernel: NET: Registered PF_ALG protocol family Jan 13 20:23:38.560921 kubelet[1776]: E0113 20:23:38.560575 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:38.711284 kubelet[1776]: 
E0113 20:23:38.711256 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:38.961730 systemd-networkd[1401]: lxc_health: Link UP Jan 13 20:23:38.967216 systemd-networkd[1401]: lxc_health: Gained carrier Jan 13 20:23:39.315611 systemd-networkd[1401]: lxc05bcdbb220af: Link UP Jan 13 20:23:39.325411 kernel: eth0: renamed from tmp42031 Jan 13 20:23:39.336650 systemd-networkd[1401]: lxc05bcdbb220af: Gained carrier Jan 13 20:23:39.561296 kubelet[1776]: E0113 20:23:39.561244 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:39.727341 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL Jan 13 20:23:40.558300 systemd-networkd[1401]: lxc_health: Gained IPv6LL Jan 13 20:23:40.561967 kubelet[1776]: E0113 20:23:40.561909 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:40.709794 kubelet[1776]: E0113 20:23:40.709722 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:41.134303 systemd-networkd[1401]: lxc05bcdbb220af: Gained IPv6LL Jan 13 20:23:41.562996 kubelet[1776]: E0113 20:23:41.562886 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:42.563486 kubelet[1776]: E0113 20:23:42.563442 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:42.817406 containerd[1467]: time="2025-01-13T20:23:42.816998373Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:42.817406 containerd[1467]: time="2025-01-13T20:23:42.817319668Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:42.817406 containerd[1467]: time="2025-01-13T20:23:42.817332308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:42.817986 containerd[1467]: time="2025-01-13T20:23:42.817406352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:42.845265 systemd[1]: Started cri-containerd-4203186082bddf6942a5dea68833fb83fa9109c9a523052766d4bb5382f8222b.scope - libcontainer container 4203186082bddf6942a5dea68833fb83fa9109c9a523052766d4bb5382f8222b. Jan 13 20:23:42.854391 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:23:42.868641 containerd[1467]: time="2025-01-13T20:23:42.868572263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-kq8td,Uid:0a0e0797-6690-42ff-a033-ede618f2a8f4,Namespace:default,Attempt:0,} returns sandbox id \"4203186082bddf6942a5dea68833fb83fa9109c9a523052766d4bb5382f8222b\"" Jan 13 20:23:42.870536 containerd[1467]: time="2025-01-13T20:23:42.870476513Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:23:43.563875 kubelet[1776]: E0113 20:23:43.563832 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:44.564361 kubelet[1776]: E0113 20:23:44.564308 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:44.655275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228981680.mount: Deactivated 
successfully. Jan 13 20:23:44.757534 kubelet[1776]: I0113 20:23:44.757502 1776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 13 20:23:44.758660 kubelet[1776]: E0113 20:23:44.758633 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:45.391582 containerd[1467]: time="2025-01-13T20:23:45.391532295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:45.392642 containerd[1467]: time="2025-01-13T20:23:45.392601178Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 20:23:45.394132 containerd[1467]: time="2025-01-13T20:23:45.393424211Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:45.397465 containerd[1467]: time="2025-01-13T20:23:45.397416572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:45.398588 containerd[1467]: time="2025-01-13T20:23:45.398556618Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 2.528049224s" Jan 13 20:23:45.398588 containerd[1467]: time="2025-01-13T20:23:45.398588419Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:23:45.400633 
containerd[1467]: time="2025-01-13T20:23:45.400606101Z" level=info msg="CreateContainer within sandbox \"4203186082bddf6942a5dea68833fb83fa9109c9a523052766d4bb5382f8222b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 20:23:45.414082 containerd[1467]: time="2025-01-13T20:23:45.414016640Z" level=info msg="CreateContainer within sandbox \"4203186082bddf6942a5dea68833fb83fa9109c9a523052766d4bb5382f8222b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"28ceeb22d15e9ab6c9d6f9a147d8010b460ca862427c705014722655e34661c9\"" Jan 13 20:23:45.414633 containerd[1467]: time="2025-01-13T20:23:45.414599104Z" level=info msg="StartContainer for \"28ceeb22d15e9ab6c9d6f9a147d8010b460ca862427c705014722655e34661c9\"" Jan 13 20:23:45.438239 systemd[1]: Started cri-containerd-28ceeb22d15e9ab6c9d6f9a147d8010b460ca862427c705014722655e34661c9.scope - libcontainer container 28ceeb22d15e9ab6c9d6f9a147d8010b460ca862427c705014722655e34661c9. Jan 13 20:23:45.460271 containerd[1467]: time="2025-01-13T20:23:45.460223740Z" level=info msg="StartContainer for \"28ceeb22d15e9ab6c9d6f9a147d8010b460ca862427c705014722655e34661c9\" returns successfully" Jan 13 20:23:45.565492 kubelet[1776]: E0113 20:23:45.565435 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:45.723891 kubelet[1776]: E0113 20:23:45.723785 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:45.736630 kubelet[1776]: I0113 20:23:45.736551 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-kq8td" podStartSLOduration=6.206951452 podStartE2EDuration="8.736536982s" podCreationTimestamp="2025-01-13 20:23:37 +0000 UTC" firstStartedPulling="2025-01-13 20:23:42.869854924 +0000 UTC m=+27.690373111" lastFinishedPulling="2025-01-13 
20:23:45.399440414 +0000 UTC m=+30.219958641" observedRunningTime="2025-01-13 20:23:45.736363175 +0000 UTC m=+30.556881402" watchObservedRunningTime="2025-01-13 20:23:45.736536982 +0000 UTC m=+30.557055169" Jan 13 20:23:46.565928 kubelet[1776]: E0113 20:23:46.565897 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:47.568948 kubelet[1776]: E0113 20:23:47.566562 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:48.567320 kubelet[1776]: E0113 20:23:48.567152 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:49.567837 kubelet[1776]: E0113 20:23:49.567782 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:50.312208 kubelet[1776]: I0113 20:23:50.310346 1776 topology_manager.go:215] "Topology Admit Handler" podUID="d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 20:23:50.319700 systemd[1]: Created slice kubepods-besteffort-podd352cc0a_a9f7_4ee5_ac2d_bc01a54b76eb.slice - libcontainer container kubepods-besteffort-podd352cc0a_a9f7_4ee5_ac2d_bc01a54b76eb.slice. 
Jan 13 20:23:50.359923 kubelet[1776]: I0113 20:23:50.359866 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76nxs\" (UniqueName: \"kubernetes.io/projected/d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb-kube-api-access-76nxs\") pod \"nfs-server-provisioner-0\" (UID: \"d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb\") " pod="default/nfs-server-provisioner-0" Jan 13 20:23:50.359923 kubelet[1776]: I0113 20:23:50.359919 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb-data\") pod \"nfs-server-provisioner-0\" (UID: \"d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb\") " pod="default/nfs-server-provisioner-0" Jan 13 20:23:50.568433 kubelet[1776]: E0113 20:23:50.568286 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:50.623685 containerd[1467]: time="2025-01-13T20:23:50.623608374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb,Namespace:default,Attempt:0,}" Jan 13 20:23:50.650671 systemd-networkd[1401]: lxc731af5f493c8: Link UP Jan 13 20:23:50.660286 kernel: eth0: renamed from tmp62fa7 Jan 13 20:23:50.665789 systemd-networkd[1401]: lxc731af5f493c8: Gained carrier Jan 13 20:23:50.863909 containerd[1467]: time="2025-01-13T20:23:50.863733628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:50.864182 containerd[1467]: time="2025-01-13T20:23:50.863863352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:50.864182 containerd[1467]: time="2025-01-13T20:23:50.863882912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:50.864182 containerd[1467]: time="2025-01-13T20:23:50.863964075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:50.888297 systemd[1]: Started cri-containerd-62fa7037be93cce93c1e9b11d889a447e12abc547506a60e8b42f3d57a38bd2f.scope - libcontainer container 62fa7037be93cce93c1e9b11d889a447e12abc547506a60e8b42f3d57a38bd2f. Jan 13 20:23:50.898395 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:23:50.914468 containerd[1467]: time="2025-01-13T20:23:50.914411673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d352cc0a-a9f7-4ee5-ac2d-bc01a54b76eb,Namespace:default,Attempt:0,} returns sandbox id \"62fa7037be93cce93c1e9b11d889a447e12abc547506a60e8b42f3d57a38bd2f\"" Jan 13 20:23:50.917568 containerd[1467]: time="2025-01-13T20:23:50.916251169Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 20:23:51.568446 kubelet[1776]: E0113 20:23:51.568407 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:52.014472 systemd-networkd[1401]: lxc731af5f493c8: Gained IPv6LL Jan 13 20:23:52.143198 update_engine[1456]: I20250113 20:23:52.143136 1456 update_attempter.cc:509] Updating boot flags... Jan 13 20:23:52.167380 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3009) Jan 13 20:23:52.187134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3009) Jan 13 20:23:52.536026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2913428407.mount: Deactivated successfully. 
Jan 13 20:23:52.569035 kubelet[1776]: E0113 20:23:52.568983 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:53.569687 kubelet[1776]: E0113 20:23:53.569643 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:53.868519 containerd[1467]: time="2025-01-13T20:23:53.868284036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:53.871143 containerd[1467]: time="2025-01-13T20:23:53.869382386Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 13 20:23:53.871143 containerd[1467]: time="2025-01-13T20:23:53.870646379Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:53.873356 containerd[1467]: time="2025-01-13T20:23:53.873273689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:53.874523 containerd[1467]: time="2025-01-13T20:23:53.874492801Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.958198191s" Jan 13 20:23:53.874596 containerd[1467]: time="2025-01-13T20:23:53.874526922Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 20:23:53.877474 containerd[1467]: time="2025-01-13T20:23:53.877435239Z" level=info msg="CreateContainer within sandbox \"62fa7037be93cce93c1e9b11d889a447e12abc547506a60e8b42f3d57a38bd2f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 20:23:53.888439 containerd[1467]: time="2025-01-13T20:23:53.888393330Z" level=info msg="CreateContainer within sandbox \"62fa7037be93cce93c1e9b11d889a447e12abc547506a60e8b42f3d57a38bd2f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"5f08bf67395aa2d3a103f50efdb80d94e5fab8e6de037184f9b820400c7c057e\"" Jan 13 20:23:53.889099 containerd[1467]: time="2025-01-13T20:23:53.888984066Z" level=info msg="StartContainer for \"5f08bf67395aa2d3a103f50efdb80d94e5fab8e6de037184f9b820400c7c057e\"" Jan 13 20:23:53.960260 systemd[1]: Started cri-containerd-5f08bf67395aa2d3a103f50efdb80d94e5fab8e6de037184f9b820400c7c057e.scope - libcontainer container 5f08bf67395aa2d3a103f50efdb80d94e5fab8e6de037184f9b820400c7c057e. 
Jan 13 20:23:54.012952 containerd[1467]: time="2025-01-13T20:23:54.012899544Z" level=info msg="StartContainer for \"5f08bf67395aa2d3a103f50efdb80d94e5fab8e6de037184f9b820400c7c057e\" returns successfully" Jan 13 20:23:54.570708 kubelet[1776]: E0113 20:23:54.570641 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:54.751746 kubelet[1776]: I0113 20:23:54.751674 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.791830499 podStartE2EDuration="4.751659135s" podCreationTimestamp="2025-01-13 20:23:50 +0000 UTC" firstStartedPulling="2025-01-13 20:23:50.915751594 +0000 UTC m=+35.736269781" lastFinishedPulling="2025-01-13 20:23:53.87558019 +0000 UTC m=+38.696098417" observedRunningTime="2025-01-13 20:23:54.751005199 +0000 UTC m=+39.571523426" watchObservedRunningTime="2025-01-13 20:23:54.751659135 +0000 UTC m=+39.572177362" Jan 13 20:23:55.571437 kubelet[1776]: E0113 20:23:55.571383 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:56.548427 kubelet[1776]: E0113 20:23:56.548382 1776 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:56.571800 kubelet[1776]: E0113 20:23:56.571772 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:57.572082 kubelet[1776]: E0113 20:23:57.572028 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:58.572235 kubelet[1776]: E0113 20:23:58.572186 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:59.573009 kubelet[1776]: E0113 20:23:59.572962 1776 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:00.573869 kubelet[1776]: E0113 20:24:00.573821 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:01.574569 kubelet[1776]: E0113 20:24:01.574521 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:02.574687 kubelet[1776]: E0113 20:24:02.574629 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:03.574953 kubelet[1776]: E0113 20:24:03.574906 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:04.295167 kubelet[1776]: I0113 20:24:04.295075 1776 topology_manager.go:215] "Topology Admit Handler" podUID="d9e9c8f4-d6ff-497e-bea7-7331625f5be3" podNamespace="default" podName="test-pod-1" Jan 13 20:24:04.301043 systemd[1]: Created slice kubepods-besteffort-podd9e9c8f4_d6ff_497e_bea7_7331625f5be3.slice - libcontainer container kubepods-besteffort-podd9e9c8f4_d6ff_497e_bea7_7331625f5be3.slice. 
Jan 13 20:24:04.346750 kubelet[1776]: I0113 20:24:04.346476 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqgc\" (UniqueName: \"kubernetes.io/projected/d9e9c8f4-d6ff-497e-bea7-7331625f5be3-kube-api-access-hjqgc\") pod \"test-pod-1\" (UID: \"d9e9c8f4-d6ff-497e-bea7-7331625f5be3\") " pod="default/test-pod-1" Jan 13 20:24:04.346750 kubelet[1776]: I0113 20:24:04.346525 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-60274dba-4955-4692-8379-a4a7af775e25\" (UniqueName: \"kubernetes.io/nfs/d9e9c8f4-d6ff-497e-bea7-7331625f5be3-pvc-60274dba-4955-4692-8379-a4a7af775e25\") pod \"test-pod-1\" (UID: \"d9e9c8f4-d6ff-497e-bea7-7331625f5be3\") " pod="default/test-pod-1" Jan 13 20:24:04.497871 kernel: FS-Cache: Loaded Jan 13 20:24:04.523597 kernel: RPC: Registered named UNIX socket transport module. Jan 13 20:24:04.523701 kernel: RPC: Registered udp transport module. Jan 13 20:24:04.523731 kernel: RPC: Registered tcp transport module. Jan 13 20:24:04.524579 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 20:24:04.524604 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 13 20:24:04.575366 kubelet[1776]: E0113 20:24:04.575242 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:04.747112 kernel: NFS: Registering the id_resolver key type Jan 13 20:24:04.747216 kernel: Key type id_resolver registered Jan 13 20:24:04.747234 kernel: Key type id_legacy registered Jan 13 20:24:04.772386 nfsidmap[3185]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:24:04.776044 nfsidmap[3188]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 20:24:04.905179 containerd[1467]: time="2025-01-13T20:24:04.905055906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d9e9c8f4-d6ff-497e-bea7-7331625f5be3,Namespace:default,Attempt:0,}" Jan 13 20:24:04.932695 systemd-networkd[1401]: lxc883b126e243b: Link UP Jan 13 20:24:04.940118 kernel: eth0: renamed from tmpd2687 Jan 13 20:24:04.948951 systemd-networkd[1401]: lxc883b126e243b: Gained carrier Jan 13 20:24:05.151864 containerd[1467]: time="2025-01-13T20:24:05.151740416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:05.151864 containerd[1467]: time="2025-01-13T20:24:05.151831697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:05.151864 containerd[1467]: time="2025-01-13T20:24:05.151845698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:05.152147 containerd[1467]: time="2025-01-13T20:24:05.151941979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:05.173301 systemd[1]: Started cri-containerd-d2687608eab62e301d8c47878bb05991cd7dc63ab80faa7989e29c5027fd73d2.scope - libcontainer container d2687608eab62e301d8c47878bb05991cd7dc63ab80faa7989e29c5027fd73d2. Jan 13 20:24:05.188212 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:24:05.228642 containerd[1467]: time="2025-01-13T20:24:05.228338892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d9e9c8f4-d6ff-497e-bea7-7331625f5be3,Namespace:default,Attempt:0,} returns sandbox id \"d2687608eab62e301d8c47878bb05991cd7dc63ab80faa7989e29c5027fd73d2\"" Jan 13 20:24:05.230995 containerd[1467]: time="2025-01-13T20:24:05.230824650Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:24:05.576162 kubelet[1776]: E0113 20:24:05.576110 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:05.857847 containerd[1467]: time="2025-01-13T20:24:05.857712795Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:24:05.858539 containerd[1467]: time="2025-01-13T20:24:05.858486127Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 20:24:05.863043 containerd[1467]: time="2025-01-13T20:24:05.862841315Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 631.976904ms" Jan 13 20:24:05.863043 containerd[1467]: time="2025-01-13T20:24:05.862883236Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" 
returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 20:24:05.864903 containerd[1467]: time="2025-01-13T20:24:05.864868067Z" level=info msg="CreateContainer within sandbox \"d2687608eab62e301d8c47878bb05991cd7dc63ab80faa7989e29c5027fd73d2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 20:24:05.878301 containerd[1467]: time="2025-01-13T20:24:05.878239435Z" level=info msg="CreateContainer within sandbox \"d2687608eab62e301d8c47878bb05991cd7dc63ab80faa7989e29c5027fd73d2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99\"" Jan 13 20:24:05.878807 containerd[1467]: time="2025-01-13T20:24:05.878739043Z" level=info msg="StartContainer for \"a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99\"" Jan 13 20:24:05.916338 systemd[1]: Started cri-containerd-a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99.scope - libcontainer container a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99. Jan 13 20:24:05.940689 containerd[1467]: time="2025-01-13T20:24:05.940457567Z" level=info msg="StartContainer for \"a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99\" returns successfully" Jan 13 20:24:06.350351 systemd-networkd[1401]: lxc883b126e243b: Gained IPv6LL Jan 13 20:24:06.460499 systemd[1]: run-containerd-runc-k8s.io-a73b6badb019e71f012b148886e64c83957128b911831bd24472f2eb8117ff99-runc.nquako.mount: Deactivated successfully. 
Jan 13 20:24:06.576778 kubelet[1776]: E0113 20:24:06.576701 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:06.789016 kubelet[1776]: I0113 20:24:06.788960 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.155715746 podStartE2EDuration="16.78894499s" podCreationTimestamp="2025-01-13 20:23:50 +0000 UTC" firstStartedPulling="2025-01-13 20:24:05.230313402 +0000 UTC m=+50.050831589" lastFinishedPulling="2025-01-13 20:24:05.863542606 +0000 UTC m=+50.684060833" observedRunningTime="2025-01-13 20:24:06.788851308 +0000 UTC m=+51.609369535" watchObservedRunningTime="2025-01-13 20:24:06.78894499 +0000 UTC m=+51.609463217" Jan 13 20:24:07.577098 kubelet[1776]: E0113 20:24:07.577046 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:08.577844 kubelet[1776]: E0113 20:24:08.577784 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:09.578642 kubelet[1776]: E0113 20:24:09.578595 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:10.578823 kubelet[1776]: E0113 20:24:10.578753 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:11.579288 kubelet[1776]: E0113 20:24:11.579232 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:12.579707 kubelet[1776]: E0113 20:24:12.579640 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:13.580615 kubelet[1776]: E0113 20:24:13.580563 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:14.007824 containerd[1467]: time="2025-01-13T20:24:14.007685806Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 20:24:14.013358 containerd[1467]: time="2025-01-13T20:24:14.013314030Z" level=info msg="StopContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" with timeout 2 (s)" Jan 13 20:24:14.013934 containerd[1467]: time="2025-01-13T20:24:14.013826916Z" level=info msg="Stop container \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" with signal terminated" Jan 13 20:24:14.024347 systemd-networkd[1401]: lxc_health: Link DOWN Jan 13 20:24:14.024355 systemd-networkd[1401]: lxc_health: Lost carrier Jan 13 20:24:14.052645 systemd[1]: cri-containerd-51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894.scope: Deactivated successfully. Jan 13 20:24:14.053022 systemd[1]: cri-containerd-51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894.scope: Consumed 6.567s CPU time. Jan 13 20:24:14.073383 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894-rootfs.mount: Deactivated successfully. 
Jan 13 20:24:14.086784 containerd[1467]: time="2025-01-13T20:24:14.086719154Z" level=info msg="shim disconnected" id=51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894 namespace=k8s.io Jan 13 20:24:14.086784 containerd[1467]: time="2025-01-13T20:24:14.086777954Z" level=warning msg="cleaning up after shim disconnected" id=51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894 namespace=k8s.io Jan 13 20:24:14.086784 containerd[1467]: time="2025-01-13T20:24:14.086785434Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:14.100046 containerd[1467]: time="2025-01-13T20:24:14.100002466Z" level=info msg="StopContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" returns successfully" Jan 13 20:24:14.100792 containerd[1467]: time="2025-01-13T20:24:14.100745155Z" level=info msg="StopPodSandbox for \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\"" Jan 13 20:24:14.104184 containerd[1467]: time="2025-01-13T20:24:14.104141514Z" level=info msg="Container to stop \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:14.104184 containerd[1467]: time="2025-01-13T20:24:14.104177434Z" level=info msg="Container to stop \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:14.104279 containerd[1467]: time="2025-01-13T20:24:14.104188354Z" level=info msg="Container to stop \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:14.104279 containerd[1467]: time="2025-01-13T20:24:14.104197474Z" level=info msg="Container to stop \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:14.104279 
containerd[1467]: time="2025-01-13T20:24:14.104205435Z" level=info msg="Container to stop \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 20:24:14.105662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef-shm.mount: Deactivated successfully. Jan 13 20:24:14.112861 systemd[1]: cri-containerd-785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef.scope: Deactivated successfully. Jan 13 20:24:14.137329 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef-rootfs.mount: Deactivated successfully. Jan 13 20:24:14.140518 containerd[1467]: time="2025-01-13T20:24:14.140298849Z" level=info msg="shim disconnected" id=785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef namespace=k8s.io Jan 13 20:24:14.140518 containerd[1467]: time="2025-01-13T20:24:14.140358010Z" level=warning msg="cleaning up after shim disconnected" id=785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef namespace=k8s.io Jan 13 20:24:14.140518 containerd[1467]: time="2025-01-13T20:24:14.140367050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:24:14.155799 containerd[1467]: time="2025-01-13T20:24:14.155729146Z" level=info msg="TearDown network for sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" successfully" Jan 13 20:24:14.155799 containerd[1467]: time="2025-01-13T20:24:14.155768507Z" level=info msg="StopPodSandbox for \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" returns successfully" Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292374 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52t8\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-kube-api-access-h52t8\") pod 
\"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292421 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-net\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292444 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-config-path\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292461 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-hubble-tls\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292478 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cni-path\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.293912 kubelet[1776]: I0113 20:24:14.292491 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-lib-modules\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294215 kubelet[1776]: I0113 20:24:14.292507 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-bpf-maps\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294215 kubelet[1776]: I0113 20:24:14.292522 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-etc-cni-netd\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294215 kubelet[1776]: I0113 20:24:14.292518 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294215 kubelet[1776]: I0113 20:24:14.292559 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294215 kubelet[1776]: I0113 20:24:14.292536 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-kernel\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292643 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-cgroup\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292669 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec3b1fa-7471-4eee-af7b-54c09770e896-clustermesh-secrets\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292705 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-run\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292720 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-hostproc\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292735 1776 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-xtables-lock\") pod \"7ec3b1fa-7471-4eee-af7b-54c09770e896\" (UID: \"7ec3b1fa-7471-4eee-af7b-54c09770e896\") " Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292766 1776 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-net\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.294349 kubelet[1776]: I0113 20:24:14.292777 1776 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-host-proc-sys-kernel\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.294512 kubelet[1776]: I0113 20:24:14.292795 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294512 kubelet[1776]: I0113 20:24:14.292811 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294512 kubelet[1776]: I0113 20:24:14.293708 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294512 kubelet[1776]: I0113 20:24:14.293798 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-hostproc" (OuterVolumeSpecName: "hostproc") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294512 kubelet[1776]: I0113 20:24:14.293850 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294651 kubelet[1776]: I0113 20:24:14.294484 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:24:14.294651 kubelet[1776]: I0113 20:24:14.294535 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294651 kubelet[1776]: I0113 20:24:14.294553 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.294651 kubelet[1776]: I0113 20:24:14.294573 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cni-path" (OuterVolumeSpecName: "cni-path") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 20:24:14.299738 systemd[1]: var-lib-kubelet-pods-7ec3b1fa\x2d7471\x2d4eee\x2daf7b\x2d54c09770e896-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 20:24:14.299838 systemd[1]: var-lib-kubelet-pods-7ec3b1fa\x2d7471\x2d4eee\x2daf7b\x2d54c09770e896-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 20:24:14.300270 kubelet[1776]: I0113 20:24:14.299916 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-kube-api-access-h52t8" (OuterVolumeSpecName: "kube-api-access-h52t8") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "kube-api-access-h52t8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:24:14.300734 kubelet[1776]: I0113 20:24:14.300519 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 20:24:14.300940 kubelet[1776]: I0113 20:24:14.300917 1776 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ec3b1fa-7471-4eee-af7b-54c09770e896-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7ec3b1fa-7471-4eee-af7b-54c09770e896" (UID: "7ec3b1fa-7471-4eee-af7b-54c09770e896"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 20:24:14.393228 kubelet[1776]: I0113 20:24:14.393162 1776 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-bpf-maps\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393228 kubelet[1776]: I0113 20:24:14.393211 1776 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-etc-cni-netd\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393228 kubelet[1776]: I0113 20:24:14.393221 1776 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cni-path\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393228 kubelet[1776]: I0113 20:24:14.393229 1776 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-lib-modules\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 
20:24:14.393228 kubelet[1776]: I0113 20:24:14.393240 1776 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7ec3b1fa-7471-4eee-af7b-54c09770e896-clustermesh-secrets\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393249 1776 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-cgroup\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393257 1776 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393265 1776 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-run\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393272 1776 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7ec3b1fa-7471-4eee-af7b-54c09770e896-hostproc\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393279 1776 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7ec3b1fa-7471-4eee-af7b-54c09770e896-cilium-config-path\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393287 1776 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-hubble-tls\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.393464 kubelet[1776]: I0113 20:24:14.393296 1776 reconciler_common.go:289] "Volume detached for volume 
\"kube-api-access-h52t8\" (UniqueName: \"kubernetes.io/projected/7ec3b1fa-7471-4eee-af7b-54c09770e896-kube-api-access-h52t8\") on node \"10.0.0.113\" DevicePath \"\"" Jan 13 20:24:14.581002 kubelet[1776]: E0113 20:24:14.580878 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:14.670495 systemd[1]: Removed slice kubepods-burstable-pod7ec3b1fa_7471_4eee_af7b_54c09770e896.slice - libcontainer container kubepods-burstable-pod7ec3b1fa_7471_4eee_af7b_54c09770e896.slice. Jan 13 20:24:14.670580 systemd[1]: kubepods-burstable-pod7ec3b1fa_7471_4eee_af7b_54c09770e896.slice: Consumed 6.761s CPU time. Jan 13 20:24:14.795718 kubelet[1776]: I0113 20:24:14.795611 1776 scope.go:117] "RemoveContainer" containerID="51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894" Jan 13 20:24:14.797893 containerd[1467]: time="2025-01-13T20:24:14.797842044Z" level=info msg="RemoveContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\"" Jan 13 20:24:14.801607 containerd[1467]: time="2025-01-13T20:24:14.801552046Z" level=info msg="RemoveContainer for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" returns successfully" Jan 13 20:24:14.802231 kubelet[1776]: I0113 20:24:14.801836 1776 scope.go:117] "RemoveContainer" containerID="f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7" Jan 13 20:24:14.803431 containerd[1467]: time="2025-01-13T20:24:14.803110504Z" level=info msg="RemoveContainer for \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\"" Jan 13 20:24:14.806337 containerd[1467]: time="2025-01-13T20:24:14.806304701Z" level=info msg="RemoveContainer for \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\" returns successfully" Jan 13 20:24:14.806646 kubelet[1776]: I0113 20:24:14.806625 1776 scope.go:117] "RemoveContainer" containerID="323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d" Jan 13 
20:24:14.807929 containerd[1467]: time="2025-01-13T20:24:14.807903159Z" level=info msg="RemoveContainer for \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\"" Jan 13 20:24:14.811405 containerd[1467]: time="2025-01-13T20:24:14.811356239Z" level=info msg="RemoveContainer for \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\" returns successfully" Jan 13 20:24:14.815240 kubelet[1776]: I0113 20:24:14.815215 1776 scope.go:117] "RemoveContainer" containerID="b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21" Jan 13 20:24:14.820053 containerd[1467]: time="2025-01-13T20:24:14.820004658Z" level=info msg="RemoveContainer for \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\"" Jan 13 20:24:14.822608 containerd[1467]: time="2025-01-13T20:24:14.822519887Z" level=info msg="RemoveContainer for \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\" returns successfully" Jan 13 20:24:14.822908 kubelet[1776]: I0113 20:24:14.822785 1776 scope.go:117] "RemoveContainer" containerID="21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263" Jan 13 20:24:14.824107 containerd[1467]: time="2025-01-13T20:24:14.823874223Z" level=info msg="RemoveContainer for \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\"" Jan 13 20:24:14.826018 containerd[1467]: time="2025-01-13T20:24:14.825981207Z" level=info msg="RemoveContainer for \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\" returns successfully" Jan 13 20:24:14.826191 kubelet[1776]: I0113 20:24:14.826168 1776 scope.go:117] "RemoveContainer" containerID="51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894" Jan 13 20:24:14.826491 containerd[1467]: time="2025-01-13T20:24:14.826405492Z" level=error msg="ContainerStatus for \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\": not found" Jan 13 20:24:14.826608 kubelet[1776]: E0113 20:24:14.826560 1776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\": not found" containerID="51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894" Jan 13 20:24:14.826695 kubelet[1776]: I0113 20:24:14.826616 1776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894"} err="failed to get container status \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\": rpc error: code = NotFound desc = an error occurred when try to find container \"51ff7807528c779c14179baed580fc98a69061e0863e294d8c823927402ba894\": not found" Jan 13 20:24:14.826738 kubelet[1776]: I0113 20:24:14.826698 1776 scope.go:117] "RemoveContainer" containerID="f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7" Jan 13 20:24:14.826946 containerd[1467]: time="2025-01-13T20:24:14.826893538Z" level=error msg="ContainerStatus for \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\": not found" Jan 13 20:24:14.827064 kubelet[1776]: E0113 20:24:14.827043 1776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\": not found" containerID="f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7" Jan 13 20:24:14.827129 kubelet[1776]: I0113 20:24:14.827071 1776 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7"} err="failed to get container status \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f0c139c96ebcb1271583c1392fea1da87fdc1991ca8566de8d8516fa3129e0e7\": not found" Jan 13 20:24:14.827129 kubelet[1776]: I0113 20:24:14.827104 1776 scope.go:117] "RemoveContainer" containerID="323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d" Jan 13 20:24:14.827336 containerd[1467]: time="2025-01-13T20:24:14.827305982Z" level=error msg="ContainerStatus for \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\": not found" Jan 13 20:24:14.827536 kubelet[1776]: E0113 20:24:14.827455 1776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\": not found" containerID="323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d" Jan 13 20:24:14.827575 kubelet[1776]: I0113 20:24:14.827548 1776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d"} err="failed to get container status \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\": rpc error: code = NotFound desc = an error occurred when try to find container \"323d97f645984a6086ef24f2a31cb175ba20114503e10143afc7b4ca9d4f002d\": not found" Jan 13 20:24:14.827575 kubelet[1776]: I0113 20:24:14.827566 1776 scope.go:117] "RemoveContainer" containerID="b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21" Jan 13 20:24:14.827779 
containerd[1467]: time="2025-01-13T20:24:14.827737987Z" level=error msg="ContainerStatus for \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\": not found" Jan 13 20:24:14.827876 kubelet[1776]: E0113 20:24:14.827857 1776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\": not found" containerID="b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21" Jan 13 20:24:14.827917 kubelet[1776]: I0113 20:24:14.827881 1776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21"} err="failed to get container status \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\": rpc error: code = NotFound desc = an error occurred when try to find container \"b39e05ca91cc507667942abac8a844603da86f57d4902868ffe7090785b27e21\": not found" Jan 13 20:24:14.827917 kubelet[1776]: I0113 20:24:14.827896 1776 scope.go:117] "RemoveContainer" containerID="21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263" Jan 13 20:24:14.828244 containerd[1467]: time="2025-01-13T20:24:14.828144952Z" level=error msg="ContainerStatus for \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\": not found" Jan 13 20:24:14.828315 kubelet[1776]: E0113 20:24:14.828271 1776 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\": not found" containerID="21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263" Jan 13 20:24:14.828315 kubelet[1776]: I0113 20:24:14.828288 1776 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263"} err="failed to get container status \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\": rpc error: code = NotFound desc = an error occurred when try to find container \"21472d1fcc19a11f1a9bba84b2d0f4457bef3382e048cd2b2e93dab6ffee9263\": not found" Jan 13 20:24:14.987930 systemd[1]: var-lib-kubelet-pods-7ec3b1fa\x2d7471\x2d4eee\x2daf7b\x2d54c09770e896-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh52t8.mount: Deactivated successfully. Jan 13 20:24:15.581715 kubelet[1776]: E0113 20:24:15.581655 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:16.548679 kubelet[1776]: E0113 20:24:16.548640 1776 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:16.563376 containerd[1467]: time="2025-01-13T20:24:16.563339049Z" level=info msg="StopPodSandbox for \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\"" Jan 13 20:24:16.563720 containerd[1467]: time="2025-01-13T20:24:16.563436450Z" level=info msg="TearDown network for sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" successfully" Jan 13 20:24:16.563720 containerd[1467]: time="2025-01-13T20:24:16.563447530Z" level=info msg="StopPodSandbox for \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" returns successfully" Jan 13 20:24:16.564902 containerd[1467]: time="2025-01-13T20:24:16.563935535Z" level=info msg="RemovePodSandbox for 
\"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\"" Jan 13 20:24:16.564902 containerd[1467]: time="2025-01-13T20:24:16.563967655Z" level=info msg="Forcibly stopping sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\"" Jan 13 20:24:16.564902 containerd[1467]: time="2025-01-13T20:24:16.564012976Z" level=info msg="TearDown network for sandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" successfully" Jan 13 20:24:16.566214 containerd[1467]: time="2025-01-13T20:24:16.566179239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 13 20:24:16.566341 containerd[1467]: time="2025-01-13T20:24:16.566322841Z" level=info msg="RemovePodSandbox \"785b4cad46f1cda7d7b21254c3118b1afef8f191ab54bbfffa3df70173209bef\" returns successfully" Jan 13 20:24:16.582878 kubelet[1776]: E0113 20:24:16.582814 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:16.667203 kubelet[1776]: I0113 20:24:16.667172 1776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" path="/var/lib/kubelet/pods/7ec3b1fa-7471-4eee-af7b-54c09770e896/volumes" Jan 13 20:24:16.697730 kubelet[1776]: E0113 20:24:16.697688 1776 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:24:17.039206 kubelet[1776]: I0113 20:24:17.039162 1776 topology_manager.go:215] "Topology Admit Handler" podUID="5b607210-4125-47cd-b1d2-88b6b6f4353a" podNamespace="kube-system" podName="cilium-operator-599987898-qgvvs" Jan 13 20:24:17.039335 kubelet[1776]: E0113 20:24:17.039214 1776 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="apply-sysctl-overwrites" Jan 13 20:24:17.039335 kubelet[1776]: E0113 20:24:17.039225 1776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="clean-cilium-state" Jan 13 20:24:17.039335 kubelet[1776]: E0113 20:24:17.039230 1776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="cilium-agent" Jan 13 20:24:17.039335 kubelet[1776]: E0113 20:24:17.039237 1776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="mount-cgroup" Jan 13 20:24:17.039335 kubelet[1776]: E0113 20:24:17.039244 1776 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="mount-bpf-fs" Jan 13 20:24:17.039335 kubelet[1776]: I0113 20:24:17.039262 1776 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ec3b1fa-7471-4eee-af7b-54c09770e896" containerName="cilium-agent" Jan 13 20:24:17.044617 systemd[1]: Created slice kubepods-besteffort-pod5b607210_4125_47cd_b1d2_88b6b6f4353a.slice - libcontainer container kubepods-besteffort-pod5b607210_4125_47cd_b1d2_88b6b6f4353a.slice. 
Jan 13 20:24:17.045970 kubelet[1776]: I0113 20:24:17.045766 1776 topology_manager.go:215] "Topology Admit Handler" podUID="a517dada-13bc-4779-9a4b-37eef7310ca4" podNamespace="kube-system" podName="cilium-wztvc" Jan 13 20:24:17.046197 kubelet[1776]: W0113 20:24:17.046176 1776 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.113" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.113' and this object Jan 13 20:24:17.046573 kubelet[1776]: E0113 20:24:17.046555 1776 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.113" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.113' and this object Jan 13 20:24:17.050959 systemd[1]: Created slice kubepods-burstable-poda517dada_13bc_4779_9a4b_37eef7310ca4.slice - libcontainer container kubepods-burstable-poda517dada_13bc_4779_9a4b_37eef7310ca4.slice. 
Jan 13 20:24:17.107022 kubelet[1776]: I0113 20:24:17.106969 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-host-proc-sys-kernel\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107140 kubelet[1776]: I0113 20:24:17.107054 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a517dada-13bc-4779-9a4b-37eef7310ca4-hubble-tls\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107140 kubelet[1776]: I0113 20:24:17.107107 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k987w\" (UniqueName: \"kubernetes.io/projected/5b607210-4125-47cd-b1d2-88b6b6f4353a-kube-api-access-k987w\") pod \"cilium-operator-599987898-qgvvs\" (UID: \"5b607210-4125-47cd-b1d2-88b6b6f4353a\") " pod="kube-system/cilium-operator-599987898-qgvvs" Jan 13 20:24:17.107140 kubelet[1776]: I0113 20:24:17.107126 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-ipsec-secrets\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107242 kubelet[1776]: I0113 20:24:17.107145 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-cni-path\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107242 kubelet[1776]: I0113 20:24:17.107161 1776 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-cgroup\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107242 kubelet[1776]: I0113 20:24:17.107196 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-bpf-maps\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107242 kubelet[1776]: I0113 20:24:17.107232 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-etc-cni-netd\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107318 kubelet[1776]: I0113 20:24:17.107259 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-lib-modules\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107318 kubelet[1776]: I0113 20:24:17.107281 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-xtables-lock\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107318 kubelet[1776]: I0113 20:24:17.107308 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-config-path\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107380 kubelet[1776]: I0113 20:24:17.107325 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-host-proc-sys-net\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107380 kubelet[1776]: I0113 20:24:17.107342 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-run\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107380 kubelet[1776]: I0113 20:24:17.107366 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a517dada-13bc-4779-9a4b-37eef7310ca4-clustermesh-secrets\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107452 kubelet[1776]: I0113 20:24:17.107381 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj9tc\" (UniqueName: \"kubernetes.io/projected/a517dada-13bc-4779-9a4b-37eef7310ca4-kube-api-access-dj9tc\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.107452 kubelet[1776]: I0113 20:24:17.107396 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b607210-4125-47cd-b1d2-88b6b6f4353a-cilium-config-path\") pod 
\"cilium-operator-599987898-qgvvs\" (UID: \"5b607210-4125-47cd-b1d2-88b6b6f4353a\") " pod="kube-system/cilium-operator-599987898-qgvvs" Jan 13 20:24:17.107452 kubelet[1776]: I0113 20:24:17.107432 1776 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a517dada-13bc-4779-9a4b-37eef7310ca4-hostproc\") pod \"cilium-wztvc\" (UID: \"a517dada-13bc-4779-9a4b-37eef7310ca4\") " pod="kube-system/cilium-wztvc" Jan 13 20:24:17.583359 kubelet[1776]: E0113 20:24:17.583305 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:17.837252 kubelet[1776]: I0113 20:24:17.836238 1776 setters.go:580] "Node became not ready" node="10.0.0.113" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:24:17Z","lastTransitionTime":"2025-01-13T20:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 13 20:24:18.209787 kubelet[1776]: E0113 20:24:18.209670 1776 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:24:18.209787 kubelet[1776]: E0113 20:24:18.209763 1776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5b607210-4125-47cd-b1d2-88b6b6f4353a-cilium-config-path podName:5b607210-4125-47cd-b1d2-88b6b6f4353a nodeName:}" failed. No retries permitted until 2025-01-13 20:24:18.709742921 +0000 UTC m=+63.530261148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/5b607210-4125-47cd-b1d2-88b6b6f4353a-cilium-config-path") pod "cilium-operator-599987898-qgvvs" (UID: "5b607210-4125-47cd-b1d2-88b6b6f4353a") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:24:18.209969 kubelet[1776]: E0113 20:24:18.209681 1776 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jan 13 20:24:18.210115 kubelet[1776]: E0113 20:24:18.210073 1776 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-config-path podName:a517dada-13bc-4779-9a4b-37eef7310ca4 nodeName:}" failed. No retries permitted until 2025-01-13 20:24:18.710055924 +0000 UTC m=+63.530574151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a517dada-13bc-4779-9a4b-37eef7310ca4-cilium-config-path") pod "cilium-wztvc" (UID: "a517dada-13bc-4779-9a4b-37eef7310ca4") : failed to sync configmap cache: timed out waiting for the condition Jan 13 20:24:18.584355 kubelet[1776]: E0113 20:24:18.584298 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:18.847895 kubelet[1776]: E0113 20:24:18.847744 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:18.848515 containerd[1467]: time="2025-01-13T20:24:18.848244896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qgvvs,Uid:5b607210-4125-47cd-b1d2-88b6b6f4353a,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:18.866647 kubelet[1776]: E0113 20:24:18.866346 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:18.866976 containerd[1467]: time="2025-01-13T20:24:18.866824648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:18.866976 containerd[1467]: time="2025-01-13T20:24:18.866953409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:18.866976 containerd[1467]: time="2025-01-13T20:24:18.866965249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:18.867112 containerd[1467]: time="2025-01-13T20:24:18.867040530Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:18.867446 containerd[1467]: time="2025-01-13T20:24:18.867315453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wztvc,Uid:a517dada-13bc-4779-9a4b-37eef7310ca4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:24:18.888046 containerd[1467]: time="2025-01-13T20:24:18.887301019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:18.888046 containerd[1467]: time="2025-01-13T20:24:18.887367699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:18.888046 containerd[1467]: time="2025-01-13T20:24:18.887384700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:18.888046 containerd[1467]: time="2025-01-13T20:24:18.887481821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:18.894361 systemd[1]: Started cri-containerd-3945c962c520c2aab589bf92f886a9b3b1b1daa91c28ffef10b3beaa95856cfe.scope - libcontainer container 3945c962c520c2aab589bf92f886a9b3b1b1daa91c28ffef10b3beaa95856cfe.
Jan 13 20:24:18.906602 systemd[1]: Started cri-containerd-1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad.scope - libcontainer container 1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad.
Jan 13 20:24:18.934728 containerd[1467]: time="2025-01-13T20:24:18.934677827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wztvc,Uid:a517dada-13bc-4779-9a4b-37eef7310ca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\""
Jan 13 20:24:18.936624 containerd[1467]: time="2025-01-13T20:24:18.935757238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qgvvs,Uid:5b607210-4125-47cd-b1d2-88b6b6f4353a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3945c962c520c2aab589bf92f886a9b3b1b1daa91c28ffef10b3beaa95856cfe\""
Jan 13 20:24:18.936810 kubelet[1776]: E0113 20:24:18.936052 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:18.937715 kubelet[1776]: E0113 20:24:18.937692 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:18.938688 containerd[1467]: time="2025-01-13T20:24:18.938647388Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 20:24:18.940064 containerd[1467]: time="2025-01-13T20:24:18.940037522Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 20:24:18.967585 containerd[1467]: time="2025-01-13T20:24:18.967521365Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc\""
Jan 13 20:24:18.968412 containerd[1467]: time="2025-01-13T20:24:18.968162011Z" level=info msg="StartContainer for \"2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc\""
Jan 13 20:24:18.994325 systemd[1]: Started cri-containerd-2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc.scope - libcontainer container 2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc.
Jan 13 20:24:19.018121 containerd[1467]: time="2025-01-13T20:24:19.015359214Z" level=info msg="StartContainer for \"2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc\" returns successfully"
Jan 13 20:24:19.064934 systemd[1]: cri-containerd-2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc.scope: Deactivated successfully.
Jan 13 20:24:19.102986 containerd[1467]: time="2025-01-13T20:24:19.102843053Z" level=info msg="shim disconnected" id=2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc namespace=k8s.io
Jan 13 20:24:19.103714 containerd[1467]: time="2025-01-13T20:24:19.103689981Z" level=warning msg="cleaning up after shim disconnected" id=2b4eb15a4e9ee01136b86a55d45be2dd0a32856a2c038654f69b5d4fdbb1bacc namespace=k8s.io
Jan 13 20:24:19.103786 containerd[1467]: time="2025-01-13T20:24:19.103773662Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:19.585284 kubelet[1776]: E0113 20:24:19.585232 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:19.808027 kubelet[1776]: E0113 20:24:19.807998 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:19.809948 containerd[1467]: time="2025-01-13T20:24:19.809893916Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:24:19.823140 containerd[1467]: time="2025-01-13T20:24:19.823008608Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093\""
Jan 13 20:24:19.823909 containerd[1467]: time="2025-01-13T20:24:19.823554853Z" level=info msg="StartContainer for \"538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093\""
Jan 13 20:24:19.866358 systemd[1]: Started cri-containerd-538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093.scope - libcontainer container 538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093.
Jan 13 20:24:19.887114 containerd[1467]: time="2025-01-13T20:24:19.887061691Z" level=info msg="StartContainer for \"538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093\" returns successfully"
Jan 13 20:24:19.903304 systemd[1]: cri-containerd-538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093.scope: Deactivated successfully.
Jan 13 20:24:19.919001 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093-rootfs.mount: Deactivated successfully.
Jan 13 20:24:19.923018 containerd[1467]: time="2025-01-13T20:24:19.922803891Z" level=info msg="shim disconnected" id=538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093 namespace=k8s.io
Jan 13 20:24:19.923018 containerd[1467]: time="2025-01-13T20:24:19.922860971Z" level=warning msg="cleaning up after shim disconnected" id=538e57d6afcb4eeeb253b5acd9281a8485c916a558b8b0d9fe14a48d1fd5b093 namespace=k8s.io
Jan 13 20:24:19.923018 containerd[1467]: time="2025-01-13T20:24:19.922870651Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:20.585470 kubelet[1776]: E0113 20:24:20.585403 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:20.811476 kubelet[1776]: E0113 20:24:20.811448 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:20.813294 containerd[1467]: time="2025-01-13T20:24:20.813259724Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:24:20.832875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317057656.mount: Deactivated successfully.
Jan 13 20:24:20.833575 containerd[1467]: time="2025-01-13T20:24:20.833205640Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74\""
Jan 13 20:24:20.833915 containerd[1467]: time="2025-01-13T20:24:20.833887366Z" level=info msg="StartContainer for \"901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74\""
Jan 13 20:24:20.861305 systemd[1]: Started cri-containerd-901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74.scope - libcontainer container 901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74.
Jan 13 20:24:20.886500 containerd[1467]: time="2025-01-13T20:24:20.886444642Z" level=info msg="StartContainer for \"901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74\" returns successfully"
Jan 13 20:24:20.888564 systemd[1]: cri-containerd-901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74.scope: Deactivated successfully.
Jan 13 20:24:20.911112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74-rootfs.mount: Deactivated successfully.
Jan 13 20:24:20.919021 containerd[1467]: time="2025-01-13T20:24:20.918951441Z" level=info msg="shim disconnected" id=901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74 namespace=k8s.io
Jan 13 20:24:20.919021 containerd[1467]: time="2025-01-13T20:24:20.919004241Z" level=warning msg="cleaning up after shim disconnected" id=901328dc6342f3a83fd2c96e3e80064688cdfad57763ca1ae3a1c34bc8a41a74 namespace=k8s.io
Jan 13 20:24:20.919021 containerd[1467]: time="2025-01-13T20:24:20.919012601Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:21.585939 kubelet[1776]: E0113 20:24:21.585882 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:21.698705 kubelet[1776]: E0113 20:24:21.698658 1776 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:24:21.815043 kubelet[1776]: E0113 20:24:21.814819 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:21.816731 containerd[1467]: time="2025-01-13T20:24:21.816693948Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:24:21.826559 containerd[1467]: time="2025-01-13T20:24:21.826513162Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859\""
Jan 13 20:24:21.827184 containerd[1467]: time="2025-01-13T20:24:21.827081367Z" level=info msg="StartContainer for \"b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859\""
Jan 13 20:24:21.828429 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4052540766.mount: Deactivated successfully.
Jan 13 20:24:21.851254 systemd[1]: Started cri-containerd-b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859.scope - libcontainer container b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859.
Jan 13 20:24:21.872125 systemd[1]: cri-containerd-b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859.scope: Deactivated successfully.
Jan 13 20:24:21.876044 containerd[1467]: time="2025-01-13T20:24:21.876003756Z" level=info msg="StartContainer for \"b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859\" returns successfully"
Jan 13 20:24:21.890028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859-rootfs.mount: Deactivated successfully.
Jan 13 20:24:21.898316 containerd[1467]: time="2025-01-13T20:24:21.898243249Z" level=info msg="shim disconnected" id=b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859 namespace=k8s.io
Jan 13 20:24:21.898316 containerd[1467]: time="2025-01-13T20:24:21.898310370Z" level=warning msg="cleaning up after shim disconnected" id=b4fa5c354386cbd98859882b7cc9e2aa1191b1db39bc45190cc40e7312494859 namespace=k8s.io
Jan 13 20:24:21.898316 containerd[1467]: time="2025-01-13T20:24:21.898318970Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:22.586948 kubelet[1776]: E0113 20:24:22.586894 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:22.818575 kubelet[1776]: E0113 20:24:22.818547 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:22.820742 containerd[1467]: time="2025-01-13T20:24:22.820573246Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:24:23.051162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount968388741.mount: Deactivated successfully.
Jan 13 20:24:23.109015 containerd[1467]: time="2025-01-13T20:24:23.108960491Z" level=info msg="CreateContainer within sandbox \"1a384418651840d4b3494ec555793ea283ad6cd507a2efb3e5670b2af01580ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0ae93db34ec2eeabc6ab89eda545fe61c42be98514b7cf15b37993c588194f60\""
Jan 13 20:24:23.109563 containerd[1467]: time="2025-01-13T20:24:23.109536136Z" level=info msg="StartContainer for \"0ae93db34ec2eeabc6ab89eda545fe61c42be98514b7cf15b37993c588194f60\""
Jan 13 20:24:23.141289 systemd[1]: Started cri-containerd-0ae93db34ec2eeabc6ab89eda545fe61c42be98514b7cf15b37993c588194f60.scope - libcontainer container 0ae93db34ec2eeabc6ab89eda545fe61c42be98514b7cf15b37993c588194f60.
Jan 13 20:24:23.174446 containerd[1467]: time="2025-01-13T20:24:23.174403372Z" level=info msg="StartContainer for \"0ae93db34ec2eeabc6ab89eda545fe61c42be98514b7cf15b37993c588194f60\" returns successfully"
Jan 13 20:24:23.429117 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:24:23.587197 kubelet[1776]: E0113 20:24:23.587150 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:23.823170 kubelet[1776]: E0113 20:24:23.822846 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:23.837340 kubelet[1776]: I0113 20:24:23.837203 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wztvc" podStartSLOduration=6.837186579 podStartE2EDuration="6.837186579s" podCreationTimestamp="2025-01-13 20:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:23.837123379 +0000 UTC m=+68.657641606" watchObservedRunningTime="2025-01-13 20:24:23.837186579 +0000 UTC m=+68.657704806"
Jan 13 20:24:24.587838 kubelet[1776]: E0113 20:24:24.587788 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:24.867787 kubelet[1776]: E0113 20:24:24.867675 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:25.588395 kubelet[1776]: E0113 20:24:25.588346 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:25.845043 kubelet[1776]: E0113 20:24:25.844902 1776 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:34048->127.0.0.1:39723: write tcp 127.0.0.1:34048->127.0.0.1:39723: write: broken pipe
Jan 13 20:24:26.325919 systemd-networkd[1401]: lxc_health: Link UP
Jan 13 20:24:26.332870 systemd-networkd[1401]: lxc_health: Gained carrier
Jan 13 20:24:26.588638 kubelet[1776]: E0113 20:24:26.588459 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:26.872213 kubelet[1776]: E0113 20:24:26.869719 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:27.588677 kubelet[1776]: E0113 20:24:27.588617 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:27.830272 kubelet[1776]: E0113 20:24:27.829943 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:28.046503 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Jan 13 20:24:28.419013 containerd[1467]: time="2025-01-13T20:24:28.418179439Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:24:28.420194 containerd[1467]: time="2025-01-13T20:24:28.420168256Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137082"
Jan 13 20:24:28.421110 containerd[1467]: time="2025-01-13T20:24:28.421064623Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:24:28.422417 containerd[1467]: time="2025-01-13T20:24:28.422389514Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 9.483706686s"
Jan 13 20:24:28.422537 containerd[1467]: time="2025-01-13T20:24:28.422518196Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:24:28.425541 containerd[1467]: time="2025-01-13T20:24:28.425510301Z" level=info msg="CreateContainer within sandbox \"3945c962c520c2aab589bf92f886a9b3b1b1daa91c28ffef10b3beaa95856cfe\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:24:28.436157 containerd[1467]: time="2025-01-13T20:24:28.436122950Z" level=info msg="CreateContainer within sandbox \"3945c962c520c2aab589bf92f886a9b3b1b1daa91c28ffef10b3beaa95856cfe\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c30e8b26d17c3a200c63bb52f49104b7f0981135ead54d2e6cf22d24a726cd1e\""
Jan 13 20:24:28.437073 containerd[1467]: time="2025-01-13T20:24:28.437051757Z" level=info msg="StartContainer for \"c30e8b26d17c3a200c63bb52f49104b7f0981135ead54d2e6cf22d24a726cd1e\""
Jan 13 20:24:28.464266 systemd[1]: Started cri-containerd-c30e8b26d17c3a200c63bb52f49104b7f0981135ead54d2e6cf22d24a726cd1e.scope - libcontainer container c30e8b26d17c3a200c63bb52f49104b7f0981135ead54d2e6cf22d24a726cd1e.
Jan 13 20:24:28.492095 containerd[1467]: time="2025-01-13T20:24:28.490930289Z" level=info msg="StartContainer for \"c30e8b26d17c3a200c63bb52f49104b7f0981135ead54d2e6cf22d24a726cd1e\" returns successfully"
Jan 13 20:24:28.589602 kubelet[1776]: E0113 20:24:28.589570 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:28.835397 kubelet[1776]: E0113 20:24:28.835021 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:28.835397 kubelet[1776]: E0113 20:24:28.835339 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:28.843566 kubelet[1776]: I0113 20:24:28.843366 1776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qgvvs" podStartSLOduration=2.3580608610000002 podStartE2EDuration="11.843352081s" podCreationTimestamp="2025-01-13 20:24:17 +0000 UTC" firstStartedPulling="2025-01-13 20:24:18.938388985 +0000 UTC m=+63.758907212" lastFinishedPulling="2025-01-13 20:24:28.423680245 +0000 UTC m=+73.244198432" observedRunningTime="2025-01-13 20:24:28.842814797 +0000 UTC m=+73.663332984" watchObservedRunningTime="2025-01-13 20:24:28.843352081 +0000 UTC m=+73.663870308"
Jan 13 20:24:29.590745 kubelet[1776]: E0113 20:24:29.590699 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:29.841322 kubelet[1776]: E0113 20:24:29.840910 1776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:30.590964 kubelet[1776]: E0113 20:24:30.590924 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:31.592078 kubelet[1776]: E0113 20:24:31.592032 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:32.592809 kubelet[1776]: E0113 20:24:32.592755 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:33.593177 kubelet[1776]: E0113 20:24:33.593120 1776 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"