Jan 29 12:15:15.877370 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 29 12:15:15.877461 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 29 12:15:15.877472 kernel: KASLR enabled Jan 29 12:15:15.877478 kernel: efi: EFI v2.7 by EDK II Jan 29 12:15:15.877484 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 29 12:15:15.877490 kernel: random: crng init done Jan 29 12:15:15.877497 kernel: ACPI: Early table checksum verification disabled Jan 29 12:15:15.877503 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 29 12:15:15.877509 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 29 12:15:15.877517 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877523 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877529 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877535 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877541 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877549 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877556 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877563 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877570 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:15:15.877576 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 29 12:15:15.877583 kernel: NUMA: Failed to initialise from firmware Jan 29 12:15:15.877589 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 12:15:15.877596 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Jan 29 12:15:15.877602 kernel: Zone ranges: Jan 29 12:15:15.877608 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 12:15:15.877615 kernel: DMA32 empty Jan 29 12:15:15.877623 kernel: Normal empty Jan 29 12:15:15.877629 kernel: Movable zone start for each node Jan 29 12:15:15.877636 kernel: Early memory node ranges Jan 29 12:15:15.877642 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 29 12:15:15.877648 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 29 12:15:15.877654 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 29 12:15:15.877661 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 29 12:15:15.877667 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 29 12:15:15.877673 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 29 12:15:15.877680 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 29 12:15:15.877686 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 29 12:15:15.877692 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 29 12:15:15.877700 kernel: psci: probing for conduit method from ACPI. Jan 29 12:15:15.877706 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 29 12:15:15.877713 kernel: psci: Using standard PSCI v0.2 function IDs Jan 29 12:15:15.877722 kernel: psci: Trusted OS migration not required Jan 29 12:15:15.877729 kernel: psci: SMC Calling Convention v1.1 Jan 29 12:15:15.877736 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 29 12:15:15.877744 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 29 12:15:15.877751 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 29 12:15:15.877758 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 29 12:15:15.877765 kernel: Detected PIPT I-cache on CPU0 Jan 29 12:15:15.877772 kernel: CPU features: detected: GIC system register CPU interface Jan 29 12:15:15.877778 kernel: CPU features: detected: Hardware dirty bit management Jan 29 12:15:15.877786 kernel: CPU features: detected: Spectre-v4 Jan 29 12:15:15.877792 kernel: CPU features: detected: Spectre-BHB Jan 29 12:15:15.877799 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 29 12:15:15.877813 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 29 12:15:15.877822 kernel: CPU features: detected: ARM erratum 1418040 Jan 29 12:15:15.877828 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 29 12:15:15.877835 kernel: alternatives: applying boot alternatives Jan 29 12:15:15.877843 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 12:15:15.877850 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:15:15.877857 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:15:15.877864 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:15:15.877871 kernel: Fallback order for Node 0: 0 Jan 29 12:15:15.877878 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 29 12:15:15.877884 kernel: Policy zone: DMA Jan 29 12:15:15.877891 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:15:15.877899 kernel: software IO TLB: area num 4. Jan 29 12:15:15.877906 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 29 12:15:15.877913 kernel: Memory: 2386528K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185760K reserved, 0K cma-reserved) Jan 29 12:15:15.877920 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 12:15:15.877927 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:15:15.877934 kernel: rcu: RCU event tracing is enabled. Jan 29 12:15:15.877942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 12:15:15.877949 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:15:15.877955 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:15:15.877962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 12:15:15.877969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 12:15:15.877976 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 29 12:15:15.877984 kernel: GICv3: 256 SPIs implemented Jan 29 12:15:15.877990 kernel: GICv3: 0 Extended SPIs implemented Jan 29 12:15:15.877997 kernel: Root IRQ handler: gic_handle_irq Jan 29 12:15:15.878004 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 29 12:15:15.878011 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 29 12:15:15.878017 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 29 12:15:15.878024 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 29 12:15:15.878031 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 29 12:15:15.878038 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 29 12:15:15.878045 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 29 12:15:15.878051 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:15:15.878060 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 12:15:15.878066 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 29 12:15:15.878073 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 29 12:15:15.878081 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 29 12:15:15.878087 kernel: arm-pv: using stolen time PV Jan 29 12:15:15.878094 kernel: Console: colour dummy device 80x25 Jan 29 12:15:15.878101 kernel: ACPI: Core revision 20230628 Jan 29 12:15:15.878109 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 29 12:15:15.878116 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:15:15.878123 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:15:15.878131 kernel: landlock: Up and running. Jan 29 12:15:15.878138 kernel: SELinux: Initializing. Jan 29 12:15:15.878145 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:15:15.878152 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:15:15.878159 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:15:15.878166 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:15:15.878173 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:15:15.878180 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:15:15.878187 kernel: Platform MSI: ITS@0x8080000 domain created Jan 29 12:15:15.878195 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 29 12:15:15.878202 kernel: Remapping and enabling EFI services. Jan 29 12:15:15.878209 kernel: smp: Bringing up secondary CPUs ... 
Jan 29 12:15:15.878216 kernel: Detected PIPT I-cache on CPU1 Jan 29 12:15:15.878223 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 29 12:15:15.878230 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 29 12:15:15.878237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 12:15:15.878244 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 29 12:15:15.878251 kernel: Detected PIPT I-cache on CPU2 Jan 29 12:15:15.878258 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 29 12:15:15.878266 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 29 12:15:15.878273 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 12:15:15.878285 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 29 12:15:15.878293 kernel: Detected PIPT I-cache on CPU3 Jan 29 12:15:15.878301 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 29 12:15:15.878308 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 29 12:15:15.878316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 29 12:15:15.878323 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 29 12:15:15.878330 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 12:15:15.878339 kernel: SMP: Total of 4 processors activated. Jan 29 12:15:15.878346 kernel: CPU features: detected: 32-bit EL0 Support Jan 29 12:15:15.878354 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 29 12:15:15.878361 kernel: CPU features: detected: Common not Private translations Jan 29 12:15:15.878368 kernel: CPU features: detected: CRC32 instructions Jan 29 12:15:15.878382 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 29 12:15:15.878389 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 29 12:15:15.878396 kernel: CPU features: detected: LSE atomic instructions Jan 29 12:15:15.878405 kernel: CPU features: detected: Privileged Access Never Jan 29 12:15:15.878413 kernel: CPU features: detected: RAS Extension Support Jan 29 12:15:15.878420 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 29 12:15:15.878427 kernel: CPU: All CPU(s) started at EL1 Jan 29 12:15:15.878434 kernel: alternatives: applying system-wide alternatives Jan 29 12:15:15.878442 kernel: devtmpfs: initialized Jan 29 12:15:15.878449 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:15:15.878457 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 12:15:15.878464 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:15:15.878473 kernel: SMBIOS 3.0.0 present. 
Jan 29 12:15:15.878480 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 29 12:15:15.878487 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:15:15.878495 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 29 12:15:15.878502 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 29 12:15:15.878509 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 29 12:15:15.878517 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:15:15.878524 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Jan 29 12:15:15.878531 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:15:15.878540 kernel: cpuidle: using governor menu Jan 29 12:15:15.878547 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 29 12:15:15.878555 kernel: ASID allocator initialised with 32768 entries Jan 29 12:15:15.878562 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:15:15.878569 kernel: Serial: AMBA PL011 UART driver Jan 29 12:15:15.878576 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 29 12:15:15.878584 kernel: Modules: 0 pages in range for non-PLT usage Jan 29 12:15:15.878591 kernel: Modules: 509040 pages in range for PLT usage Jan 29 12:15:15.878599 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:15:15.878608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:15:15.878615 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 29 12:15:15.878623 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 29 12:15:15.878645 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:15:15.878652 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:15:15.878659 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 29 12:15:15.878667 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 29 12:15:15.878675 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:15:15.878682 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:15:15.878690 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:15:15.878698 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:15:15.878705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:15:15.878713 kernel: ACPI: Interpreter enabled Jan 29 12:15:15.878721 kernel: ACPI: Using GIC for interrupt routing Jan 29 12:15:15.878728 kernel: ACPI: MCFG table detected, 1 entries Jan 29 12:15:15.878735 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 29 12:15:15.878742 kernel: printk: console [ttyAMA0] enabled Jan 29 12:15:15.878750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:15:15.878890 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:15:15.878963 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 29 12:15:15.879027 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 29 12:15:15.879091 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 29 12:15:15.879153 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 29 12:15:15.879163 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 12:15:15.879170 kernel: PCI host bridge to bus 0000:00 Jan 29 12:15:15.879242 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 29 12:15:15.879303 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 29 12:15:15.879362 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 29 12:15:15.879448 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:15:15.879527 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 29 12:15:15.879602 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:15:15.879675 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 29 12:15:15.879741 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 29 12:15:15.879813 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 12:15:15.879895 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 29 12:15:15.879960 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 29 12:15:15.880025 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 29 12:15:15.880084 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 29 12:15:15.880142 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 29 12:15:15.880205 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 29 12:15:15.880215 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 29 12:15:15.880223 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 29 12:15:15.880230 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 29 12:15:15.880238 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 29 12:15:15.880245 kernel: iommu: Default domain type: Translated Jan 29 12:15:15.880253 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 29 12:15:15.880260 kernel: efivars: Registered efivars operations Jan 29 12:15:15.880269 kernel: vgaarb: loaded Jan 29 12:15:15.880277 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 29 12:15:15.880284 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:15:15.880292 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:15:15.880299 kernel: pnp: PnP ACPI init Jan 29 12:15:15.880387 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 29 12:15:15.880398 kernel: pnp: PnP ACPI: found 1 devices Jan 29 12:15:15.880406 kernel: NET: Registered PF_INET protocol family Jan 29 12:15:15.880416 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:15:15.880424 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:15:15.880431 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:15:15.880439 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:15:15.880446 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 12:15:15.880454 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:15:15.880461 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:15:15.880469 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:15:15.880476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:15:15.880485 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:15:15.880492 kernel: kvm [1]: HYP mode not available Jan 29 12:15:15.880499 kernel: Initialise system trusted keyrings Jan 29 12:15:15.880507 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:15:15.880514 kernel: Key type asymmetric registered Jan 29 12:15:15.880521 kernel: Asymmetric key parser 'x509' registered Jan 29 12:15:15.880529 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 29 12:15:15.880536 kernel: io scheduler mq-deadline registered Jan 29 12:15:15.880543 kernel: io scheduler kyber registered Jan 29 12:15:15.880552 kernel: io scheduler bfq registered Jan 29 12:15:15.880559 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 29 12:15:15.880567 kernel: ACPI: button: Power Button [PWRB] Jan 29 12:15:15.880574 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 29 12:15:15.880649 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 29 12:15:15.880659 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:15:15.880667 kernel: thunder_xcv, ver 1.0 Jan 29 12:15:15.880674 kernel: thunder_bgx, ver 1.0 Jan 29 12:15:15.880681 kernel: nicpf, ver 1.0 Jan 29 12:15:15.880690 kernel: nicvf, ver 1.0 Jan 29 12:15:15.880763 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 29 12:15:15.880834 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:15:15 UTC (1738152915) Jan 29 12:15:15.880845 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:15:15.880852 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 29 12:15:15.880860 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 29 12:15:15.880867 kernel: watchdog: Hard watchdog permanently disabled Jan 29 12:15:15.880875 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:15:15.880884 kernel: Segment Routing with IPv6 Jan 29 12:15:15.880892 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:15:15.880899 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:15:15.880907 kernel: Key type dns_resolver registered Jan 29 12:15:15.880914 kernel: registered taskstats version 1 Jan 29 12:15:15.880921 kernel: Loading compiled-in X.509 certificates Jan 29 12:15:15.880929 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 29 12:15:15.880936 kernel: Key type .fscrypt registered Jan 29 12:15:15.880943 kernel: Key type fscrypt-provisioning registered Jan 29 12:15:15.880952 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:15:15.880960 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:15:15.880968 kernel: ima: No architecture policies found Jan 29 12:15:15.880975 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 29 12:15:15.880982 kernel: clk: Disabling unused clocks Jan 29 12:15:15.880990 kernel: Freeing unused kernel memory: 39360K Jan 29 12:15:15.880998 kernel: Run /init as init process Jan 29 12:15:15.881006 kernel: with arguments: Jan 29 12:15:15.881013 kernel: /init Jan 29 12:15:15.881022 kernel: with environment: Jan 29 12:15:15.881030 kernel: HOME=/ Jan 29 12:15:15.881037 kernel: TERM=linux Jan 29 12:15:15.881045 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:15:15.881054 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:15:15.881064 systemd[1]: Detected virtualization kvm. Jan 29 12:15:15.881072 systemd[1]: Detected architecture arm64. Jan 29 12:15:15.881080 systemd[1]: Running in initrd. Jan 29 12:15:15.881090 systemd[1]: No hostname configured, using default hostname. Jan 29 12:15:15.881098 systemd[1]: Hostname set to . Jan 29 12:15:15.881106 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:15:15.881118 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:15:15.881126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:15:15.881134 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:15:15.881143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:15:15.881151 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:15:15.881161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:15:15.881169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:15:15.881178 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:15:15.881186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:15:15.881194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:15:15.881202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:15:15.881211 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:15:15.881219 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:15:15.881228 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:15:15.881236 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:15:15.881246 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:15:15.881254 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:15:15.881262 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:15:15.881271 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:15:15.881279 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 29 12:15:15.881289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:15:15.881297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:15:15.881305 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:15:15.881313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:15:15.881321 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:15:15.881329 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:15:15.881336 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:15:15.881344 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:15:15.881352 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:15:15.881361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:15:15.881370 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:15:15.881390 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:15:15.881398 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:15:15.881406 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:15:15.881417 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:15.881429 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:15:15.881438 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:15:15.881464 systemd-journald[237]: Collecting audit messages is disabled. Jan 29 12:15:15.881490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:15:15.881499 systemd-journald[237]: Journal started Jan 29 12:15:15.881517 systemd-journald[237]: Runtime Journal (/run/log/journal/9314489eac80472898f077fd09a13280) is 5.9M, max 47.3M, 41.4M free. Jan 29 12:15:15.872536 systemd-modules-load[238]: Inserted module 'overlay' Jan 29 12:15:15.884396 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:15:15.886392 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:15:15.887881 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 29 12:15:15.888551 kernel: Bridge firewalling registered Jan 29 12:15:15.892512 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:15:15.894408 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:15:15.895482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:15:15.897462 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:15:15.899592 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:15:15.914598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:15:15.916010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:15.924607 dracut-cmdline[270]: dracut-dracut-053 Jan 29 12:15:15.924952 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 12:15:15.927675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:15:15.929161 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 29 12:15:15.955389 systemd-resolved[286]: Positive Trust Anchors: Jan 29 12:15:15.955405 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:15:15.955437 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:15:15.960141 systemd-resolved[286]: Defaulting to hostname 'linux'. Jan 29 12:15:15.961080 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:15:15.963082 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:15:16.003413 kernel: SCSI subsystem initialized Jan 29 12:15:16.007395 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:15:16.015414 kernel: iscsi: registered transport (tcp) Jan 29 12:15:16.028640 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:15:16.028662 kernel: QLogic iSCSI HBA Driver Jan 29 12:15:16.081610 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:15:16.094513 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:15:16.112125 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:15:16.112175 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:15:16.113424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:15:16.159406 kernel: raid6: neonx8 gen() 15766 MB/s Jan 29 12:15:16.176397 kernel: raid6: neonx4 gen() 15660 MB/s Jan 29 12:15:16.193394 kernel: raid6: neonx2 gen() 13211 MB/s Jan 29 12:15:16.210408 kernel: raid6: neonx1 gen() 10469 MB/s Jan 29 12:15:16.227406 kernel: raid6: int64x8 gen() 6916 MB/s Jan 29 12:15:16.244405 kernel: raid6: int64x4 gen() 7324 MB/s Jan 29 12:15:16.261395 kernel: raid6: int64x2 gen() 6123 MB/s Jan 29 12:15:16.278408 kernel: raid6: int64x1 gen() 5049 MB/s Jan 29 12:15:16.278450 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
Jan 29 12:15:16.295418 kernel: raid6: .... xor() 11930 MB/s, rmw enabled Jan 29 12:15:16.295482 kernel: raid6: using neon recovery algorithm Jan 29 12:15:16.301525 kernel: xor: measuring software checksum speed Jan 29 12:15:16.301584 kernel: 8regs : 19807 MB/sec Jan 29 12:15:16.301594 kernel: 32regs : 19683 MB/sec Jan 29 12:15:16.302441 kernel: arm64_neon : 26945 MB/sec Jan 29 12:15:16.302469 kernel: xor: using function: arm64_neon (26945 MB/sec) Jan 29 12:15:16.357412 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:15:16.370108 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:15:16.385581 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:15:16.400230 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jan 29 12:15:16.403495 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:15:16.417594 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:15:16.430279 dracut-pre-trigger[464]: rd.md=0: removing MD RAID activation Jan 29 12:15:16.456684 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:15:16.465586 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:15:16.504338 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:15:16.513772 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 12:15:16.527073 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:15:16.528673 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:15:16.530629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:15:16.532109 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:15:16.539533 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:15:16.551854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:15:16.558419 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 29 12:15:16.567510 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 29 12:15:16.567619 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 12:15:16.567630 kernel: GPT:9289727 != 19775487 Jan 29 12:15:16.567639 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 12:15:16.567648 kernel: GPT:9289727 != 19775487 Jan 29 12:15:16.567660 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 12:15:16.567669 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:15:16.566059 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:15:16.566182 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:15:16.567359 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:15:16.568134 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:15:16.568270 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:16.571979 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:15:16.587639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:15:16.600435 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (513) Jan 29 12:15:16.600474 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:16.603385 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (520) Jan 29 12:15:16.607460 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 12:15:16.612047 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 12:15:16.621983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:15:16.625839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 12:15:16.626767 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 12:15:16.644016 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:15:16.646621 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:15:16.655794 disk-uuid[548]: Primary Header is updated. Jan 29 12:15:16.655794 disk-uuid[548]: Secondary Entries is updated. Jan 29 12:15:16.655794 disk-uuid[548]: Secondary Header is updated. Jan 29 12:15:16.671478 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:15:16.674331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:15:17.682434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 12:15:17.683509 disk-uuid[549]: The operation has completed successfully. Jan 29 12:15:17.709815 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:15:17.709907 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:15:17.735570 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:15:17.738809 sh[571]: Success Jan 29 12:15:17.757021 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 29 12:15:17.817018 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:15:17.818565 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:15:17.819265 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:15:17.836877 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 29 12:15:17.836924 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:15:17.836942 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:15:17.837645 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:15:17.838664 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:15:17.848045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:15:17.849470 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:15:17.856564 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:15:17.858338 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 29 12:15:17.875015 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:15:17.875046 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:15:17.875738 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:15:17.883402 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:15:17.891625 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:15:17.892862 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:15:17.905622 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:15:17.913524 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:15:17.968415 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:15:17.976565 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:15:18.004365 systemd-networkd[757]: lo: Link UP Jan 29 12:15:18.004394 systemd-networkd[757]: lo: Gained carrier Jan 29 12:15:18.005282 systemd-networkd[757]: Enumeration completed Jan 29 12:15:18.005925 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:18.005928 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:15:18.006593 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:15:18.007062 systemd-networkd[757]: eth0: Link UP Jan 29 12:15:18.007066 systemd-networkd[757]: eth0: Gained carrier Jan 29 12:15:18.007072 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:18.008753 systemd[1]: Reached target network.target - Network. Jan 29 12:15:18.020588 ignition[692]: Ignition 2.19.0 Jan 29 12:15:18.020598 ignition[692]: Stage: fetch-offline Jan 29 12:15:18.020632 ignition[692]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:18.020640 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:18.020802 ignition[692]: parsed url from cmdline: "" Jan 29 12:15:18.020806 ignition[692]: no config URL provided Jan 29 12:15:18.020810 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:15:18.020818 ignition[692]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:15:18.020841 ignition[692]: op(1): [started] loading QEMU firmware config module Jan 29 12:15:18.020845 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 29 12:15:18.028426 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:15:18.032302 ignition[692]: op(1): [finished] loading QEMU firmware config module Jan 29 12:15:18.039808 ignition[692]: parsing config with SHA512: 96ef05f9b4e88a9903ae503aa2c915b5ce6df0e9496d8e1755ec2f74360595e0f4b60e00d74011ebb502af3ec4b95868a7bfdd918454891426661d0727963d82 Jan 29 12:15:18.042822 unknown[692]: fetched base config from "system" Jan 29 12:15:18.042833 unknown[692]: fetched user config from "qemu" Jan 29 12:15:18.043154 ignition[692]: fetch-offline: fetch-offline passed Jan 29 12:15:18.043220 ignition[692]: Ignition finished successfully Jan 29 12:15:18.046814 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 12:15:18.047885 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 12:15:18.058563 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:15:18.069464 ignition[769]: Ignition 2.19.0 Jan 29 12:15:18.069475 ignition[769]: Stage: kargs Jan 29 12:15:18.069639 ignition[769]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:18.069649 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:18.070354 ignition[769]: kargs: kargs passed Jan 29 12:15:18.070419 ignition[769]: Ignition finished successfully Jan 29 12:15:18.074154 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:15:18.088694 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:15:18.098929 ignition[776]: Ignition 2.19.0 Jan 29 12:15:18.098938 ignition[776]: Stage: disks Jan 29 12:15:18.099112 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:18.099124 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:18.099872 ignition[776]: disks: disks passed Jan 29 12:15:18.101720 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:15:18.099924 ignition[776]: Ignition finished successfully Jan 29 12:15:18.104555 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:15:18.106101 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:15:18.107006 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:15:18.108450 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:15:18.109707 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:15:18.120572 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:15:18.132615 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 12:15:18.137113 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 12:15:18.149894 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:15:18.195403 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 29 12:15:18.195597 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:15:18.196617 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:15:18.208897 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:15:18.210969 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:15:18.211891 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 12:15:18.211930 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:15:18.211950 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:15:18.218113 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795) Jan 29 12:15:18.217595 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:15:18.219508 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 12:15:18.222996 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:15:18.223020 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:15:18.223031 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:15:18.225386 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:15:18.226658 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:15:18.275896 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:15:18.280237 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:15:18.284396 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:15:18.288312 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:15:18.359651 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:15:18.373523 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:15:18.374964 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:15:18.380396 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:15:18.396752 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 12:15:18.398094 ignition[910]: INFO : Ignition 2.19.0 Jan 29 12:15:18.398094 ignition[910]: INFO : Stage: mount Jan 29 12:15:18.398094 ignition[910]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:18.398094 ignition[910]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:18.400914 ignition[910]: INFO : mount: mount passed Jan 29 12:15:18.400914 ignition[910]: INFO : Ignition finished successfully Jan 29 12:15:18.400679 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:15:18.415503 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:15:18.834710 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:15:18.843561 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:15:18.848399 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923) Jan 29 12:15:18.850431 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 29 12:15:18.850481 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 29 12:15:18.850493 kernel: BTRFS info (device vda6): using free space tree Jan 29 12:15:18.856579 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 12:15:18.857297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 12:15:18.873527 ignition[940]: INFO : Ignition 2.19.0 Jan 29 12:15:18.873527 ignition[940]: INFO : Stage: files Jan 29 12:15:18.875338 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:18.875338 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:18.875338 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:15:18.878829 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:15:18.878829 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:15:18.881740 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:15:18.883011 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:15:18.883011 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:15:18.882311 unknown[940]: wrote ssh authorized keys file for user: core Jan 29 12:15:18.886648 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:15:18.886648 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 29 12:15:18.886648 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:15:18.886648 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 12:15:18.893038 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 29 12:15:19.205511 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 29 12:15:19.423966 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 29 12:15:19.423966 ignition[940]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 29 12:15:19.427411 ignition[940]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Jan 29 12:15:19.429224 ignition[940]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 12:15:19.467801 ignition[940]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:15:19.471766 ignition[940]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 12:15:19.473880 ignition[940]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 12:15:19.473880 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:15:19.473880 ignition[940]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:15:19.473880 ignition[940]: INFO : files: files passed Jan 29 12:15:19.473880 ignition[940]: INFO : Ignition finished successfully Jan 29 12:15:19.474396 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:15:19.495739 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 12:15:19.498014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:15:19.500620 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:15:19.500706 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:15:19.506438 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory Jan 29 12:15:19.509854 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:15:19.509854 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:15:19.512419 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:15:19.513434 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:15:19.514479 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:15:19.520621 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:15:19.528683 systemd-networkd[757]: eth0: Gained IPv6LL Jan 29 12:15:19.541341 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:15:19.541467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:15:19.543105 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:15:19.543906 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:15:19.544665 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:15:19.545430 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:15:19.562070 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:15:19.578591 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:15:19.587330 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:15:19.588340 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:15:19.589971 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:15:19.591229 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:15:19.591342 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:15:19.593210 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:15:19.594669 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:15:19.595867 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:15:19.597125 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:15:19.598527 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:15:19.600056 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:15:19.601361 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:15:19.602821 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:15:19.604201 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:15:19.605462 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:15:19.606719 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:15:19.606844 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:15:19.608553 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:15:19.609961 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:15:19.611311 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:15:19.614460 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:15:19.615390 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:15:19.615501 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:15:19.617738 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:15:19.617862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:15:19.619325 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:15:19.620471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 12:15:19.621468 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:15:19.622604 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:15:19.624033 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:15:19.625585 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:15:19.625773 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:15:19.626804 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Jan 29 12:15:19.626931 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:15:19.628039 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:15:19.628196 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:15:19.629457 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:15:19.629636 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:15:19.643637 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:15:19.645762 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:15:19.646428 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:15:19.646672 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:15:19.647975 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:15:19.648132 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:15:19.654300 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:15:19.655672 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:15:19.657940 ignition[995]: INFO : Ignition 2.19.0 Jan 29 12:15:19.657940 ignition[995]: INFO : Stage: umount Jan 29 12:15:19.659772 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:15:19.659772 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 12:15:19.659772 ignition[995]: INFO : umount: umount passed Jan 29 12:15:19.659772 ignition[995]: INFO : Ignition finished successfully Jan 29 12:15:19.660819 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:15:19.661285 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:15:19.662405 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:15:19.663783 systemd[1]: Stopped target network.target - Network. Jan 29 12:15:19.665084 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:15:19.665144 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:15:19.666617 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:15:19.666661 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:15:19.668057 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:15:19.668094 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:15:19.670405 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:15:19.670468 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:15:19.671728 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:15:19.674553 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:15:19.681366 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:15:19.681498 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:15:19.683590 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:15:19.683676 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:15:19.695478 systemd-networkd[757]: eth0: DHCPv6 lease lost Jan 29 12:15:19.696947 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:15:19.697099 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 29 12:15:19.699006 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:15:19.699038 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:15:19.708512 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:15:19.709439 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:15:19.709504 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:15:19.711213 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:15:19.711261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:19.712339 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:15:19.712397 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:15:19.714117 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:15:19.717305 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:15:19.718052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:15:19.721028 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:15:19.721080 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:15:19.725942 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:15:19.726051 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:15:19.736760 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:15:19.736913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:15:19.738523 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:15:19.738562 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:15:19.739968 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:15:19.740001 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:15:19.741252 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 12:15:19.741295 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:15:19.743245 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:15:19.743289 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:15:19.745320 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:15:19.745367 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:15:19.760551 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:15:19.761315 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:15:19.761371 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:15:19.763023 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:15:19.763065 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:15:19.764482 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:15:19.764522 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:15:19.766108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:15:19.766146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 29 12:15:19.767852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:15:19.767949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:15:19.769600 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:15:19.771320 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:15:19.780788 systemd[1]: Switching root. Jan 29 12:15:19.813029 systemd-journald[237]: Journal stopped Jan 29 12:15:20.483707 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jan 29 12:15:20.483769 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:15:20.483790 kernel: SELinux: policy capability open_perms=1 Jan 29 12:15:20.483803 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:15:20.483813 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:15:20.483822 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:15:20.483832 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:15:20.483841 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:15:20.483850 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:15:20.483860 kernel: audit: type=1403 audit(1738152919.964:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:15:20.483870 systemd[1]: Successfully loaded SELinux policy in 31.664ms. Jan 29 12:15:20.483891 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.016ms. Jan 29 12:15:20.483904 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:15:20.483915 systemd[1]: Detected virtualization kvm. Jan 29 12:15:20.483926 systemd[1]: Detected architecture arm64. Jan 29 12:15:20.483936 systemd[1]: Detected first boot. Jan 29 12:15:20.483948 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:15:20.483960 zram_generator::config[1059]: No configuration found. Jan 29 12:15:20.483971 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:15:20.483982 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:15:20.483993 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 12:15:20.484004 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:15:20.484015 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:15:20.484027 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:15:20.484037 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:15:20.484049 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:15:20.484060 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:15:20.484071 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:15:20.484081 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:15:20.484092 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:15:20.484103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 12:15:20.484114 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:15:20.484124 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:15:20.484135 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:15:20.484147 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:15:20.484157 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 12:15:20.484168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:15:20.484178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:15:20.484189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:15:20.484200 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:15:20.484210 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:15:20.484221 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:15:20.484233 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:15:20.484243 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:15:20.484255 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:15:20.484266 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:15:20.484276 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:15:20.484287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:15:20.484297 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:15:20.484308 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:15:20.484318 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:15:20.484330 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:15:20.484341 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:15:20.484351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:15:20.484361 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:15:20.484387 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:15:20.484401 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:15:20.484412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:20.484423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:15:20.484459 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:15:20.484473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:15:20.484484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:15:20.484494 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:15:20.484505 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:15:20.484515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 12:15:20.484534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:15:20.484547 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 29 12:15:20.484558 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 29 12:15:20.484571 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:15:20.484582 kernel: fuse: init (API version 7.39) Jan 29 12:15:20.484592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:15:20.484603 kernel: loop: module loaded Jan 29 12:15:20.484613 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:15:20.484623 kernel: ACPI: bus type drm_connector registered Jan 29 12:15:20.484633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:15:20.484664 systemd-journald[1140]: Collecting audit messages is disabled. Jan 29 12:15:20.484691 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:15:20.484703 systemd-journald[1140]: Journal started Jan 29 12:15:20.484723 systemd-journald[1140]: Runtime Journal (/run/log/journal/9314489eac80472898f077fd09a13280) is 5.9M, max 47.3M, 41.4M free. Jan 29 12:15:20.487403 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:15:20.488529 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:15:20.489855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:15:20.490793 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:15:20.491608 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:15:20.492485 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:15:20.493359 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:15:20.494356 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:15:20.495774 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:15:20.496951 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:15:20.497125 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:15:20.498247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:15:20.498592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:15:20.499643 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:15:20.499810 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:15:20.500991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:15:20.501154 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:15:20.502371 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:15:20.502539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:15:20.503568 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:15:20.503787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:15:20.505018 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 29 12:15:20.506188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:15:20.507574 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 12:15:20.518620 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:15:20.532472 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:15:20.534273 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:15:20.535187 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:15:20.537251 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:15:20.540580 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:15:20.541536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:15:20.545609 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:15:20.546550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:15:20.547723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:20.551589 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:15:20.554075 systemd-journald[1140]: Time spent on flushing to /var/log/journal/9314489eac80472898f077fd09a13280 is 12.541ms for 829 entries. Jan 29 12:15:20.554075 systemd-journald[1140]: System Journal (/var/log/journal/9314489eac80472898f077fd09a13280) is 8.0M, max 195.6M, 187.6M free. Jan 29 12:15:20.584887 systemd-journald[1140]: Received client request to flush runtime journal. Jan 29 12:15:20.554021 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:15:20.556107 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:15:20.557461 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:15:20.561622 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:15:20.566097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:15:20.567368 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:15:20.575088 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:20.584844 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 12:15:20.586830 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:15:20.590826 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 12:15:20.590837 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 29 12:15:20.595091 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:15:20.604570 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:15:20.623242 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 29 12:15:20.631535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:15:20.642564 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 29 12:15:20.642579 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. Jan 29 12:15:20.646192 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:15:20.979927 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:15:20.995604 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:15:21.013835 systemd-udevd[1221]: Using default interface naming scheme 'v255'. Jan 29 12:15:21.028515 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:15:21.039589 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:15:21.050521 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:15:21.060767 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 29 12:15:21.080442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1222) Jan 29 12:15:21.096681 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:15:21.114070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 12:15:21.156556 systemd-networkd[1231]: lo: Link UP Jan 29 12:15:21.156572 systemd-networkd[1231]: lo: Gained carrier Jan 29 12:15:21.157261 systemd-networkd[1231]: Enumeration completed Jan 29 12:15:21.157700 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:21.157703 systemd-networkd[1231]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:15:21.158263 systemd-networkd[1231]: eth0: Link UP Jan 29 12:15:21.158267 systemd-networkd[1231]: eth0: Gained carrier Jan 29 12:15:21.158278 systemd-networkd[1231]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:15:21.159654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:15:21.160590 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:15:21.163005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:15:21.167420 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:15:21.171694 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:15:21.177470 systemd-networkd[1231]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 12:15:21.184064 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:15:21.199059 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:15:21.208224 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:15:21.209767 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:15:21.221511 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:15:21.225204 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 29 12:15:21.255590 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:15:21.256667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:15:21.257592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:15:21.257620 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:15:21.258332 systemd[1]: Reached target machines.target - Containers. Jan 29 12:15:21.260007 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:15:21.271552 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:15:21.273394 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:15:21.274294 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:21.275171 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:15:21.277819 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:15:21.280572 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:15:21.285241 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:15:21.293286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:15:21.297399 kernel: loop0: detected capacity change from 0 to 194096 Jan 29 12:15:21.300839 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:15:21.302438 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:15:21.307408 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:15:21.332416 kernel: loop1: detected capacity change from 0 to 114328 Jan 29 12:15:21.376908 kernel: loop2: detected capacity change from 0 to 114432 Jan 29 12:15:21.422399 kernel: loop3: detected capacity change from 0 to 194096 Jan 29 12:15:21.435414 kernel: loop4: detected capacity change from 0 to 114328 Jan 29 12:15:21.439474 kernel: loop5: detected capacity change from 0 to 114432 Jan 29 12:15:21.442394 (sd-merge)[1289]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 12:15:21.442788 (sd-merge)[1289]: Merged extensions into '/usr'. Jan 29 12:15:21.446649 systemd[1]: Reloading requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:15:21.446669 systemd[1]: Reloading... Jan 29 12:15:21.492406 zram_generator::config[1315]: No configuration found. Jan 29 12:15:21.524763 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:15:21.589613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:21.631840 systemd[1]: Reloading finished in 184 ms. Jan 29 12:15:21.644995 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:15:21.646163 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 29 12:15:21.655669 systemd[1]: Starting ensure-sysext.service... Jan 29 12:15:21.657249 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:15:21.660661 systemd[1]: Reloading requested from client PID 1359 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:15:21.660674 systemd[1]: Reloading... Jan 29 12:15:21.672888 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:15:21.673146 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:15:21.673807 systemd-tmpfiles[1360]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:15:21.674028 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 29 12:15:21.674078 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 29 12:15:21.676621 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:15:21.676635 systemd-tmpfiles[1360]: Skipping /boot Jan 29 12:15:21.683774 systemd-tmpfiles[1360]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:15:21.683788 systemd-tmpfiles[1360]: Skipping /boot Jan 29 12:15:21.699399 zram_generator::config[1388]: No configuration found. Jan 29 12:15:21.789488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:21.832098 systemd[1]: Reloading finished in 171 ms. Jan 29 12:15:21.848133 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:15:21.862253 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:15:21.864462 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:15:21.866443 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:15:21.870565 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:15:21.874590 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:15:21.880510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:21.881656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:15:21.885638 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:15:21.889701 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:15:21.890777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:21.894233 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:21.894451 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:21.898814 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:15:21.900419 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:15:21.903385 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 12:15:21.903532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:15:21.905090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:15:21.905227 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:15:21.906788 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:15:21.906968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:15:21.912854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:15:21.927643 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:15:21.928625 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:15:21.928672 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:15:21.928715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:15:21.932557 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:15:21.933894 systemd[1]: Finished ensure-sysext.service. Jan 29 12:15:21.934929 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:15:21.936199 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:15:21.938238 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:15:21.942448 augenrules[1469]: No rules Jan 29 12:15:21.946711 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 12:15:21.947592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:15:21.947968 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:15:21.950756 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:15:21.969308 systemd-resolved[1434]: Positive Trust Anchors: Jan 29 12:15:21.969325 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:15:21.969357 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:15:21.975823 systemd-resolved[1434]: Defaulting to hostname 'linux'. Jan 29 12:15:21.979585 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:15:21.980588 systemd[1]: Reached target network.target - Network. Jan 29 12:15:21.981247 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:15:22.000563 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 29 12:15:22.001313 systemd-timesyncd[1480]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 12:15:22.001361 systemd-timesyncd[1480]: Initial clock synchronization to Wed 2025-01-29 12:15:21.671996 UTC. Jan 29 12:15:22.001837 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:15:22.002685 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:15:22.003581 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:15:22.004477 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:15:22.005356 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:15:22.005406 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:15:22.006050 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:15:22.006981 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:15:22.007951 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:15:22.008874 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:15:22.010333 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:15:22.012638 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:15:22.014468 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:15:22.020341 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:15:22.021145 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:15:22.021858 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:15:22.022684 systemd[1]: System is tainted: cgroupsv1 Jan 29 12:15:22.022732 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:15:22.022763 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:15:22.023924 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:15:22.025736 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:15:22.027443 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:15:22.031545 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 12:15:22.032448 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:15:22.036561 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:15:22.039804 jq[1489]: false Jan 29 12:15:22.038304 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:15:22.044650 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 29 12:15:22.048507 extend-filesystems[1490]: Found loop3 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found loop4 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found loop5 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda1 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda2 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda3 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found usr Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda4 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda6 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda7 Jan 29 12:15:22.049263 extend-filesystems[1490]: Found vda9 Jan 29 12:15:22.049263 extend-filesystems[1490]: Checking size of /dev/vda9 Jan 29 12:15:22.058624 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:15:22.066067 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:15:22.069556 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:15:22.072133 dbus-daemon[1488]: [system] SELinux support is enabled Jan 29 12:15:22.072498 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:15:22.073628 extend-filesystems[1490]: Resized partition /dev/vda9 Jan 29 12:15:22.076866 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:15:22.086139 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024) Jan 29 12:15:22.088873 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:15:22.089111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 12:15:22.089353 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:15:22.089575 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:15:22.090599 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:15:22.090830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:15:22.095388 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1225) Jan 29 12:15:22.100394 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 12:15:22.105980 jq[1512]: true Jan 29 12:15:22.112720 (ntainerd)[1522]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:15:22.119842 update_engine[1510]: I20250129 12:15:22.118623 1510 main.cc:92] Flatcar Update Engine starting Jan 29 12:15:22.123206 update_engine[1510]: I20250129 12:15:22.123142 1510 update_check_scheduler.cc:74] Next update check in 7m38s Jan 29 12:15:22.127099 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:15:22.128335 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:15:22.128389 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:15:22.129684 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jan 29 12:15:22.129702 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:15:22.131540 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:15:22.135536 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 12:15:22.138275 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 12:15:22.141030 jq[1525]: true Jan 29 12:15:22.141509 systemd-logind[1500]: New seat seat0. Jan 29 12:15:22.142781 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:15:22.162421 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 12:15:22.180526 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 12:15:22.180526 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 12:15:22.180526 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 12:15:22.185459 extend-filesystems[1490]: Resized filesystem in /dev/vda9 Jan 29 12:15:22.182056 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:15:22.182309 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:15:22.197691 bash[1544]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:15:22.197991 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:15:22.199232 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:15:22.200884 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:15:22.215599 systemd-networkd[1231]: eth0: Gained IPv6LL Jan 29 12:15:22.222808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:15:22.225000 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:15:22.232615 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:15:22.235574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:22.238261 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:15:22.267153 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:15:22.269423 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 12:15:22.272883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:15:22.274527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:15:22.332238 containerd[1522]: time="2025-01-29T12:15:22.332085080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:15:22.360583 containerd[1522]: time="2025-01-29T12:15:22.360501360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362058640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362096280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362111880Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362263920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362280560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362335320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362347280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362560840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362582560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362595000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363285 containerd[1522]: time="2025-01-29T12:15:22.362604720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.362676560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.362895440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.363022960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.363042800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.363111320Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 29 12:15:22.363564 containerd[1522]: time="2025-01-29T12:15:22.363149200Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:15:22.366986 containerd[1522]: time="2025-01-29T12:15:22.366961440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:15:22.367105 containerd[1522]: time="2025-01-29T12:15:22.367088520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:15:22.367198 containerd[1522]: time="2025-01-29T12:15:22.367162880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:15:22.367254 containerd[1522]: time="2025-01-29T12:15:22.367240800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:15:22.367325 containerd[1522]: time="2025-01-29T12:15:22.367311440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:15:22.367515 containerd[1522]: time="2025-01-29T12:15:22.367494840Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:15:22.368000 containerd[1522]: time="2025-01-29T12:15:22.367968280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:15:22.368140 containerd[1522]: time="2025-01-29T12:15:22.368121000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:15:22.368164 containerd[1522]: time="2025-01-29T12:15:22.368145160Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:15:22.368183 containerd[1522]: time="2025-01-29T12:15:22.368160360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:15:22.368183 containerd[1522]: time="2025-01-29T12:15:22.368178000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368232 containerd[1522]: time="2025-01-29T12:15:22.368200200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368232 containerd[1522]: time="2025-01-29T12:15:22.368213600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368232 containerd[1522]: time="2025-01-29T12:15:22.368227040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368281 containerd[1522]: time="2025-01-29T12:15:22.368243240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368281 containerd[1522]: time="2025-01-29T12:15:22.368256800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368281 containerd[1522]: time="2025-01-29T12:15:22.368270120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:15:22.368333 containerd[1522]: time="2025-01-29T12:15:22.368281920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 29 12:15:22.368333 containerd[1522]: time="2025-01-29T12:15:22.368302320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368333 containerd[1522]: time="2025-01-29T12:15:22.368317280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368333 containerd[1522]: time="2025-01-29T12:15:22.368329440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368342840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368355480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368368920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368397520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368411400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368427 containerd[1522]: time="2025-01-29T12:15:22.368425240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368439880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368451640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368464800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368477200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368492280Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368514360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368527 containerd[1522]: time="2025-01-29T12:15:22.368526640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368642 containerd[1522]: time="2025-01-29T12:15:22.368537680Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:15:22.368667 containerd[1522]: time="2025-01-29T12:15:22.368653120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:15:22.368689 containerd[1522]: time="2025-01-29T12:15:22.368671160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:15:22.368689 containerd[1522]: time="2025-01-29T12:15:22.368683440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:15:22.368732 containerd[1522]: time="2025-01-29T12:15:22.368697640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:15:22.368732 containerd[1522]: time="2025-01-29T12:15:22.368708720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.368732 containerd[1522]: time="2025-01-29T12:15:22.368720400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:15:22.368732 containerd[1522]: time="2025-01-29T12:15:22.368731200Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:15:22.368826 containerd[1522]: time="2025-01-29T12:15:22.368756520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 12:15:22.369160 containerd[1522]: time="2025-01-29T12:15:22.369105160Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:15:22.369387 containerd[1522]: time="2025-01-29T12:15:22.369175080Z" level=info msg="Connect containerd service" Jan 29 12:15:22.369387 containerd[1522]: time="2025-01-29T12:15:22.369203080Z" level=info msg="using legacy CRI server" Jan 29 12:15:22.369387 containerd[1522]: time="2025-01-29T12:15:22.369210560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:15:22.371069 containerd[1522]: time="2025-01-29T12:15:22.371037800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:15:22.372145 containerd[1522]: time="2025-01-29T12:15:22.372094440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:15:22.372530 containerd[1522]: time="2025-01-29T12:15:22.372457440Z" level=info msg="Start subscribing containerd event" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372599120Z" level=info msg="Start recovering state" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372615920Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372666520Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372685080Z" level=info msg="Start event monitor" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372702360Z" level=info msg="Start snapshots syncer" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372713400Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372725720Z" level=info msg="Start streaming server" Jan 29 12:15:22.373099 containerd[1522]: time="2025-01-29T12:15:22.372875880Z" level=info msg="containerd successfully booted in 0.041790s" Jan 29 12:15:22.373506 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:15:22.742192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:22.746192 (kubelet)[1595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:15:22.960577 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:15:22.979248 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:15:22.986609 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:15:22.993174 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:15:22.993445 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:15:22.996126 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:15:23.007150 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:15:23.009651 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:15:23.011454 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jan 29 12:15:23.012426 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:15:23.013201 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:15:23.014666 systemd[1]: Startup finished in 4.784s (kernel) + 3.085s (userspace) = 7.870s. Jan 29 12:15:23.220696 kubelet[1595]: E0129 12:15:23.220655 1595 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:15:23.223153 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:15:23.223347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:15:28.553192 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:15:28.562609 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370). Jan 29 12:15:28.645718 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:28.648393 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:28.666665 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:15:28.681581 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:15:28.683059 systemd-logind[1500]: New session 1 of user core. Jan 29 12:15:28.691001 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:15:28.695792 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:15:28.699741 (systemd)[1635]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:15:28.767202 systemd[1635]: Queued start job for default target default.target. Jan 29 12:15:28.767598 systemd[1635]: Created slice app.slice - User Application Slice. Jan 29 12:15:28.767621 systemd[1635]: Reached target paths.target - Paths. Jan 29 12:15:28.767633 systemd[1635]: Reached target timers.target - Timers. Jan 29 12:15:28.779478 systemd[1635]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:15:28.785075 systemd[1635]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:15:28.785136 systemd[1635]: Reached target sockets.target - Sockets. Jan 29 12:15:28.785156 systemd[1635]: Reached target basic.target - Basic System. Jan 29 12:15:28.785194 systemd[1635]: Reached target default.target - Main User Target. Jan 29 12:15:28.785218 systemd[1635]: Startup finished in 79ms. Jan 29 12:15:28.785521 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:15:28.786876 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:15:28.846601 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:39382.service - OpenSSH per-connection server daemon (10.0.0.1:39382). Jan 29 12:15:28.880234 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 39382 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:28.880679 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:28.884738 systemd-logind[1500]: New session 2 of user core. Jan 29 12:15:28.896731 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 29 12:15:28.948690 sshd[1647]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:28.964743 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:39392.service - OpenSSH per-connection server daemon (10.0.0.1:39392). Jan 29 12:15:28.965296 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:39382.service: Deactivated successfully. Jan 29 12:15:28.966630 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 12:15:28.967277 systemd-logind[1500]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:15:28.968543 systemd-logind[1500]: Removed session 2. Jan 29 12:15:28.997142 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 39392 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:28.998217 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:29.001963 systemd-logind[1500]: New session 3 of user core. Jan 29 12:15:29.010609 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:15:29.058497 sshd[1652]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:29.065584 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:39396.service - OpenSSH per-connection server daemon (10.0.0.1:39396). Jan 29 12:15:29.065926 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:39392.service: Deactivated successfully. Jan 29 12:15:29.067542 systemd-logind[1500]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:15:29.068050 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:15:29.069397 systemd-logind[1500]: Removed session 3. Jan 29 12:15:29.098579 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 39396 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:29.099668 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:29.103661 systemd-logind[1500]: New session 4 of user core. Jan 29 12:15:29.115693 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:15:29.167531 sshd[1660]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:29.178624 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:39412.service - OpenSSH per-connection server daemon (10.0.0.1:39412). Jan 29 12:15:29.179015 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:39396.service: Deactivated successfully. Jan 29 12:15:29.180765 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:15:29.181249 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:15:29.182610 systemd-logind[1500]: Removed session 4. Jan 29 12:15:29.212149 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 39412 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:29.213671 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:29.218561 systemd-logind[1500]: New session 5 of user core. Jan 29 12:15:29.223658 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:15:29.293727 sudo[1675]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:15:29.294048 sudo[1675]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:15:29.314330 sudo[1675]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:29.316385 sshd[1668]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:29.323626 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:39420.service - OpenSSH per-connection server daemon (10.0.0.1:39420). 
Jan 29 12:15:29.323995 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:39412.service: Deactivated successfully. Jan 29 12:15:29.325939 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:15:29.326508 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:15:29.327471 systemd-logind[1500]: Removed session 5. Jan 29 12:15:29.357491 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 39420 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:29.358779 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:29.362948 systemd-logind[1500]: New session 6 of user core. Jan 29 12:15:29.372717 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:15:29.424794 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:15:29.425060 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:15:29.428768 sudo[1685]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:29.433966 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:15:29.434237 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:15:29.450651 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:15:29.452081 auditctl[1688]: No rules Jan 29 12:15:29.452971 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:15:29.453243 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:15:29.455096 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:15:29.479892 augenrules[1707]: No rules Jan 29 12:15:29.481201 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:15:29.482537 sudo[1684]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:29.484566 sshd[1677]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:29.493659 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:39428.service - OpenSSH per-connection server daemon (10.0.0.1:39428). Jan 29 12:15:29.494056 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:39420.service: Deactivated successfully. Jan 29 12:15:29.496135 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:15:29.496369 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:15:29.498382 systemd-logind[1500]: Removed session 6. Jan 29 12:15:29.529468 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 39428 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44 Jan 29 12:15:29.530819 sshd[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:15:29.535049 systemd-logind[1500]: New session 7 of user core. Jan 29 12:15:29.550671 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:15:29.601111 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:15:29.601426 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:15:29.625684 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 12:15:29.641097 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 12:15:29.641334 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
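The kubelet failure at 12:15:23 above ("open /var/lib/kubelet/config.yaml: no such file or directory") simply means the node had not been configured yet: that file is normally generated by cluster tooling such as kubeadm rather than written by hand. Purely as an illustrative sketch, the snippet below writes a minimal KubeletConfiguration to that path; the apiVersion/kind and the path come from upstream Kubernetes and this log, while the individual fields are assumptions chosen to match values that appear later in this boot.

import pathlib

# Hedged sketch only: /var/lib/kubelet/config.yaml is normally generated by cluster
# tooling (e.g. kubeadm), not hand-written. Fields are illustrative; cgroupDriver mirrors
# the "CgroupDriver":"cgroupfs" value kubelet logs later in this boot, and staticPodPath
# matches the "Adding static pod path" entry below.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
staticPodPath: /etc/kubernetes/manifests
authentication:
  anonymous:
    enabled: false
"""

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(KUBELET_CONFIG)

When kubelet is restarted below at 12:15:30 it stays up, so a real config had evidently been put in place by then, presumably by the install.sh run above.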
Jan 29 12:15:30.174690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:30.193653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:30.214385 systemd[1]: Reloading requested from client PID 1772 ('systemctl') (unit session-7.scope)... Jan 29 12:15:30.214402 systemd[1]: Reloading... Jan 29 12:15:30.287397 zram_generator::config[1808]: No configuration found. Jan 29 12:15:30.405605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:15:30.454056 systemd[1]: Reloading finished in 239 ms. Jan 29 12:15:30.491521 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:15:30.491587 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:15:30.491836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:30.494084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:15:30.586825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:15:30.590904 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:15:30.626091 kubelet[1868]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:15:30.626091 kubelet[1868]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:15:30.626091 kubelet[1868]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:15:30.626996 kubelet[1868]: I0129 12:15:30.626944 1868 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:15:31.491455 kubelet[1868]: I0129 12:15:31.491412 1868 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:15:31.491455 kubelet[1868]: I0129 12:15:31.491443 1868 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:15:31.491662 kubelet[1868]: I0129 12:15:31.491648 1868 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:15:31.534396 kubelet[1868]: I0129 12:15:31.532497 1868 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:15:31.544515 kubelet[1868]: I0129 12:15:31.544479 1868 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:15:31.545238 kubelet[1868]: I0129 12:15:31.545189 1868 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:15:31.545401 kubelet[1868]: I0129 12:15:31.545224 1868 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.141","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:15:31.545503 kubelet[1868]: I0129 12:15:31.545478 1868 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:15:31.545503 kubelet[1868]: I0129 12:15:31.545489 1868 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:15:31.545762 kubelet[1868]: I0129 12:15:31.545736 1868 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:15:31.546689 kubelet[1868]: I0129 12:15:31.546668 1868 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:15:31.546786 kubelet[1868]: I0129 12:15:31.546692 1868 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:15:31.547763 kubelet[1868]: I0129 12:15:31.546914 1868 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:15:31.547763 kubelet[1868]: I0129 12:15:31.547090 1868 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:15:31.547763 kubelet[1868]: E0129 12:15:31.547574 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:31.547763 kubelet[1868]: E0129 12:15:31.547613 1868 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:31.548486 kubelet[1868]: I0129 12:15:31.548462 1868 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:15:31.548861 kubelet[1868]: I0129 12:15:31.548850 1868 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:15:31.548966 kubelet[1868]: W0129 12:15:31.548951 1868 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:15:31.549910 kubelet[1868]: I0129 12:15:31.549791 1868 server.go:1264] "Started kubelet" Jan 29 12:15:31.553388 kubelet[1868]: I0129 12:15:31.552762 1868 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:15:31.553388 kubelet[1868]: I0129 12:15:31.553227 1868 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:15:31.553680 kubelet[1868]: I0129 12:15:31.553656 1868 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:15:31.553716 kubelet[1868]: I0129 12:15:31.553660 1868 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:15:31.554004 kubelet[1868]: I0129 12:15:31.553980 1868 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:15:31.554991 kubelet[1868]: I0129 12:15:31.554965 1868 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:15:31.555735 kubelet[1868]: I0129 12:15:31.555707 1868 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:15:31.556009 kubelet[1868]: I0129 12:15:31.555989 1868 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:15:31.558036 kubelet[1868]: W0129 12:15:31.558009 1868 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:15:31.558036 kubelet[1868]: E0129 12:15:31.558046 1868 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 12:15:31.558245 kubelet[1868]: E0129 12:15:31.558091 1868 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.141\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 12:15:31.558701 kubelet[1868]: W0129 12:15:31.558369 1868 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:15:31.558701 kubelet[1868]: E0129 12:15:31.558413 1868 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.141" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 12:15:31.558701 kubelet[1868]: I0129 12:15:31.558477 1868 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:15:31.558701 kubelet[1868]: I0129 12:15:31.558570 1868 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:15:31.561050 kubelet[1868]: I0129 12:15:31.561007 1868 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:15:31.582232 kubelet[1868]: I0129 12:15:31.582210 1868 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:15:31.582385 kubelet[1868]: I0129 12:15:31.582354 1868 cpu_manager.go:215] "Reconciling" 
reconcilePeriod="10s" Jan 29 12:15:31.582907 kubelet[1868]: I0129 12:15:31.582452 1868 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:15:31.653774 kubelet[1868]: I0129 12:15:31.653664 1868 policy_none.go:49] "None policy: Start" Jan 29 12:15:31.654983 kubelet[1868]: I0129 12:15:31.654939 1868 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:15:31.654983 kubelet[1868]: I0129 12:15:31.654976 1868 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:15:31.655985 kubelet[1868]: I0129 12:15:31.655964 1868 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.141" Jan 29 12:15:31.660400 kubelet[1868]: E0129 12:15:31.658807 1868 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.141" Jan 29 12:15:31.660400 kubelet[1868]: I0129 12:15:31.659925 1868 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:15:31.660400 kubelet[1868]: I0129 12:15:31.660093 1868 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:15:31.660400 kubelet[1868]: I0129 12:15:31.660275 1868 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:15:31.662234 kubelet[1868]: E0129 12:15:31.662209 1868 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.141\" not found" Jan 29 12:15:31.678548 kubelet[1868]: I0129 12:15:31.678505 1868 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:15:31.679860 kubelet[1868]: I0129 12:15:31.679838 1868 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 12:15:31.680108 kubelet[1868]: I0129 12:15:31.679996 1868 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:15:31.680108 kubelet[1868]: I0129 12:15:31.680027 1868 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:15:31.680108 kubelet[1868]: E0129 12:15:31.680076 1868 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 12:15:31.762131 kubelet[1868]: E0129 12:15:31.762021 1868 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.141\" not found" node="10.0.0.141" Jan 29 12:15:31.860425 kubelet[1868]: I0129 12:15:31.860181 1868 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.141" Jan 29 12:15:31.867333 kubelet[1868]: I0129 12:15:31.867244 1868 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.141" Jan 29 12:15:31.882258 kubelet[1868]: E0129 12:15:31.882212 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:31.982580 kubelet[1868]: E0129 12:15:31.982540 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.083331 kubelet[1868]: E0129 12:15:32.083224 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.100878 sudo[1720]: pam_unix(sudo:session): session closed for user root Jan 29 12:15:32.102571 sshd[1713]: pam_unix(sshd:session): session closed for user core Jan 29 12:15:32.105725 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:39428.service: Deactivated successfully. Jan 29 12:15:32.109026 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:15:32.109581 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:15:32.110602 systemd-logind[1500]: Removed session 7. 
Jan 29 12:15:32.183915 kubelet[1868]: E0129 12:15:32.183871 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.284330 kubelet[1868]: E0129 12:15:32.284279 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.384842 kubelet[1868]: E0129 12:15:32.384720 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.485151 kubelet[1868]: E0129 12:15:32.485116 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.493313 kubelet[1868]: I0129 12:15:32.493277 1868 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 12:15:32.493498 kubelet[1868]: W0129 12:15:32.493480 1868 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 12:15:32.548614 kubelet[1868]: E0129 12:15:32.548570 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:32.585540 kubelet[1868]: E0129 12:15:32.585491 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.686683 kubelet[1868]: E0129 12:15:32.686561 1868 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.141\" not found" Jan 29 12:15:32.787203 kubelet[1868]: I0129 12:15:32.787179 1868 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 12:15:32.787617 containerd[1522]: time="2025-01-29T12:15:32.787504397Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
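Two containerd messages frame what happens next: the 12:15:22 error "no network config found in /etc/cni/net.d" and the 12:15:32 note above that it will "wait for other system components to drop the config". Pod networking stays not-ready until a CNI plugin (here cilium, whose pod is created below) installs a config file. Purely as an illustration of what containerd scans that directory for, a minimal bridge conflist could be generated as follows; the file name and network name are hypothetical, and the subnet reuses the 192.168.1.0/24 pod CIDR reported in the entry above.

import json, pathlib

# Illustrative only: on this node cilium later drops its own CNI config, so nothing here is
# required. It just shows the shape of a conflist that would satisfy containerd's
# "no network config found in /etc/cni/net.d" check. File and network names are made up;
# the subnet reuses the pod CIDR (192.168.1.0/24) logged by kubelet above.
conflist = {
    "cniVersion": "0.3.1",
    "name": "demo-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-demo.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conflist, indent=2))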
Jan 29 12:15:32.787950 kubelet[1868]: I0129 12:15:32.787733 1868 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 12:15:33.549630 kubelet[1868]: I0129 12:15:33.549587 1868 apiserver.go:52] "Watching apiserver" Jan 29 12:15:33.549630 kubelet[1868]: E0129 12:15:33.549617 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:33.553024 kubelet[1868]: I0129 12:15:33.552870 1868 topology_manager.go:215] "Topology Admit Handler" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" podNamespace="kube-system" podName="cilium-ggrrg" Jan 29 12:15:33.553024 kubelet[1868]: I0129 12:15:33.553010 1868 topology_manager.go:215] "Topology Admit Handler" podUID="dd302066-40f6-4d1c-a04b-9f6709510c8a" podNamespace="kube-system" podName="kube-proxy-swknv" Jan 29 12:15:33.557154 kubelet[1868]: I0129 12:15:33.556211 1868 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:15:33.565315 kubelet[1868]: I0129 12:15:33.565282 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-bpf-maps\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565372 kubelet[1868]: I0129 12:15:33.565318 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f65d93c-2baa-4d12-8c86-c580b8263671-clustermesh-secrets\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565372 kubelet[1868]: I0129 12:15:33.565341 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd302066-40f6-4d1c-a04b-9f6709510c8a-xtables-lock\") pod \"kube-proxy-swknv\" (UID: \"dd302066-40f6-4d1c-a04b-9f6709510c8a\") " pod="kube-system/kube-proxy-swknv" Jan 29 12:15:33.565372 kubelet[1868]: I0129 12:15:33.565357 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-kernel\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565463 kubelet[1868]: I0129 12:15:33.565383 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd302066-40f6-4d1c-a04b-9f6709510c8a-lib-modules\") pod \"kube-proxy-swknv\" (UID: \"dd302066-40f6-4d1c-a04b-9f6709510c8a\") " pod="kube-system/kube-proxy-swknv" Jan 29 12:15:33.565463 kubelet[1868]: I0129 12:15:33.565401 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-run\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565463 kubelet[1868]: I0129 12:15:33.565417 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-hostproc\") pod 
\"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565463 kubelet[1868]: I0129 12:15:33.565433 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-etc-cni-netd\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565463 kubelet[1868]: I0129 12:15:33.565449 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-lib-modules\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565554 kubelet[1868]: I0129 12:15:33.565466 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-xtables-lock\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565554 kubelet[1868]: I0129 12:15:33.565484 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhphf\" (UniqueName: \"kubernetes.io/projected/dd302066-40f6-4d1c-a04b-9f6709510c8a-kube-api-access-zhphf\") pod \"kube-proxy-swknv\" (UID: \"dd302066-40f6-4d1c-a04b-9f6709510c8a\") " pod="kube-system/kube-proxy-swknv" Jan 29 12:15:33.565554 kubelet[1868]: I0129 12:15:33.565516 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-cgroup\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565617 kubelet[1868]: I0129 12:15:33.565553 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cni-path\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565617 kubelet[1868]: I0129 12:15:33.565592 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-config-path\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565658 kubelet[1868]: I0129 12:15:33.565620 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-net\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565658 kubelet[1868]: I0129 12:15:33.565637 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-hubble-tls\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565695 kubelet[1868]: I0129 12:15:33.565664 1868 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc94j\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-kube-api-access-jc94j\") pod \"cilium-ggrrg\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " pod="kube-system/cilium-ggrrg" Jan 29 12:15:33.565695 kubelet[1868]: I0129 12:15:33.565683 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dd302066-40f6-4d1c-a04b-9f6709510c8a-kube-proxy\") pod \"kube-proxy-swknv\" (UID: \"dd302066-40f6-4d1c-a04b-9f6709510c8a\") " pod="kube-system/kube-proxy-swknv" Jan 29 12:15:33.855683 kubelet[1868]: E0129 12:15:33.855569 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:33.856722 containerd[1522]: time="2025-01-29T12:15:33.856666960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swknv,Uid:dd302066-40f6-4d1c-a04b-9f6709510c8a,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:33.857688 kubelet[1868]: E0129 12:15:33.857665 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:33.858037 containerd[1522]: time="2025-01-29T12:15:33.857980661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggrrg,Uid:2f65d93c-2baa-4d12-8c86-c580b8263671,Namespace:kube-system,Attempt:0,}" Jan 29 12:15:34.532700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1558226559.mount: Deactivated successfully. Jan 29 12:15:34.537584 containerd[1522]: time="2025-01-29T12:15:34.537531475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:34.538759 containerd[1522]: time="2025-01-29T12:15:34.538579294Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:34.539234 containerd[1522]: time="2025-01-29T12:15:34.539195322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:15:34.539832 containerd[1522]: time="2025-01-29T12:15:34.539770660Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 12:15:34.540363 containerd[1522]: time="2025-01-29T12:15:34.540323136Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:34.543242 containerd[1522]: time="2025-01-29T12:15:34.543207074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:15:34.544696 containerd[1522]: time="2025-01-29T12:15:34.544424471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 686.376496ms" Jan 29 12:15:34.546426 containerd[1522]: time="2025-01-29T12:15:34.546387497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 689.629364ms" Jan 29 12:15:34.550693 kubelet[1868]: E0129 12:15:34.550664 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:34.652724 containerd[1522]: time="2025-01-29T12:15:34.652583012Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:34.652724 containerd[1522]: time="2025-01-29T12:15:34.652643672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:34.652724 containerd[1522]: time="2025-01-29T12:15:34.652659996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:34.652911 containerd[1522]: time="2025-01-29T12:15:34.652741259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:34.653180 containerd[1522]: time="2025-01-29T12:15:34.653088577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:34.653180 containerd[1522]: time="2025-01-29T12:15:34.653148762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:34.653254 containerd[1522]: time="2025-01-29T12:15:34.653163936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:34.653254 containerd[1522]: time="2025-01-29T12:15:34.653235017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:34.750788 containerd[1522]: time="2025-01-29T12:15:34.750737590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ggrrg,Uid:2f65d93c-2baa-4d12-8c86-c580b8263671,Namespace:kube-system,Attempt:0,} returns sandbox id \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\"" Jan 29 12:15:34.752315 kubelet[1868]: E0129 12:15:34.752294 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:34.753159 containerd[1522]: time="2025-01-29T12:15:34.753115845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-swknv,Uid:dd302066-40f6-4d1c-a04b-9f6709510c8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5a3ce88faec21ef40e099983230692612634c16afdef2b91e5d42eed7ced42b\"" Jan 29 12:15:34.753671 kubelet[1868]: E0129 12:15:34.753651 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:34.753973 containerd[1522]: time="2025-01-29T12:15:34.753948165Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 12:15:35.551681 kubelet[1868]: E0129 12:15:35.551643 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:36.552451 kubelet[1868]: E0129 12:15:36.552415 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:37.553033 kubelet[1868]: E0129 12:15:37.552966 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:38.200263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1068587051.mount: Deactivated successfully. 
Jan 29 12:15:38.553952 kubelet[1868]: E0129 12:15:38.553816 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:39.414156 containerd[1522]: time="2025-01-29T12:15:39.414104877Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:39.415394 containerd[1522]: time="2025-01-29T12:15:39.415174494Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 12:15:39.416117 containerd[1522]: time="2025-01-29T12:15:39.416064469Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:39.417706 containerd[1522]: time="2025-01-29T12:15:39.417549498Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.66356143s" Jan 29 12:15:39.417706 containerd[1522]: time="2025-01-29T12:15:39.417583810Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 12:15:39.419146 containerd[1522]: time="2025-01-29T12:15:39.419009967Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:15:39.421274 containerd[1522]: time="2025-01-29T12:15:39.421241352Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:15:39.430774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1447543263.mount: Deactivated successfully. 
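For scale, the containerd entries above report 157,646,710 bytes read for the cilium image, pulled in 4.66356143s. A trivial back-of-the-envelope check of the transfer rate, with both numbers copied straight from those entries:

# Back-of-the-envelope only; both values come from the containerd entries above.
bytes_read = 157_646_710          # "stop pulling image ... bytes read=157646710"
pull_seconds = 4.66356143         # "Pulled image ... in 4.66356143s"
rate_mb_s = bytes_read / pull_seconds / 1e6
print(f"{rate_mb_s:.1f} MB/s")    # roughly 34 MB/s from quay.io on this boot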
Jan 29 12:15:39.433583 containerd[1522]: time="2025-01-29T12:15:39.433489150Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\"" Jan 29 12:15:39.434430 containerd[1522]: time="2025-01-29T12:15:39.434404958Z" level=info msg="StartContainer for \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\"" Jan 29 12:15:39.473953 containerd[1522]: time="2025-01-29T12:15:39.473865087Z" level=info msg="StartContainer for \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\" returns successfully" Jan 29 12:15:39.554905 kubelet[1868]: E0129 12:15:39.554839 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:39.701519 kubelet[1868]: E0129 12:15:39.701418 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:39.707565 containerd[1522]: time="2025-01-29T12:15:39.707513937Z" level=info msg="shim disconnected" id=35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f namespace=k8s.io Jan 29 12:15:39.707565 containerd[1522]: time="2025-01-29T12:15:39.707562102Z" level=warning msg="cleaning up after shim disconnected" id=35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f namespace=k8s.io Jan 29 12:15:39.707565 containerd[1522]: time="2025-01-29T12:15:39.707570939Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:40.427889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f-rootfs.mount: Deactivated successfully. Jan 29 12:15:40.555261 kubelet[1868]: E0129 12:15:40.555198 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:40.704907 kubelet[1868]: E0129 12:15:40.704622 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:40.706886 containerd[1522]: time="2025-01-29T12:15:40.706817584Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:15:40.727809 containerd[1522]: time="2025-01-29T12:15:40.727767374Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\"" Jan 29 12:15:40.728232 containerd[1522]: time="2025-01-29T12:15:40.728177582Z" level=info msg="StartContainer for \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\"" Jan 29 12:15:40.792037 containerd[1522]: time="2025-01-29T12:15:40.791980840Z" level=info msg="StartContainer for \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\" returns successfully" Jan 29 12:15:40.810476 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:15:40.810989 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 12:15:40.811054 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:40.815599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:15:40.827366 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:15:40.890815 containerd[1522]: time="2025-01-29T12:15:40.890742355Z" level=info msg="shim disconnected" id=2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4 namespace=k8s.io Jan 29 12:15:40.890815 containerd[1522]: time="2025-01-29T12:15:40.890806321Z" level=warning msg="cleaning up after shim disconnected" id=2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4 namespace=k8s.io Jan 29 12:15:40.890815 containerd[1522]: time="2025-01-29T12:15:40.890816120Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:41.043943 containerd[1522]: time="2025-01-29T12:15:41.043794244Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:41.044796 containerd[1522]: time="2025-01-29T12:15:41.044755973Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 12:15:41.045918 containerd[1522]: time="2025-01-29T12:15:41.045873719Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:41.047954 containerd[1522]: time="2025-01-29T12:15:41.047889194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:41.049148 containerd[1522]: time="2025-01-29T12:15:41.048781143Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.629680318s" Jan 29 12:15:41.049148 containerd[1522]: time="2025-01-29T12:15:41.048835938Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 12:15:41.050910 containerd[1522]: time="2025-01-29T12:15:41.050879866Z" level=info msg="CreateContainer within sandbox \"c5a3ce88faec21ef40e099983230692612634c16afdef2b91e5d42eed7ced42b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:15:41.061822 containerd[1522]: time="2025-01-29T12:15:41.061729155Z" level=info msg="CreateContainer within sandbox \"c5a3ce88faec21ef40e099983230692612634c16afdef2b91e5d42eed7ced42b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c34570e85e67ecb58e4d547ee15707d4162a19a90fe24368ab2e9d9a2ac2097b\"" Jan 29 12:15:41.062271 containerd[1522]: time="2025-01-29T12:15:41.062232515Z" level=info msg="StartContainer for \"c34570e85e67ecb58e4d547ee15707d4162a19a90fe24368ab2e9d9a2ac2097b\"" Jan 29 12:15:41.107852 containerd[1522]: time="2025-01-29T12:15:41.107775059Z" level=info msg="StartContainer for \"c34570e85e67ecb58e4d547ee15707d4162a19a90fe24368ab2e9d9a2ac2097b\" returns successfully" Jan 29 12:15:41.428687 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4-rootfs.mount: Deactivated successfully. Jan 29 12:15:41.428824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495903917.mount: Deactivated successfully. Jan 29 12:15:41.555587 kubelet[1868]: E0129 12:15:41.555554 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:41.708676 kubelet[1868]: E0129 12:15:41.708493 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:41.711182 kubelet[1868]: E0129 12:15:41.711141 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:41.713754 containerd[1522]: time="2025-01-29T12:15:41.713616433Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:15:41.720058 kubelet[1868]: I0129 12:15:41.719981 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-swknv" podStartSLOduration=4.424520311 podStartE2EDuration="10.71996971s" podCreationTimestamp="2025-01-29 12:15:31 +0000 UTC" firstStartedPulling="2025-01-29 12:15:34.754092583 +0000 UTC m=+4.160079141" lastFinishedPulling="2025-01-29 12:15:41.049541942 +0000 UTC m=+10.455528540" observedRunningTime="2025-01-29 12:15:41.718895561 +0000 UTC m=+11.124882159" watchObservedRunningTime="2025-01-29 12:15:41.71996971 +0000 UTC m=+11.125956308" Jan 29 12:15:41.731845 containerd[1522]: time="2025-01-29T12:15:41.731792245Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\"" Jan 29 12:15:41.732616 containerd[1522]: time="2025-01-29T12:15:41.732568905Z" level=info msg="StartContainer for \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\"" Jan 29 12:15:41.784848 containerd[1522]: time="2025-01-29T12:15:41.784741929Z" level=info msg="StartContainer for \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\" returns successfully" Jan 29 12:15:41.878073 containerd[1522]: time="2025-01-29T12:15:41.878008350Z" level=info msg="shim disconnected" id=3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d namespace=k8s.io Jan 29 12:15:41.878073 containerd[1522]: time="2025-01-29T12:15:41.878068366Z" level=warning msg="cleaning up after shim disconnected" id=3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d namespace=k8s.io Jan 29 12:15:41.878073 containerd[1522]: time="2025-01-29T12:15:41.878076974Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:42.427763 systemd[1]: run-containerd-runc-k8s.io-3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d-runc.yI2oCQ.mount: Deactivated successfully. Jan 29 12:15:42.427912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d-rootfs.mount: Deactivated successfully. 
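The pod_startup_latency_tracker entry above for kube-proxy-swknv reports both podStartE2EDuration (pod creation to observed running) and a smaller podStartSLOduration; the logged timestamps are consistent with the SLO figure being that same interval minus the time spent pulling the image. A small sketch that redoes the arithmetic from those timestamps (truncated to microseconds, so the results only approximate the logged values):

from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker entry above, truncated to
# microseconds for strptime. The relation shown (SLO ~ E2E minus image-pull time) is an
# inference from these numbers, not a statement about kubelet internals.
fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created       = datetime.strptime("2025-01-29 12:15:31.000000 +0000", fmt)  # podCreationTimestamp
first_pulling = datetime.strptime("2025-01-29 12:15:34.754092 +0000", fmt)  # firstStartedPulling
last_pulling  = datetime.strptime("2025-01-29 12:15:41.049541 +0000", fmt)  # lastFinishedPulling
running       = datetime.strptime("2025-01-29 12:15:41.719969 +0000", fmt)  # watchObservedRunningTime

e2e     = (running - created).total_seconds()             # ~10.72s, cf. podStartE2EDuration
pulling = (last_pulling - first_pulling).total_seconds()  # ~6.30s pulling the kube-proxy image
print(e2e, pulling, e2e - pulling)                        # e2e - pulling ~ 4.42s, cf. podStartSLOduration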
Jan 29 12:15:42.556538 kubelet[1868]: E0129 12:15:42.556484 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:42.713992 kubelet[1868]: E0129 12:15:42.713895 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:42.714303 kubelet[1868]: E0129 12:15:42.714012 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:42.716133 containerd[1522]: time="2025-01-29T12:15:42.716021093Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:15:42.732217 containerd[1522]: time="2025-01-29T12:15:42.732165131Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\"" Jan 29 12:15:42.732629 containerd[1522]: time="2025-01-29T12:15:42.732608922Z" level=info msg="StartContainer for \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\"" Jan 29 12:15:42.770758 containerd[1522]: time="2025-01-29T12:15:42.770720419Z" level=info msg="StartContainer for \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\" returns successfully" Jan 29 12:15:42.785656 containerd[1522]: time="2025-01-29T12:15:42.785587029Z" level=info msg="shim disconnected" id=dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4 namespace=k8s.io Jan 29 12:15:42.785656 containerd[1522]: time="2025-01-29T12:15:42.785655684Z" level=warning msg="cleaning up after shim disconnected" id=dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4 namespace=k8s.io Jan 29 12:15:42.785809 containerd[1522]: time="2025-01-29T12:15:42.785665253Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:15:43.427900 systemd[1]: run-containerd-runc-k8s.io-dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4-runc.Adz8rF.mount: Deactivated successfully. Jan 29 12:15:43.428047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4-rootfs.mount: Deactivated successfully. 
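[Annotation, not part of the log] The recurring dns.go:153 "Nameserver limits exceeded" warnings above mean the host's resolv.conf lists more nameservers than kubelet will propagate into a pod's resolv.conf, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied and the rest are dropped. A minimal Go sketch of that truncation; the fourth address below is an invented placeholder for whatever extra entry triggered the warning, and the limit of three is the per-pod cap kubelet enforces:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the per-pod nameserver cap that produces the warning above.
    const maxNameservers = 3

    func main() {
        // Hypothetical host resolv.conf: the first three addresses are the ones the log
        // shows being applied; 192.0.2.53 (TEST-NET) stands in for the dropped extra entry.
        resolvConf := `nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 192.0.2.53
    `
        var servers []string
        sc := bufio.NewScanner(strings.NewReader(resolvConf))
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded: keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        }
    }
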
Jan 29 12:15:43.557124 kubelet[1868]: E0129 12:15:43.557073 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:43.717188 kubelet[1868]: E0129 12:15:43.717107 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:43.719279 containerd[1522]: time="2025-01-29T12:15:43.719219715Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:15:43.733402 containerd[1522]: time="2025-01-29T12:15:43.733237475Z" level=info msg="CreateContainer within sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\"" Jan 29 12:15:43.733800 containerd[1522]: time="2025-01-29T12:15:43.733718780Z" level=info msg="StartContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\"" Jan 29 12:15:43.772785 containerd[1522]: time="2025-01-29T12:15:43.772675944Z" level=info msg="StartContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" returns successfully" Jan 29 12:15:43.887361 kubelet[1868]: I0129 12:15:43.887319 1868 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:15:44.256400 kernel: Initializing XFRM netlink socket Jan 29 12:15:44.427985 systemd[1]: run-containerd-runc-k8s.io-4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d-runc.kfDE0P.mount: Deactivated successfully. 
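[Annotation, not part of the log] "Fast updating node status as it just became ready" follows the cilium-agent container start above: once the CNI is up, the node's Ready condition flips. A client-go sketch that reads that condition from the API server; it assumes a reachable kubeconfig at the default home location, and the node name 10.0.0.141 is taken from the volume-detach entries later in this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config exists; swap in rest.InClusterConfig() when running in-cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "10.0.0.141", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Printf("Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
            }
        }
    }
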
Jan 29 12:15:44.557935 kubelet[1868]: E0129 12:15:44.557844 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:44.721671 kubelet[1868]: E0129 12:15:44.721629 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:44.735111 kubelet[1868]: I0129 12:15:44.735050 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ggrrg" podStartSLOduration=9.069598031 podStartE2EDuration="13.735035973s" podCreationTimestamp="2025-01-29 12:15:31 +0000 UTC" firstStartedPulling="2025-01-29 12:15:34.75331922 +0000 UTC m=+4.159305817" lastFinishedPulling="2025-01-29 12:15:39.418757161 +0000 UTC m=+8.824743759" observedRunningTime="2025-01-29 12:15:44.73427284 +0000 UTC m=+14.140259438" watchObservedRunningTime="2025-01-29 12:15:44.735035973 +0000 UTC m=+14.141022571" Jan 29 12:15:45.558471 kubelet[1868]: E0129 12:15:45.558426 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:45.723453 kubelet[1868]: E0129 12:15:45.723418 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:45.866332 systemd-networkd[1231]: cilium_host: Link UP Jan 29 12:15:45.866478 systemd-networkd[1231]: cilium_net: Link UP Jan 29 12:15:45.866616 systemd-networkd[1231]: cilium_net: Gained carrier Jan 29 12:15:45.866746 systemd-networkd[1231]: cilium_host: Gained carrier Jan 29 12:15:45.940717 systemd-networkd[1231]: cilium_vxlan: Link UP Jan 29 12:15:45.940724 systemd-networkd[1231]: cilium_vxlan: Gained carrier Jan 29 12:15:46.221417 kernel: NET: Registered PF_ALG protocol family Jan 29 12:15:46.335575 systemd-networkd[1231]: cilium_host: Gained IPv6LL Jan 29 12:15:46.558984 kubelet[1868]: E0129 12:15:46.558942 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:46.599586 systemd-networkd[1231]: cilium_net: Gained IPv6LL Jan 29 12:15:46.725078 kubelet[1868]: E0129 12:15:46.724991 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:46.754685 systemd-networkd[1231]: lxc_health: Link UP Jan 29 12:15:46.765149 systemd-networkd[1231]: lxc_health: Gained carrier Jan 29 12:15:47.431858 systemd-networkd[1231]: cilium_vxlan: Gained IPv6LL Jan 29 12:15:47.559810 kubelet[1868]: E0129 12:15:47.559765 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:47.859714 kubelet[1868]: E0129 12:15:47.859615 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:15:47.954404 kubelet[1868]: I0129 12:15:47.951712 1868 topology_manager.go:215] "Topology Admit Handler" podUID="4c680633-304f-4afa-bb4a-8de72b9e1126" podNamespace="default" podName="nginx-deployment-85f456d6dd-twrbl" Jan 29 12:15:48.050505 kubelet[1868]: I0129 12:15:48.050467 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx9lg\" 
(UniqueName: \"kubernetes.io/projected/4c680633-304f-4afa-bb4a-8de72b9e1126-kube-api-access-fx9lg\") pod \"nginx-deployment-85f456d6dd-twrbl\" (UID: \"4c680633-304f-4afa-bb4a-8de72b9e1126\") " pod="default/nginx-deployment-85f456d6dd-twrbl" Jan 29 12:15:48.254643 containerd[1522]: time="2025-01-29T12:15:48.254540466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-twrbl,Uid:4c680633-304f-4afa-bb4a-8de72b9e1126,Namespace:default,Attempt:0,}" Jan 29 12:15:48.323370 systemd-networkd[1231]: lxcc4a36199b3ba: Link UP Jan 29 12:15:48.332405 kernel: eth0: renamed from tmp38b76 Jan 29 12:15:48.338078 systemd-networkd[1231]: lxcc4a36199b3ba: Gained carrier Jan 29 12:15:48.560982 kubelet[1868]: E0129 12:15:48.560830 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:48.711641 systemd-networkd[1231]: lxc_health: Gained IPv6LL Jan 29 12:15:49.561207 kubelet[1868]: E0129 12:15:49.561165 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:50.375609 systemd-networkd[1231]: lxcc4a36199b3ba: Gained IPv6LL Jan 29 12:15:50.562458 kubelet[1868]: E0129 12:15:50.562417 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:51.210289 containerd[1522]: time="2025-01-29T12:15:51.210188525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:15:51.210289 containerd[1522]: time="2025-01-29T12:15:51.210247587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:15:51.210289 containerd[1522]: time="2025-01-29T12:15:51.210258696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:51.210769 containerd[1522]: time="2025-01-29T12:15:51.210342214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:15:51.233990 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:15:51.249444 containerd[1522]: time="2025-01-29T12:15:51.249412100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-twrbl,Uid:4c680633-304f-4afa-bb4a-8de72b9e1126,Namespace:default,Attempt:0,} returns sandbox id \"38b76be565b4152b49214fd89c7b6be256aa83fff599c664d273b900ae154b77\"" Jan 29 12:15:51.251153 containerd[1522]: time="2025-01-29T12:15:51.251111915Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:15:51.547704 kubelet[1868]: E0129 12:15:51.547574 1868 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:51.563114 kubelet[1868]: E0129 12:15:51.563081 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:52.563198 kubelet[1868]: E0129 12:15:52.563164 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:53.167557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333403534.mount: Deactivated successfully. 
Jan 29 12:15:53.564019 kubelet[1868]: E0129 12:15:53.563902 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:53.912962 containerd[1522]: time="2025-01-29T12:15:53.912838584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:53.913462 containerd[1522]: time="2025-01-29T12:15:53.913394287Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 12:15:53.914076 containerd[1522]: time="2025-01-29T12:15:53.914051754Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:53.917424 containerd[1522]: time="2025-01-29T12:15:53.917347803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:15:53.921347 containerd[1522]: time="2025-01-29T12:15:53.921301598Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.670157712s" Jan 29 12:15:53.921405 containerd[1522]: time="2025-01-29T12:15:53.921348043Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 12:15:53.923398 containerd[1522]: time="2025-01-29T12:15:53.923346185Z" level=info msg="CreateContainer within sandbox \"38b76be565b4152b49214fd89c7b6be256aa83fff599c664d273b900ae154b77\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 12:15:53.932607 containerd[1522]: time="2025-01-29T12:15:53.932569469Z" level=info msg="CreateContainer within sandbox \"38b76be565b4152b49214fd89c7b6be256aa83fff599c664d273b900ae154b77\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1287a7e57d63dd86197a74c7a9320e07d0db2344957c7cadb10f0aa1fb234ac0\"" Jan 29 12:15:53.933147 containerd[1522]: time="2025-01-29T12:15:53.933004063Z" level=info msg="StartContainer for \"1287a7e57d63dd86197a74c7a9320e07d0db2344957c7cadb10f0aa1fb234ac0\"" Jan 29 12:15:53.979251 containerd[1522]: time="2025-01-29T12:15:53.979196705Z" level=info msg="StartContainer for \"1287a7e57d63dd86197a74c7a9320e07d0db2344957c7cadb10f0aa1fb234ac0\" returns successfully" Jan 29 12:15:54.564912 kubelet[1868]: E0129 12:15:54.564860 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:54.930455 systemd[1]: run-containerd-runc-k8s.io-1287a7e57d63dd86197a74c7a9320e07d0db2344957c7cadb10f0aa1fb234ac0-runc.ALcElw.mount: Deactivated successfully. 
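[Annotation, not part of the log] The nginx pull above reports bytes read=67680490 and a duration of 2.670157712s, i.e. roughly 24 MiB/s effective throughput. A one-off Go calculation from the two figures logged above:

    package main

    import "fmt"

    func main() {
        const bytesRead = 67680490  // "bytes read" while pulling ghcr.io/flatcar/nginx:latest
        const seconds = 2.670157712 // duration from the "Pulled image" entry
        fmt.Printf("~%.1f MiB/s\n", bytesRead/seconds/(1024*1024))
    }
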
Jan 29 12:15:55.565209 kubelet[1868]: E0129 12:15:55.565136 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:56.565806 kubelet[1868]: E0129 12:15:56.565763 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:57.566216 kubelet[1868]: E0129 12:15:57.566158 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:58.567277 kubelet[1868]: E0129 12:15:58.567227 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:59.567439 kubelet[1868]: E0129 12:15:59.567392 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:15:59.973324 kubelet[1868]: I0129 12:15:59.973182 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-twrbl" podStartSLOduration=10.301504515 podStartE2EDuration="12.973165721s" podCreationTimestamp="2025-01-29 12:15:47 +0000 UTC" firstStartedPulling="2025-01-29 12:15:51.250702476 +0000 UTC m=+20.656689074" lastFinishedPulling="2025-01-29 12:15:53.922363682 +0000 UTC m=+23.328350280" observedRunningTime="2025-01-29 12:15:54.747615041 +0000 UTC m=+24.153601639" watchObservedRunningTime="2025-01-29 12:15:59.973165721 +0000 UTC m=+29.379152319" Jan 29 12:15:59.973469 kubelet[1868]: I0129 12:15:59.973309 1868 topology_manager.go:215] "Topology Admit Handler" podUID="07d3967f-b93e-488a-95ff-734cf393c590" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 12:16:00.013359 kubelet[1868]: I0129 12:16:00.013318 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s98p6\" (UniqueName: \"kubernetes.io/projected/07d3967f-b93e-488a-95ff-734cf393c590-kube-api-access-s98p6\") pod \"nfs-server-provisioner-0\" (UID: \"07d3967f-b93e-488a-95ff-734cf393c590\") " pod="default/nfs-server-provisioner-0" Jan 29 12:16:00.013359 kubelet[1868]: I0129 12:16:00.013363 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/07d3967f-b93e-488a-95ff-734cf393c590-data\") pod \"nfs-server-provisioner-0\" (UID: \"07d3967f-b93e-488a-95ff-734cf393c590\") " pod="default/nfs-server-provisioner-0" Jan 29 12:16:00.277671 containerd[1522]: time="2025-01-29T12:16:00.277552750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:07d3967f-b93e-488a-95ff-734cf393c590,Namespace:default,Attempt:0,}" Jan 29 12:16:00.301049 systemd-networkd[1231]: lxccf01b3ee7350: Link UP Jan 29 12:16:00.309429 kernel: eth0: renamed from tmpa4326 Jan 29 12:16:00.313002 systemd-networkd[1231]: lxccf01b3ee7350: Gained carrier Jan 29 12:16:00.487701 containerd[1522]: time="2025-01-29T12:16:00.487526185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:00.487701 containerd[1522]: time="2025-01-29T12:16:00.487575396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:00.487701 containerd[1522]: time="2025-01-29T12:16:00.487586279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:00.487701 containerd[1522]: time="2025-01-29T12:16:00.487669859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:00.507585 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:16:00.523336 containerd[1522]: time="2025-01-29T12:16:00.523301328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:07d3967f-b93e-488a-95ff-734cf393c590,Namespace:default,Attempt:0,} returns sandbox id \"a43266294adb422b17e2409942ac7696cc1038633664aed4272f2af64edcabf4\"" Jan 29 12:16:00.524900 containerd[1522]: time="2025-01-29T12:16:00.524803243Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 12:16:00.568328 kubelet[1868]: E0129 12:16:00.568218 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:01.568810 kubelet[1868]: E0129 12:16:01.568764 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:02.343552 systemd-networkd[1231]: lxccf01b3ee7350: Gained IPv6LL Jan 29 12:16:02.569283 kubelet[1868]: E0129 12:16:02.569245 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:02.607932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547522772.mount: Deactivated successfully. Jan 29 12:16:03.569704 kubelet[1868]: E0129 12:16:03.569657 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:03.741892 kubelet[1868]: I0129 12:16:03.741819 1868 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:16:03.742679 kubelet[1868]: E0129 12:16:03.742632 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:03.757656 kubelet[1868]: E0129 12:16:03.757617 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:04.027295 containerd[1522]: time="2025-01-29T12:16:04.027154826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:04.032367 containerd[1522]: time="2025-01-29T12:16:04.032311609Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 29 12:16:04.033494 containerd[1522]: time="2025-01-29T12:16:04.033452707Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:04.036307 containerd[1522]: time="2025-01-29T12:16:04.036251481Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:04.037541 containerd[1522]: time="2025-01-29T12:16:04.037498238Z" level=info msg="Pulled image 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.512640103s" Jan 29 12:16:04.037541 containerd[1522]: time="2025-01-29T12:16:04.037535125Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 12:16:04.040296 containerd[1522]: time="2025-01-29T12:16:04.040260125Z" level=info msg="CreateContainer within sandbox \"a43266294adb422b17e2409942ac7696cc1038633664aed4272f2af64edcabf4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 12:16:04.060223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2806236081.mount: Deactivated successfully. Jan 29 12:16:04.063858 containerd[1522]: time="2025-01-29T12:16:04.063738842Z" level=info msg="CreateContainer within sandbox \"a43266294adb422b17e2409942ac7696cc1038633664aed4272f2af64edcabf4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4aa3a0b343a42b400cc45eb73753dbd782bb1d23e10e0fbb9db4f6bcb5b7d1b5\"" Jan 29 12:16:04.064366 containerd[1522]: time="2025-01-29T12:16:04.064250459Z" level=info msg="StartContainer for \"4aa3a0b343a42b400cc45eb73753dbd782bb1d23e10e0fbb9db4f6bcb5b7d1b5\"" Jan 29 12:16:04.150208 containerd[1522]: time="2025-01-29T12:16:04.150170083Z" level=info msg="StartContainer for \"4aa3a0b343a42b400cc45eb73753dbd782bb1d23e10e0fbb9db4f6bcb5b7d1b5\" returns successfully" Jan 29 12:16:04.570010 kubelet[1868]: E0129 12:16:04.569965 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:05.046414 systemd[1]: run-containerd-runc-k8s.io-4aa3a0b343a42b400cc45eb73753dbd782bb1d23e10e0fbb9db4f6bcb5b7d1b5-runc.obvAfg.mount: Deactivated successfully. Jan 29 12:16:05.570447 kubelet[1868]: E0129 12:16:05.570341 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:06.570759 kubelet[1868]: E0129 12:16:06.570704 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:07.159519 update_engine[1510]: I20250129 12:16:07.159431 1510 update_attempter.cc:509] Updating boot flags... 
Jan 29 12:16:07.183454 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3243) Jan 29 12:16:07.220407 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3243) Jan 29 12:16:07.571951 kubelet[1868]: E0129 12:16:07.571649 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:08.572502 kubelet[1868]: E0129 12:16:08.572399 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:09.572562 kubelet[1868]: E0129 12:16:09.572494 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:10.573389 kubelet[1868]: E0129 12:16:10.573333 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:11.547204 kubelet[1868]: E0129 12:16:11.547159 1868 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:11.574002 kubelet[1868]: E0129 12:16:11.573944 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:12.575011 kubelet[1868]: E0129 12:16:12.574938 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:13.576066 kubelet[1868]: E0129 12:16:13.576020 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:13.894837 kubelet[1868]: I0129 12:16:13.894413 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.380150288 podStartE2EDuration="14.894362913s" podCreationTimestamp="2025-01-29 12:15:59 +0000 UTC" firstStartedPulling="2025-01-29 12:16:00.52436774 +0000 UTC m=+29.930354338" lastFinishedPulling="2025-01-29 12:16:04.038580365 +0000 UTC m=+33.444566963" observedRunningTime="2025-01-29 12:16:04.777769193 +0000 UTC m=+34.183755791" watchObservedRunningTime="2025-01-29 12:16:13.894362913 +0000 UTC m=+43.300349511" Jan 29 12:16:13.894837 kubelet[1868]: I0129 12:16:13.894538 1868 topology_manager.go:215] "Topology Admit Handler" podUID="7bf25168-8bed-496e-a510-2813af2b3839" podNamespace="default" podName="test-pod-1" Jan 29 12:16:13.990458 kubelet[1868]: I0129 12:16:13.990389 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-331c6a41-19d7-410b-826e-d83cc8a39bf2\" (UniqueName: \"kubernetes.io/nfs/7bf25168-8bed-496e-a510-2813af2b3839-pvc-331c6a41-19d7-410b-826e-d83cc8a39bf2\") pod \"test-pod-1\" (UID: \"7bf25168-8bed-496e-a510-2813af2b3839\") " pod="default/test-pod-1" Jan 29 12:16:13.990458 kubelet[1868]: I0129 12:16:13.990434 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g68t9\" (UniqueName: \"kubernetes.io/projected/7bf25168-8bed-496e-a510-2813af2b3839-kube-api-access-g68t9\") pod \"test-pod-1\" (UID: \"7bf25168-8bed-496e-a510-2813af2b3839\") " pod="default/test-pod-1" Jan 29 12:16:14.112412 kernel: FS-Cache: Loaded Jan 29 12:16:14.136725 kernel: RPC: Registered named UNIX socket transport module. Jan 29 12:16:14.136797 kernel: RPC: Registered udp transport module. 
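[Annotation, not part of the log] test-pod-1 above mounts the NFS-backed PersistentVolume pvc-331c6a41-19d7-410b-826e-d83cc8a39bf2 served by nfs-server-provisioner-0 from earlier in the log, which is why the kernel is loading the FS-Cache/RPC/NFS modules at this point. A client-go sketch (kubeconfig path assumed, PV name taken from the reconciler entry above) that prints where that volume actually points:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pv, err := cs.CoreV1().PersistentVolumes().Get(context.TODO(),
            "pvc-331c6a41-19d7-410b-826e-d83cc8a39bf2", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if nfs := pv.Spec.NFS; nfs != nil {
            fmt.Printf("NFS export %s:%s (readOnly=%v)\n", nfs.Server, nfs.Path, nfs.ReadOnly)
        }
    }
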
Jan 29 12:16:14.136817 kernel: RPC: Registered tcp transport module. Jan 29 12:16:14.136833 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 12:16:14.137877 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 12:16:14.314164 kernel: NFS: Registering the id_resolver key type Jan 29 12:16:14.314296 kernel: Key type id_resolver registered Jan 29 12:16:14.314313 kernel: Key type id_legacy registered Jan 29 12:16:14.344789 nfsidmap[3271]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 12:16:14.350121 nfsidmap[3274]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 12:16:14.497885 containerd[1522]: time="2025-01-29T12:16:14.497821246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7bf25168-8bed-496e-a510-2813af2b3839,Namespace:default,Attempt:0,}" Jan 29 12:16:14.518199 systemd-networkd[1231]: lxc2a9ab2dd4b42: Link UP Jan 29 12:16:14.530412 kernel: eth0: renamed from tmpdf2ab Jan 29 12:16:14.538416 systemd-networkd[1231]: lxc2a9ab2dd4b42: Gained carrier Jan 29 12:16:14.576941 kubelet[1868]: E0129 12:16:14.576902 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:14.671583 containerd[1522]: time="2025-01-29T12:16:14.670844256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:14.671583 containerd[1522]: time="2025-01-29T12:16:14.671118488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:14.671583 containerd[1522]: time="2025-01-29T12:16:14.671136610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:14.671583 containerd[1522]: time="2025-01-29T12:16:14.671220580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:14.692855 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:16:14.719992 containerd[1522]: time="2025-01-29T12:16:14.719953340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7bf25168-8bed-496e-a510-2813af2b3839,Namespace:default,Attempt:0,} returns sandbox id \"df2abcd150beabfead8f951aff17e1f132b2f38459945f8e5a25400e889a163c\"" Jan 29 12:16:14.721753 containerd[1522]: time="2025-01-29T12:16:14.721711345Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 12:16:15.005741 containerd[1522]: time="2025-01-29T12:16:15.002027054Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:15.006402 containerd[1522]: time="2025-01-29T12:16:15.006090467Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 12:16:15.009049 containerd[1522]: time="2025-01-29T12:16:15.009009073Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 287.260323ms" Jan 29 12:16:15.009110 containerd[1522]: time="2025-01-29T12:16:15.009050277Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 12:16:15.011686 containerd[1522]: time="2025-01-29T12:16:15.011542275Z" level=info msg="CreateContainer within sandbox \"df2abcd150beabfead8f951aff17e1f132b2f38459945f8e5a25400e889a163c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 12:16:15.020394 containerd[1522]: time="2025-01-29T12:16:15.020343536Z" level=info msg="CreateContainer within sandbox \"df2abcd150beabfead8f951aff17e1f132b2f38459945f8e5a25400e889a163c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"80cd00b0467ba49a42a7682252038b03802bf9786a3205caf64d33c76b83d579\"" Jan 29 12:16:15.020808 containerd[1522]: time="2025-01-29T12:16:15.020780705Z" level=info msg="StartContainer for \"80cd00b0467ba49a42a7682252038b03802bf9786a3205caf64d33c76b83d579\"" Jan 29 12:16:15.074309 containerd[1522]: time="2025-01-29T12:16:15.074256425Z" level=info msg="StartContainer for \"80cd00b0467ba49a42a7682252038b03802bf9786a3205caf64d33c76b83d579\" returns successfully" Jan 29 12:16:15.577884 kubelet[1868]: E0129 12:16:15.577829 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:15.798818 kubelet[1868]: I0129 12:16:15.798747 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.510256358 podStartE2EDuration="15.798731219s" podCreationTimestamp="2025-01-29 12:16:00 +0000 UTC" firstStartedPulling="2025-01-29 12:16:14.721225449 +0000 UTC m=+44.127212047" lastFinishedPulling="2025-01-29 12:16:15.00970031 +0000 UTC m=+44.415686908" observedRunningTime="2025-01-29 12:16:15.798542038 +0000 UTC m=+45.204528636" watchObservedRunningTime="2025-01-29 12:16:15.798731219 +0000 UTC m=+45.204717817" Jan 29 12:16:16.487648 systemd-networkd[1231]: lxc2a9ab2dd4b42: Gained IPv6LL Jan 29 12:16:16.578925 
kubelet[1868]: E0129 12:16:16.578886 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:17.579663 kubelet[1868]: E0129 12:16:17.579616 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:17.607990 containerd[1522]: time="2025-01-29T12:16:17.607920980Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:16:17.613410 containerd[1522]: time="2025-01-29T12:16:17.613351415Z" level=info msg="StopContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" with timeout 2 (s)" Jan 29 12:16:17.613633 containerd[1522]: time="2025-01-29T12:16:17.613608481Z" level=info msg="Stop container \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" with signal terminated" Jan 29 12:16:17.618610 systemd-networkd[1231]: lxc_health: Link DOWN Jan 29 12:16:17.618619 systemd-networkd[1231]: lxc_health: Lost carrier Jan 29 12:16:17.660286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d-rootfs.mount: Deactivated successfully. Jan 29 12:16:17.710003 containerd[1522]: time="2025-01-29T12:16:17.709890480Z" level=info msg="shim disconnected" id=4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d namespace=k8s.io Jan 29 12:16:17.710179 containerd[1522]: time="2025-01-29T12:16:17.709999331Z" level=warning msg="cleaning up after shim disconnected" id=4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d namespace=k8s.io Jan 29 12:16:17.710179 containerd[1522]: time="2025-01-29T12:16:17.710023293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.722909 containerd[1522]: time="2025-01-29T12:16:17.722856125Z" level=info msg="StopContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" returns successfully" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723628003Z" level=info msg="StopPodSandbox for \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\"" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723667568Z" level=info msg="Container to stop \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723679129Z" level=info msg="Container to stop \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723688690Z" level=info msg="Container to stop \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723697331Z" level=info msg="Container to stop \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.723804 containerd[1522]: time="2025-01-29T12:16:17.723707932Z" level=info msg="Container to stop 
\"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 12:16:17.725317 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3-shm.mount: Deactivated successfully. Jan 29 12:16:17.742568 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3-rootfs.mount: Deactivated successfully. Jan 29 12:16:17.748747 containerd[1522]: time="2025-01-29T12:16:17.748646760Z" level=info msg="shim disconnected" id=850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3 namespace=k8s.io Jan 29 12:16:17.748747 containerd[1522]: time="2025-01-29T12:16:17.748708766Z" level=warning msg="cleaning up after shim disconnected" id=850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3 namespace=k8s.io Jan 29 12:16:17.748747 containerd[1522]: time="2025-01-29T12:16:17.748716967Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:17.758981 containerd[1522]: time="2025-01-29T12:16:17.758946132Z" level=info msg="TearDown network for sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" successfully" Jan 29 12:16:17.759214 containerd[1522]: time="2025-01-29T12:16:17.759085027Z" level=info msg="StopPodSandbox for \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" returns successfully" Jan 29 12:16:17.795024 kubelet[1868]: I0129 12:16:17.794998 1868 scope.go:117] "RemoveContainer" containerID="4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d" Jan 29 12:16:17.796474 containerd[1522]: time="2025-01-29T12:16:17.796436123Z" level=info msg="RemoveContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\"" Jan 29 12:16:17.809165 containerd[1522]: time="2025-01-29T12:16:17.809091337Z" level=info msg="RemoveContainer for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" returns successfully" Jan 29 12:16:17.809576 kubelet[1868]: I0129 12:16:17.809458 1868 scope.go:117] "RemoveContainer" containerID="dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4" Jan 29 12:16:17.811005 containerd[1522]: time="2025-01-29T12:16:17.810964968Z" level=info msg="RemoveContainer for \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\"" Jan 29 12:16:17.814068 containerd[1522]: time="2025-01-29T12:16:17.814034882Z" level=info msg="RemoveContainer for \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\" returns successfully" Jan 29 12:16:17.814215 kubelet[1868]: I0129 12:16:17.814192 1868 scope.go:117] "RemoveContainer" containerID="3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d" Jan 29 12:16:17.815110 containerd[1522]: time="2025-01-29T12:16:17.815087069Z" level=info msg="RemoveContainer for \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\"" Jan 29 12:16:17.817200 containerd[1522]: time="2025-01-29T12:16:17.817167842Z" level=info msg="RemoveContainer for \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\" returns successfully" Jan 29 12:16:17.817362 kubelet[1868]: I0129 12:16:17.817331 1868 scope.go:117] "RemoveContainer" containerID="2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4" Jan 29 12:16:17.818241 containerd[1522]: time="2025-01-29T12:16:17.818217669Z" level=info msg="RemoveContainer for \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\"" Jan 
29 12:16:17.820282 containerd[1522]: time="2025-01-29T12:16:17.820247117Z" level=info msg="RemoveContainer for \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\" returns successfully" Jan 29 12:16:17.820468 kubelet[1868]: I0129 12:16:17.820439 1868 scope.go:117] "RemoveContainer" containerID="35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f" Jan 29 12:16:17.821365 containerd[1522]: time="2025-01-29T12:16:17.821344469Z" level=info msg="RemoveContainer for \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\"" Jan 29 12:16:17.823575 containerd[1522]: time="2025-01-29T12:16:17.823548334Z" level=info msg="RemoveContainer for \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\" returns successfully" Jan 29 12:16:17.823729 kubelet[1868]: I0129 12:16:17.823710 1868 scope.go:117] "RemoveContainer" containerID="4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d" Jan 29 12:16:17.823928 containerd[1522]: time="2025-01-29T12:16:17.823893289Z" level=error msg="ContainerStatus for \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\": not found" Jan 29 12:16:17.824049 kubelet[1868]: E0129 12:16:17.824021 1868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\": not found" containerID="4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d" Jan 29 12:16:17.824132 kubelet[1868]: I0129 12:16:17.824053 1868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d"} err="failed to get container status \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\": rpc error: code = NotFound desc = an error occurred when try to find container \"4740d2f144bd99732e1bc5e3a025a314344ea955b79178ef30a13d457e86981d\": not found" Jan 29 12:16:17.824170 kubelet[1868]: I0129 12:16:17.824132 1868 scope.go:117] "RemoveContainer" containerID="dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4" Jan 29 12:16:17.824365 containerd[1522]: time="2025-01-29T12:16:17.824339015Z" level=error msg="ContainerStatus for \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\": not found" Jan 29 12:16:17.824499 kubelet[1868]: E0129 12:16:17.824476 1868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\": not found" containerID="dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4" Jan 29 12:16:17.824532 kubelet[1868]: I0129 12:16:17.824507 1868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4"} err="failed to get container status \"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"dcfd1e781536e9c48d6a760136fc011b845a8be0bcb3794b08f276ec7bf648b4\": not found" Jan 29 12:16:17.824532 kubelet[1868]: I0129 12:16:17.824524 1868 scope.go:117] "RemoveContainer" containerID="3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d" Jan 29 12:16:17.824727 containerd[1522]: time="2025-01-29T12:16:17.824696331Z" level=error msg="ContainerStatus for \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\": not found" Jan 29 12:16:17.824816 kubelet[1868]: E0129 12:16:17.824795 1868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\": not found" containerID="3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d" Jan 29 12:16:17.824853 kubelet[1868]: I0129 12:16:17.824821 1868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d"} err="failed to get container status \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\": rpc error: code = NotFound desc = an error occurred when try to find container \"3e0635579d758bb522025688c561802470c028205a3c2c24524fca6594d8434d\": not found" Jan 29 12:16:17.824853 kubelet[1868]: I0129 12:16:17.824840 1868 scope.go:117] "RemoveContainer" containerID="2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4" Jan 29 12:16:17.825012 containerd[1522]: time="2025-01-29T12:16:17.824986081Z" level=error msg="ContainerStatus for \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\": not found" Jan 29 12:16:17.825087 kubelet[1868]: E0129 12:16:17.825070 1868 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\": not found" containerID="2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4" Jan 29 12:16:17.825118 kubelet[1868]: I0129 12:16:17.825091 1868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4"} err="failed to get container status \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c7a581e8b83c28e84f9aeeba462abcda7bbb21b164e49ecc49d6fc3db9a70c4\": not found" Jan 29 12:16:17.825118 kubelet[1868]: I0129 12:16:17.825106 1868 scope.go:117] "RemoveContainer" containerID="35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f" Jan 29 12:16:17.825248 containerd[1522]: time="2025-01-29T12:16:17.825222425Z" level=error msg="ContainerStatus for \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\": not found" Jan 29 12:16:17.825342 kubelet[1868]: E0129 12:16:17.825324 1868 remote_runtime.go:432] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\": not found" containerID="35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f" Jan 29 12:16:17.825404 kubelet[1868]: I0129 12:16:17.825347 1868 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f"} err="failed to get container status \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\": rpc error: code = NotFound desc = an error occurred when try to find container \"35cac2e7f6e8dde8bdc2c4d7441e4997a6998c3c529e3d83f320aca80b9ae37f\": not found" Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908783 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-hostproc\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908829 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-lib-modules\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908847 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-xtables-lock\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908872 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-config-path\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908870 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-hostproc" (OuterVolumeSpecName: "hostproc") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.910561 kubelet[1868]: I0129 12:16:17.908894 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.908894 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f65d93c-2baa-4d12-8c86-c580b8263671-clustermesh-secrets\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.908943 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.908967 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-run\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.909001 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-bpf-maps\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.909028 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-kernel\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910755 kubelet[1868]: I0129 12:16:17.909066 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-hubble-tls\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909091 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-etc-cni-netd\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909107 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-cgroup\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909122 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cni-path\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909136 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-net\") pod 
\"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909152 1868 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc94j\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-kube-api-access-jc94j\") pod \"2f65d93c-2baa-4d12-8c86-c580b8263671\" (UID: \"2f65d93c-2baa-4d12-8c86-c580b8263671\") " Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909180 1868 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-hostproc\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:17.910882 kubelet[1868]: I0129 12:16:17.909191 1868 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-lib-modules\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:17.911020 kubelet[1868]: I0129 12:16:17.909199 1868 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-xtables-lock\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:17.911229 kubelet[1868]: I0129 12:16:17.911192 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f65d93c-2baa-4d12-8c86-c580b8263671-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:16:17.912884 kubelet[1868]: I0129 12:16:17.911259 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.912884 kubelet[1868]: I0129 12:16:17.911278 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.912884 kubelet[1868]: I0129 12:16:17.911293 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cni-path" (OuterVolumeSpecName: "cni-path") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.912884 kubelet[1868]: I0129 12:16:17.911307 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.912884 kubelet[1868]: I0129 12:16:17.911320 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.912364 systemd[1]: var-lib-kubelet-pods-2f65d93c\x2d2baa\x2d4d12\x2d8c86\x2dc580b8263671-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djc94j.mount: Deactivated successfully. Jan 29 12:16:17.913086 kubelet[1868]: I0129 12:16:17.911334 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.913086 kubelet[1868]: I0129 12:16:17.911354 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:16:17.913086 kubelet[1868]: I0129 12:16:17.912355 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:16:17.912540 systemd[1]: var-lib-kubelet-pods-2f65d93c\x2d2baa\x2d4d12\x2d8c86\x2dc580b8263671-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 12:16:17.912642 systemd[1]: var-lib-kubelet-pods-2f65d93c\x2d2baa\x2d4d12\x2d8c86\x2dc580b8263671-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 12:16:17.914047 kubelet[1868]: I0129 12:16:17.914008 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:17.914165 kubelet[1868]: I0129 12:16:17.914100 1868 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-kube-api-access-jc94j" (OuterVolumeSpecName: "kube-api-access-jc94j") pod "2f65d93c-2baa-4d12-8c86-c580b8263671" (UID: "2f65d93c-2baa-4d12-8c86-c580b8263671"). InnerVolumeSpecName "kube-api-access-jc94j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:16:18.010266 kubelet[1868]: I0129 12:16:18.010223 1868 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-run\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010266 kubelet[1868]: I0129 12:16:18.010256 1868 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2f65d93c-2baa-4d12-8c86-c580b8263671-clustermesh-secrets\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010266 kubelet[1868]: I0129 12:16:18.010269 1868 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-kernel\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010278 1868 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-hubble-tls\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010286 1868 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-bpf-maps\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010294 1868 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-cgroup\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010301 1868 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-cni-path\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010308 1868 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-host-proc-sys-net\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010315 1868 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-jc94j\" (UniqueName: \"kubernetes.io/projected/2f65d93c-2baa-4d12-8c86-c580b8263671-kube-api-access-jc94j\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010323 1868 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f65d93c-2baa-4d12-8c86-c580b8263671-etc-cni-netd\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.010428 kubelet[1868]: I0129 12:16:18.010330 1868 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2f65d93c-2baa-4d12-8c86-c580b8263671-cilium-config-path\") on node \"10.0.0.141\" DevicePath \"\"" Jan 29 12:16:18.580414 kubelet[1868]: E0129 12:16:18.580329 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:19.581339 kubelet[1868]: E0129 12:16:19.581297 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:19.682965 kubelet[1868]: I0129 12:16:19.682921 1868 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" path="/var/lib/kubelet/pods/2f65d93c-2baa-4d12-8c86-c580b8263671/volumes" Jan 29 12:16:20.447188 kubelet[1868]: I0129 12:16:20.447141 1868 topology_manager.go:215] "Topology Admit Handler" podUID="47e6217c-1ba7-4d71-acef-f480ea6b0ca8" podNamespace="kube-system" podName="cilium-operator-599987898-f8dph" Jan 29 12:16:20.447188 kubelet[1868]: E0129 12:16:20.447195 1868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="mount-cgroup" Jan 29 12:16:20.448369 kubelet[1868]: E0129 12:16:20.447208 1868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="clean-cilium-state" Jan 29 12:16:20.448369 kubelet[1868]: E0129 12:16:20.447416 1868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="apply-sysctl-overwrites" Jan 29 12:16:20.448369 kubelet[1868]: E0129 12:16:20.447424 1868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="mount-bpf-fs" Jan 29 12:16:20.448369 kubelet[1868]: E0129 12:16:20.447430 1868 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="cilium-agent" Jan 29 12:16:20.448369 kubelet[1868]: I0129 12:16:20.447472 1868 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f65d93c-2baa-4d12-8c86-c580b8263671" containerName="cilium-agent" Jan 29 12:16:20.450638 kubelet[1868]: I0129 12:16:20.450591 1868 topology_manager.go:215] "Topology Admit Handler" podUID="fe6f5e6f-824e-4554-9f07-4f16d49d6555" podNamespace="kube-system" podName="cilium-dwpvc" Jan 29 12:16:20.582110 kubelet[1868]: E0129 12:16:20.582055 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:20.623586 kubelet[1868]: I0129 12:16:20.623516 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-cilium-run\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623586 kubelet[1868]: I0129 12:16:20.623562 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-etc-cni-netd\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623586 kubelet[1868]: I0129 12:16:20.623585 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fe6f5e6f-824e-4554-9f07-4f16d49d6555-clustermesh-secrets\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623752 kubelet[1868]: I0129 12:16:20.623604 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-host-proc-sys-kernel\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623752 kubelet[1868]: I0129 12:16:20.623622 1868 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqw7d\" (UniqueName: \"kubernetes.io/projected/47e6217c-1ba7-4d71-acef-f480ea6b0ca8-kube-api-access-mqw7d\") pod \"cilium-operator-599987898-f8dph\" (UID: \"47e6217c-1ba7-4d71-acef-f480ea6b0ca8\") " pod="kube-system/cilium-operator-599987898-f8dph" Jan 29 12:16:20.623752 kubelet[1868]: I0129 12:16:20.623644 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-xtables-lock\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623752 kubelet[1868]: I0129 12:16:20.623659 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-host-proc-sys-net\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623752 kubelet[1868]: I0129 12:16:20.623715 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/47e6217c-1ba7-4d71-acef-f480ea6b0ca8-cilium-config-path\") pod \"cilium-operator-599987898-f8dph\" (UID: \"47e6217c-1ba7-4d71-acef-f480ea6b0ca8\") " pod="kube-system/cilium-operator-599987898-f8dph" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623730 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fe6f5e6f-824e-4554-9f07-4f16d49d6555-hubble-tls\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623745 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-cilium-cgroup\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623760 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-cni-path\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623786 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-lib-modules\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623803 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fe6f5e6f-824e-4554-9f07-4f16d49d6555-cilium-ipsec-secrets\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623871 kubelet[1868]: I0129 12:16:20.623821 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-bpf-maps\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623986 kubelet[1868]: I0129 12:16:20.623836 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fe6f5e6f-824e-4554-9f07-4f16d49d6555-hostproc\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623986 kubelet[1868]: I0129 12:16:20.623853 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fe6f5e6f-824e-4554-9f07-4f16d49d6555-cilium-config-path\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.623986 kubelet[1868]: I0129 12:16:20.623868 1868 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nn4q\" (UniqueName: \"kubernetes.io/projected/fe6f5e6f-824e-4554-9f07-4f16d49d6555-kube-api-access-7nn4q\") pod \"cilium-dwpvc\" (UID: \"fe6f5e6f-824e-4554-9f07-4f16d49d6555\") " pod="kube-system/cilium-dwpvc" Jan 29 12:16:20.750839 kubelet[1868]: E0129 12:16:20.750654 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:20.751473 containerd[1522]: time="2025-01-29T12:16:20.751147255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f8dph,Uid:47e6217c-1ba7-4d71-acef-f480ea6b0ca8,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:20.754579 kubelet[1868]: E0129 12:16:20.754429 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:20.755207 containerd[1522]: time="2025-01-29T12:16:20.755173339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dwpvc,Uid:fe6f5e6f-824e-4554-9f07-4f16d49d6555,Namespace:kube-system,Attempt:0,}" Jan 29 12:16:20.771412 containerd[1522]: time="2025-01-29T12:16:20.770873637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:20.771412 containerd[1522]: time="2025-01-29T12:16:20.771390764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:20.771412 containerd[1522]: time="2025-01-29T12:16:20.771405245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:20.772155 containerd[1522]: time="2025-01-29T12:16:20.771502334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:20.776619 containerd[1522]: time="2025-01-29T12:16:20.776523027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:16:20.776619 containerd[1522]: time="2025-01-29T12:16:20.776569232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:16:20.776619 containerd[1522]: time="2025-01-29T12:16:20.776579953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:20.776914 containerd[1522]: time="2025-01-29T12:16:20.776656639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:16:20.804444 containerd[1522]: time="2025-01-29T12:16:20.804289456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dwpvc,Uid:fe6f5e6f-824e-4554-9f07-4f16d49d6555,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\"" Jan 29 12:16:20.806183 kubelet[1868]: E0129 12:16:20.805586 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:20.808236 containerd[1522]: time="2025-01-29T12:16:20.808196528Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 12:16:20.820269 containerd[1522]: time="2025-01-29T12:16:20.820222135Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"840936676690be579d7230ab8ce273fef53dd2dafc8ad489d3a08396fa1c2035\"" Jan 29 12:16:20.821389 containerd[1522]: time="2025-01-29T12:16:20.821350157Z" level=info msg="StartContainer for \"840936676690be579d7230ab8ce273fef53dd2dafc8ad489d3a08396fa1c2035\"" Jan 29 12:16:20.830096 containerd[1522]: time="2025-01-29T12:16:20.830058463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-f8dph,Uid:47e6217c-1ba7-4d71-acef-f480ea6b0ca8,Namespace:kube-system,Attempt:0,} returns sandbox id \"29eb26f1c588bb93fd144ad26503569c14aea68a259e613b782086b2ea277108\"" Jan 29 12:16:20.832076 kubelet[1868]: E0129 12:16:20.832053 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:20.836949 containerd[1522]: time="2025-01-29T12:16:20.836904322Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 12:16:20.881998 containerd[1522]: time="2025-01-29T12:16:20.881957591Z" level=info msg="StartContainer for \"840936676690be579d7230ab8ce273fef53dd2dafc8ad489d3a08396fa1c2035\" returns successfully" Jan 29 12:16:20.947227 containerd[1522]: time="2025-01-29T12:16:20.947170042Z" level=info msg="shim disconnected" id=840936676690be579d7230ab8ce273fef53dd2dafc8ad489d3a08396fa1c2035 namespace=k8s.io Jan 29 12:16:20.947227 containerd[1522]: time="2025-01-29T12:16:20.947220446Z" level=warning msg="cleaning up after shim disconnected" id=840936676690be579d7230ab8ce273fef53dd2dafc8ad489d3a08396fa1c2035 namespace=k8s.io Jan 29 12:16:20.947227 containerd[1522]: time="2025-01-29T12:16:20.947228767Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:21.582787 kubelet[1868]: E0129 12:16:21.582745 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 
12:16:21.672545 kubelet[1868]: E0129 12:16:21.672467 1868 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 12:16:21.804750 kubelet[1868]: E0129 12:16:21.804705 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:21.807508 containerd[1522]: time="2025-01-29T12:16:21.807230458Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 12:16:21.819065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount929413040.mount: Deactivated successfully. Jan 29 12:16:21.820851 containerd[1522]: time="2025-01-29T12:16:21.820805157Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984\"" Jan 29 12:16:21.821569 containerd[1522]: time="2025-01-29T12:16:21.821486816Z" level=info msg="StartContainer for \"44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984\"" Jan 29 12:16:21.868215 containerd[1522]: time="2025-01-29T12:16:21.868125188Z" level=info msg="StartContainer for \"44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984\" returns successfully" Jan 29 12:16:21.938618 containerd[1522]: time="2025-01-29T12:16:21.938462257Z" level=info msg="shim disconnected" id=44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984 namespace=k8s.io Jan 29 12:16:21.938618 containerd[1522]: time="2025-01-29T12:16:21.938518102Z" level=warning msg="cleaning up after shim disconnected" id=44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984 namespace=k8s.io Jan 29 12:16:21.938618 containerd[1522]: time="2025-01-29T12:16:21.938528463Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:22.037964 containerd[1522]: time="2025-01-29T12:16:22.037916897Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:22.038408 containerd[1522]: time="2025-01-29T12:16:22.038366775Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 12:16:22.039129 containerd[1522]: time="2025-01-29T12:16:22.039104996Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:16:22.040580 containerd[1522]: time="2025-01-29T12:16:22.040505713Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.203544147s" Jan 29 12:16:22.040580 containerd[1522]: time="2025-01-29T12:16:22.040540276Z" level=info 
msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 12:16:22.043007 containerd[1522]: time="2025-01-29T12:16:22.042975960Z" level=info msg="CreateContainer within sandbox \"29eb26f1c588bb93fd144ad26503569c14aea68a259e613b782086b2ea277108\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 12:16:22.049841 containerd[1522]: time="2025-01-29T12:16:22.049787130Z" level=info msg="CreateContainer within sandbox \"29eb26f1c588bb93fd144ad26503569c14aea68a259e613b782086b2ea277108\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8e9a25c568c8e175717b6f43cc64be621ae3160e6f3fd6baaa315e63320d7f93\"" Jan 29 12:16:22.050486 containerd[1522]: time="2025-01-29T12:16:22.050455105Z" level=info msg="StartContainer for \"8e9a25c568c8e175717b6f43cc64be621ae3160e6f3fd6baaa315e63320d7f93\"" Jan 29 12:16:22.098077 containerd[1522]: time="2025-01-29T12:16:22.098001561Z" level=info msg="StartContainer for \"8e9a25c568c8e175717b6f43cc64be621ae3160e6f3fd6baaa315e63320d7f93\" returns successfully" Jan 29 12:16:22.457437 kubelet[1868]: I0129 12:16:22.457388 1868 setters.go:580] "Node became not ready" node="10.0.0.141" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T12:16:22Z","lastTransitionTime":"2025-01-29T12:16:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 12:16:22.583423 kubelet[1868]: E0129 12:16:22.583368 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:22.730841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44516b341b36601913aa84277bb3f4a52d982a65e43f9e3f106ed4ebf143c984-rootfs.mount: Deactivated successfully. Jan 29 12:16:22.810974 kubelet[1868]: E0129 12:16:22.810709 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:22.812220 kubelet[1868]: E0129 12:16:22.812181 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:22.812456 containerd[1522]: time="2025-01-29T12:16:22.812419773Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 12:16:22.823712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158295456.mount: Deactivated successfully. 
Jan 29 12:16:22.826244 containerd[1522]: time="2025-01-29T12:16:22.826193844Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411\"" Jan 29 12:16:22.826995 containerd[1522]: time="2025-01-29T12:16:22.826771293Z" level=info msg="StartContainer for \"de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411\"" Jan 29 12:16:22.833983 kubelet[1868]: I0129 12:16:22.833904 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-f8dph" podStartSLOduration=1.6250313090000001 podStartE2EDuration="2.833888248s" podCreationTimestamp="2025-01-29 12:16:20 +0000 UTC" firstStartedPulling="2025-01-29 12:16:20.832686621 +0000 UTC m=+50.238673219" lastFinishedPulling="2025-01-29 12:16:22.04154356 +0000 UTC m=+51.447530158" observedRunningTime="2025-01-29 12:16:22.833829923 +0000 UTC m=+52.239816521" watchObservedRunningTime="2025-01-29 12:16:22.833888248 +0000 UTC m=+52.239874846" Jan 29 12:16:22.878364 containerd[1522]: time="2025-01-29T12:16:22.878315562Z" level=info msg="StartContainer for \"de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411\" returns successfully" Jan 29 12:16:22.926060 containerd[1522]: time="2025-01-29T12:16:22.925996869Z" level=info msg="shim disconnected" id=de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411 namespace=k8s.io Jan 29 12:16:22.926060 containerd[1522]: time="2025-01-29T12:16:22.926052994Z" level=warning msg="cleaning up after shim disconnected" id=de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411 namespace=k8s.io Jan 29 12:16:22.926060 containerd[1522]: time="2025-01-29T12:16:22.926061274Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:23.583927 kubelet[1868]: E0129 12:16:23.583873 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:23.730129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de7624a2bff9285ae63277ef38504dd010dc67df5dffee0a124fef7c85b13411-rootfs.mount: Deactivated successfully. Jan 29 12:16:23.816886 kubelet[1868]: E0129 12:16:23.816004 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:23.816886 kubelet[1868]: E0129 12:16:23.816612 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:23.818488 containerd[1522]: time="2025-01-29T12:16:23.818451118Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 12:16:23.828538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344051488.mount: Deactivated successfully. 
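
The pod_startup_latency_tracker entry just above reports two figures for cilium-operator-599987898-f8dph: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). That decomposition is inferred from the numbers in the entry itself; plugging the four timestamps back in reproduces both printed values exactly. A small Go check, with the timestamps copied from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-01-29 12:16:20 +0000 UTC")           // podCreationTimestamp
        running := parse("2025-01-29 12:16:22.833888248 +0000 UTC") // observedRunningTime
        pullStart := parse("2025-01-29 12:16:20.832686621 +0000 UTC")
        pullEnd := parse("2025-01-29 12:16:22.04154356 +0000 UTC")

        e2e := running.Sub(created)         // podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: E2E minus image-pull time

        fmt.Println(e2e) // 2.833888248s
        fmt.Println(slo) // 1.625031309s
    }
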
Jan 29 12:16:23.829304 containerd[1522]: time="2025-01-29T12:16:23.829074574Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53\"" Jan 29 12:16:23.832886 containerd[1522]: time="2025-01-29T12:16:23.832848198Z" level=info msg="StartContainer for \"f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53\"" Jan 29 12:16:23.874034 containerd[1522]: time="2025-01-29T12:16:23.873307818Z" level=info msg="StartContainer for \"f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53\" returns successfully" Jan 29 12:16:23.889678 containerd[1522]: time="2025-01-29T12:16:23.889551806Z" level=info msg="shim disconnected" id=f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53 namespace=k8s.io Jan 29 12:16:23.889678 containerd[1522]: time="2025-01-29T12:16:23.889673136Z" level=warning msg="cleaning up after shim disconnected" id=f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53 namespace=k8s.io Jan 29 12:16:23.889678 containerd[1522]: time="2025-01-29T12:16:23.889683297Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:16:24.584687 kubelet[1868]: E0129 12:16:24.584630 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:24.730154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8ed04f6f31d5a0bb417676a1b774c91c4310fba56802a8fdfa3c4644a2a9b53-rootfs.mount: Deactivated successfully. Jan 29 12:16:24.821154 kubelet[1868]: E0129 12:16:24.821021 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:24.823536 containerd[1522]: time="2025-01-29T12:16:24.823460582Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 12:16:24.840947 containerd[1522]: time="2025-01-29T12:16:24.840809610Z" level=info msg="CreateContainer within sandbox \"7ffbd0ae5109ef0550462a0f6efe29dac765d7e0c77f7300cd542b0d9ed9a028\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89d9f54c8f6940ef26bb73d9b139b898613a424130e0c864c40db39f40aa4f40\"" Jan 29 12:16:24.841733 containerd[1522]: time="2025-01-29T12:16:24.841645595Z" level=info msg="StartContainer for \"89d9f54c8f6940ef26bb73d9b139b898613a424130e0c864c40db39f40aa4f40\"" Jan 29 12:16:24.884941 containerd[1522]: time="2025-01-29T12:16:24.884828630Z" level=info msg="StartContainer for \"89d9f54c8f6940ef26bb73d9b139b898613a424130e0c864c40db39f40aa4f40\" returns successfully" Jan 29 12:16:25.134569 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 29 12:16:25.584811 kubelet[1868]: E0129 12:16:25.584759 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:25.826441 kubelet[1868]: E0129 12:16:25.826086 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:26.585868 kubelet[1868]: E0129 12:16:26.585803 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 29 12:16:26.828343 kubelet[1868]: E0129 12:16:26.828197 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:27.586631 kubelet[1868]: E0129 12:16:27.586585 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:27.909297 systemd-networkd[1231]: lxc_health: Link UP Jan 29 12:16:27.919933 systemd-networkd[1231]: lxc_health: Gained carrier Jan 29 12:16:28.587229 kubelet[1868]: E0129 12:16:28.587170 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:28.760110 kubelet[1868]: E0129 12:16:28.759648 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:28.775966 kubelet[1868]: I0129 12:16:28.775896 1868 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-dwpvc" podStartSLOduration=8.775880843 podStartE2EDuration="8.775880843s" podCreationTimestamp="2025-01-29 12:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:16:25.842372461 +0000 UTC m=+55.248359059" watchObservedRunningTime="2025-01-29 12:16:28.775880843 +0000 UTC m=+58.181867441" Jan 29 12:16:28.831081 kubelet[1868]: E0129 12:16:28.831033 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:29.351696 systemd-networkd[1231]: lxc_health: Gained IPv6LL Jan 29 12:16:29.588368 kubelet[1868]: E0129 12:16:29.588292 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:29.833268 kubelet[1868]: E0129 12:16:29.832986 1868 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:16:30.588690 kubelet[1868]: E0129 12:16:30.588644 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:31.547619 kubelet[1868]: E0129 12:16:31.547550 1868 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:31.586934 containerd[1522]: time="2025-01-29T12:16:31.586672494Z" level=info msg="StopPodSandbox for \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\"" Jan 29 12:16:31.586934 containerd[1522]: time="2025-01-29T12:16:31.586756219Z" level=info msg="TearDown network for sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" successfully" Jan 29 12:16:31.586934 containerd[1522]: time="2025-01-29T12:16:31.586768060Z" level=info msg="StopPodSandbox for \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" returns successfully" Jan 29 12:16:31.587608 containerd[1522]: time="2025-01-29T12:16:31.587311134Z" level=info msg="RemovePodSandbox for \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\"" Jan 29 12:16:31.588845 kubelet[1868]: E0129 12:16:31.588813 1868 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:31.591397 containerd[1522]: time="2025-01-29T12:16:31.591353185Z" level=info msg="Forcibly stopping sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\"" Jan 29 12:16:31.591454 containerd[1522]: time="2025-01-29T12:16:31.591436670Z" level=info msg="TearDown network for sandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" successfully" Jan 29 12:16:31.600755 containerd[1522]: time="2025-01-29T12:16:31.600718567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:16:31.600826 containerd[1522]: time="2025-01-29T12:16:31.600780090Z" level=info msg="RemovePodSandbox \"850594ff624b6519b5f167af06eb4f9eeca8180e5cad3d3de4d52cd1b15ef0d3\" returns successfully" Jan 29 12:16:32.589834 kubelet[1868]: E0129 12:16:32.589784 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:33.590058 kubelet[1868]: E0129 12:16:33.590007 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 12:16:34.590291 kubelet[1868]: E0129 12:16:34.590240 1868 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"