Jan 30 12:47:31.895052 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 30 12:47:31.895073 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025 Jan 30 12:47:31.895084 kernel: KASLR enabled Jan 30 12:47:31.895089 kernel: efi: EFI v2.7 by EDK II Jan 30 12:47:31.895095 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Jan 30 12:47:31.895101 kernel: random: crng init done Jan 30 12:47:31.895109 kernel: ACPI: Early table checksum verification disabled Jan 30 12:47:31.895115 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Jan 30 12:47:31.895122 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Jan 30 12:47:31.895129 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895136 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895142 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895148 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895155 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895163 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895171 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895177 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895184 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 12:47:31.895191 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jan 30 12:47:31.895197 kernel: NUMA: Failed to initialise from firmware Jan 30 12:47:31.895204 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:47:31.895211 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Jan 30 12:47:31.895217 kernel: Zone ranges: Jan 30 12:47:31.895224 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:47:31.895230 kernel: DMA32 empty Jan 30 12:47:31.895239 kernel: Normal empty Jan 30 12:47:31.895245 kernel: Movable zone start for each node Jan 30 12:47:31.895252 kernel: Early memory node ranges Jan 30 12:47:31.895259 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Jan 30 12:47:31.895265 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Jan 30 12:47:31.895272 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Jan 30 12:47:31.895278 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jan 30 12:47:31.895285 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jan 30 12:47:31.895292 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jan 30 12:47:31.895298 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jan 30 12:47:31.895305 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jan 30 12:47:31.895312 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jan 30 12:47:31.895320 kernel: psci: probing for conduit method from ACPI. Jan 30 12:47:31.895326 kernel: psci: PSCIv1.1 detected in firmware. 
Jan 30 12:47:31.895333 kernel: psci: Using standard PSCI v0.2 function IDs Jan 30 12:47:31.895343 kernel: psci: Trusted OS migration not required Jan 30 12:47:31.895350 kernel: psci: SMC Calling Convention v1.1 Jan 30 12:47:31.895357 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 30 12:47:31.895366 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Jan 30 12:47:31.895373 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Jan 30 12:47:31.895380 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 30 12:47:31.895387 kernel: Detected PIPT I-cache on CPU0 Jan 30 12:47:31.895394 kernel: CPU features: detected: GIC system register CPU interface Jan 30 12:47:31.895401 kernel: CPU features: detected: Hardware dirty bit management Jan 30 12:47:31.895408 kernel: CPU features: detected: Spectre-v4 Jan 30 12:47:31.895415 kernel: CPU features: detected: Spectre-BHB Jan 30 12:47:31.895422 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 30 12:47:31.895429 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 30 12:47:31.895438 kernel: CPU features: detected: ARM erratum 1418040 Jan 30 12:47:31.895445 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 30 12:47:31.895452 kernel: alternatives: applying boot alternatives Jan 30 12:47:31.895460 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:47:31.895468 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 12:47:31.895475 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 12:47:31.895482 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 12:47:31.895489 kernel: Fallback order for Node 0: 0 Jan 30 12:47:31.895496 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jan 30 12:47:31.895503 kernel: Policy zone: DMA Jan 30 12:47:31.895510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 12:47:31.895518 kernel: software IO TLB: area num 4. Jan 30 12:47:31.895525 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Jan 30 12:47:31.895533 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Jan 30 12:47:31.895540 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 12:47:31.895548 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 12:47:31.895555 kernel: rcu: RCU event tracing is enabled. Jan 30 12:47:31.895562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 12:47:31.895570 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 12:47:31.895577 kernel: Tracing variant of Tasks RCU enabled. Jan 30 12:47:31.895584 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 30 12:47:31.895591 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 12:47:31.895598 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 30 12:47:31.895607 kernel: GICv3: 256 SPIs implemented Jan 30 12:47:31.895614 kernel: GICv3: 0 Extended SPIs implemented Jan 30 12:47:31.895621 kernel: Root IRQ handler: gic_handle_irq Jan 30 12:47:31.895628 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 30 12:47:31.895635 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 30 12:47:31.895642 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 30 12:47:31.895649 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Jan 30 12:47:31.895657 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Jan 30 12:47:31.895664 kernel: GICv3: using LPI property table @0x00000000400f0000 Jan 30 12:47:31.895671 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Jan 30 12:47:31.895678 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 12:47:31.895686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:47:31.895694 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 30 12:47:31.895701 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 30 12:47:31.895708 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 30 12:47:31.895715 kernel: arm-pv: using stolen time PV Jan 30 12:47:31.895723 kernel: Console: colour dummy device 80x25 Jan 30 12:47:31.895730 kernel: ACPI: Core revision 20230628 Jan 30 12:47:31.895747 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 30 12:47:31.895755 kernel: pid_max: default: 32768 minimum: 301 Jan 30 12:47:31.895763 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 12:47:31.895772 kernel: landlock: Up and running. Jan 30 12:47:31.895779 kernel: SELinux: Initializing. Jan 30 12:47:31.895786 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:47:31.895794 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 12:47:31.895801 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:47:31.895808 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 12:47:31.895815 kernel: rcu: Hierarchical SRCU implementation. Jan 30 12:47:31.895823 kernel: rcu: Max phase no-delay instances is 400. Jan 30 12:47:31.895830 kernel: Platform MSI: ITS@0x8080000 domain created Jan 30 12:47:31.895839 kernel: PCI/MSI: ITS@0x8080000 domain created Jan 30 12:47:31.895846 kernel: Remapping and enabling EFI services. Jan 30 12:47:31.895854 kernel: smp: Bringing up secondary CPUs ... 
Jan 30 12:47:31.895861 kernel: Detected PIPT I-cache on CPU1 Jan 30 12:47:31.895868 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 30 12:47:31.895875 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Jan 30 12:47:31.895883 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:47:31.895891 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 30 12:47:31.895898 kernel: Detected PIPT I-cache on CPU2 Jan 30 12:47:31.895905 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 30 12:47:31.895914 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Jan 30 12:47:31.895922 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:47:31.895934 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 30 12:47:31.895943 kernel: Detected PIPT I-cache on CPU3 Jan 30 12:47:31.895951 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 30 12:47:31.895959 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Jan 30 12:47:31.895967 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 30 12:47:31.895974 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 30 12:47:31.895982 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 12:47:31.895991 kernel: SMP: Total of 4 processors activated. Jan 30 12:47:31.895999 kernel: CPU features: detected: 32-bit EL0 Support Jan 30 12:47:31.896006 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 30 12:47:31.896020 kernel: CPU features: detected: Common not Private translations Jan 30 12:47:31.896028 kernel: CPU features: detected: CRC32 instructions Jan 30 12:47:31.896036 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 30 12:47:31.896044 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 30 12:47:31.896051 kernel: CPU features: detected: LSE atomic instructions Jan 30 12:47:31.896061 kernel: CPU features: detected: Privileged Access Never Jan 30 12:47:31.896069 kernel: CPU features: detected: RAS Extension Support Jan 30 12:47:31.896076 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 30 12:47:31.896084 kernel: CPU: All CPU(s) started at EL1 Jan 30 12:47:31.896091 kernel: alternatives: applying system-wide alternatives Jan 30 12:47:31.896099 kernel: devtmpfs: initialized Jan 30 12:47:31.896107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 12:47:31.896114 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 12:47:31.896122 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 12:47:31.896131 kernel: SMBIOS 3.0.0 present. 
Jan 30 12:47:31.896139 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Jan 30 12:47:31.896147 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 12:47:31.896154 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 30 12:47:31.896162 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 30 12:47:31.896170 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 30 12:47:31.896178 kernel: audit: initializing netlink subsys (disabled) Jan 30 12:47:31.896185 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1 Jan 30 12:47:31.896200 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 12:47:31.896216 kernel: cpuidle: using governor menu Jan 30 12:47:31.896223 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 30 12:47:31.896232 kernel: ASID allocator initialised with 32768 entries Jan 30 12:47:31.896239 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 12:47:31.896247 kernel: Serial: AMBA PL011 UART driver Jan 30 12:47:31.896254 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 30 12:47:31.896262 kernel: Modules: 0 pages in range for non-PLT usage Jan 30 12:47:31.896270 kernel: Modules: 509040 pages in range for PLT usage Jan 30 12:47:31.896278 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 12:47:31.896287 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 12:47:31.896294 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 30 12:47:31.896302 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 30 12:47:31.896310 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 12:47:31.896317 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 12:47:31.896325 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 30 12:47:31.896333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 30 12:47:31.896340 kernel: ACPI: Added _OSI(Module Device) Jan 30 12:47:31.896348 kernel: ACPI: Added _OSI(Processor Device) Jan 30 12:47:31.896357 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 12:47:31.896364 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 12:47:31.896372 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 12:47:31.896379 kernel: ACPI: Interpreter enabled Jan 30 12:47:31.896386 kernel: ACPI: Using GIC for interrupt routing Jan 30 12:47:31.896394 kernel: ACPI: MCFG table detected, 1 entries Jan 30 12:47:31.896401 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 30 12:47:31.896409 kernel: printk: console [ttyAMA0] enabled Jan 30 12:47:31.896416 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 12:47:31.896555 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 12:47:31.896629 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 30 12:47:31.896696 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 30 12:47:31.896774 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 30 12:47:31.896841 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 30 12:47:31.896852 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 30 
12:47:31.896860 kernel: PCI host bridge to bus 0000:00 Jan 30 12:47:31.896938 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 30 12:47:31.897002 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 30 12:47:31.897079 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 30 12:47:31.897141 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 12:47:31.897233 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jan 30 12:47:31.897312 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 12:47:31.897386 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jan 30 12:47:31.897453 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jan 30 12:47:31.897520 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:47:31.897586 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jan 30 12:47:31.897654 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jan 30 12:47:31.897721 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jan 30 12:47:31.897871 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 30 12:47:31.897936 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 30 12:47:31.897995 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 30 12:47:31.898005 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 30 12:47:31.898020 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 30 12:47:31.898028 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 30 12:47:31.898036 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 30 12:47:31.898044 kernel: iommu: Default domain type: Translated Jan 30 12:47:31.898051 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 30 12:47:31.898061 kernel: efivars: Registered efivars operations Jan 30 12:47:31.898069 kernel: vgaarb: loaded Jan 30 12:47:31.898077 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 30 12:47:31.898084 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 12:47:31.898092 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 12:47:31.898100 kernel: pnp: PnP ACPI init Jan 30 12:47:31.898179 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 30 12:47:31.898190 kernel: pnp: PnP ACPI: found 1 devices Jan 30 12:47:31.898198 kernel: NET: Registered PF_INET protocol family Jan 30 12:47:31.898208 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 12:47:31.898217 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 12:47:31.898225 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 12:47:31.898233 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 12:47:31.898241 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 12:47:31.898249 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 12:47:31.898257 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:47:31.898265 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 12:47:31.898273 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 12:47:31.898283 kernel: PCI: CLS 0 bytes, default 64 Jan 30 12:47:31.898291 kernel: kvm [1]: HYP mode 
not available Jan 30 12:47:31.898298 kernel: Initialise system trusted keyrings Jan 30 12:47:31.898306 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 12:47:31.898314 kernel: Key type asymmetric registered Jan 30 12:47:31.898322 kernel: Asymmetric key parser 'x509' registered Jan 30 12:47:31.898334 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 30 12:47:31.898342 kernel: io scheduler mq-deadline registered Jan 30 12:47:31.898349 kernel: io scheduler kyber registered Jan 30 12:47:31.898359 kernel: io scheduler bfq registered Jan 30 12:47:31.898367 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 30 12:47:31.898374 kernel: ACPI: button: Power Button [PWRB] Jan 30 12:47:31.898382 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 30 12:47:31.898453 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jan 30 12:47:31.898464 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 12:47:31.898471 kernel: thunder_xcv, ver 1.0 Jan 30 12:47:31.898479 kernel: thunder_bgx, ver 1.0 Jan 30 12:47:31.898487 kernel: nicpf, ver 1.0 Jan 30 12:47:31.898496 kernel: nicvf, ver 1.0 Jan 30 12:47:31.898570 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 30 12:47:31.898634 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:47:31 UTC (1738241251) Jan 30 12:47:31.898645 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 12:47:31.898652 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jan 30 12:47:31.898660 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 30 12:47:31.898668 kernel: watchdog: Hard watchdog permanently disabled Jan 30 12:47:31.898676 kernel: NET: Registered PF_INET6 protocol family Jan 30 12:47:31.898686 kernel: Segment Routing with IPv6 Jan 30 12:47:31.898693 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 12:47:31.898701 kernel: NET: Registered PF_PACKET protocol family Jan 30 12:47:31.898708 kernel: Key type dns_resolver registered Jan 30 12:47:31.898716 kernel: registered taskstats version 1 Jan 30 12:47:31.898723 kernel: Loading compiled-in X.509 certificates Jan 30 12:47:31.898731 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415' Jan 30 12:47:31.898748 kernel: Key type .fscrypt registered Jan 30 12:47:31.898755 kernel: Key type fscrypt-provisioning registered Jan 30 12:47:31.898765 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 30 12:47:31.898773 kernel: ima: Allocated hash algorithm: sha1 Jan 30 12:47:31.898780 kernel: ima: No architecture policies found Jan 30 12:47:31.898788 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 30 12:47:31.898796 kernel: clk: Disabling unused clocks Jan 30 12:47:31.898804 kernel: Freeing unused kernel memory: 39360K Jan 30 12:47:31.898811 kernel: Run /init as init process Jan 30 12:47:31.898819 kernel: with arguments: Jan 30 12:47:31.898826 kernel: /init Jan 30 12:47:31.898835 kernel: with environment: Jan 30 12:47:31.898843 kernel: HOME=/ Jan 30 12:47:31.898850 kernel: TERM=linux Jan 30 12:47:31.898858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 12:47:31.898867 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:47:31.898877 systemd[1]: Detected virtualization kvm. Jan 30 12:47:31.898885 systemd[1]: Detected architecture arm64. Jan 30 12:47:31.898895 systemd[1]: Running in initrd. Jan 30 12:47:31.898903 systemd[1]: No hostname configured, using default hostname. Jan 30 12:47:31.898911 systemd[1]: Hostname set to . Jan 30 12:47:31.898920 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:47:31.898928 systemd[1]: Queued start job for default target initrd.target. Jan 30 12:47:31.898936 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:47:31.898944 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:47:31.898953 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 12:47:31.898963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:47:31.898974 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 12:47:31.898984 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 12:47:31.898994 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 12:47:31.899004 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 12:47:31.899018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:47:31.899030 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:47:31.899040 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:47:31.899049 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:47:31.899057 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:47:31.899066 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:47:31.899074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:47:31.899084 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:47:31.899093 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:47:31.899101 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:47:31.899110 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 30 12:47:31.899123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:47:31.899134 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:47:31.899143 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:47:31.899153 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 12:47:31.899165 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:47:31.899173 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 12:47:31.899181 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 12:47:31.899190 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:47:31.899199 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:47:31.899208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:47:31.899216 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 12:47:31.899224 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:47:31.899232 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 12:47:31.899242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:47:31.899269 systemd-journald[237]: Collecting audit messages is disabled. Jan 30 12:47:31.899290 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:47:31.899298 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:47:31.899309 systemd-journald[237]: Journal started Jan 30 12:47:31.899329 systemd-journald[237]: Runtime Journal (/run/log/journal/3e9447c54fe74d9d987b4db96bc2d5d3) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:47:31.899399 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 12:47:31.883365 systemd-modules-load[238]: Inserted module 'overlay' Jan 30 12:47:31.903828 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:47:31.903857 kernel: Bridge firewalling registered Jan 30 12:47:31.902058 systemd-modules-load[238]: Inserted module 'br_netfilter' Jan 30 12:47:31.907758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:47:31.907794 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:47:31.908673 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:47:31.912846 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:47:31.914999 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:47:31.917231 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:47:31.924110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:47:31.926295 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:47:31.927763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:47:31.931115 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 12:47:31.933324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 12:47:31.958219 dracut-cmdline[275]: dracut-dracut-053 Jan 30 12:47:31.960462 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c Jan 30 12:47:31.970101 systemd-resolved[276]: Positive Trust Anchors: Jan 30 12:47:31.970118 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:47:31.970150 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:47:31.975342 systemd-resolved[276]: Defaulting to hostname 'linux'. Jan 30 12:47:31.976631 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:47:31.977914 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:47:32.039771 kernel: SCSI subsystem initialized Jan 30 12:47:32.044767 kernel: Loading iSCSI transport class v2.0-870. Jan 30 12:47:32.052759 kernel: iscsi: registered transport (tcp) Jan 30 12:47:32.067767 kernel: iscsi: registered transport (qla4xxx) Jan 30 12:47:32.067819 kernel: QLogic iSCSI HBA Driver Jan 30 12:47:32.118169 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 12:47:32.131922 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 12:47:32.156192 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 12:47:32.156262 kernel: device-mapper: uevent: version 1.0.3 Jan 30 12:47:32.156277 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 12:47:32.210792 kernel: raid6: neonx8 gen() 15474 MB/s Jan 30 12:47:32.227920 kernel: raid6: neonx4 gen() 15624 MB/s Jan 30 12:47:32.244782 kernel: raid6: neonx2 gen() 13086 MB/s Jan 30 12:47:32.261776 kernel: raid6: neonx1 gen() 10437 MB/s Jan 30 12:47:32.278771 kernel: raid6: int64x8 gen() 6956 MB/s Jan 30 12:47:32.295755 kernel: raid6: int64x4 gen() 7340 MB/s Jan 30 12:47:32.312768 kernel: raid6: int64x2 gen() 6074 MB/s Jan 30 12:47:32.329778 kernel: raid6: int64x1 gen() 5037 MB/s Jan 30 12:47:32.329813 kernel: raid6: using algorithm neonx4 gen() 15624 MB/s Jan 30 12:47:32.346786 kernel: raid6: .... xor() 12251 MB/s, rmw enabled Jan 30 12:47:32.346844 kernel: raid6: using neon recovery algorithm Jan 30 12:47:32.351900 kernel: xor: measuring software checksum speed Jan 30 12:47:32.351931 kernel: 8regs : 19773 MB/sec Jan 30 12:47:32.353024 kernel: 32regs : 19200 MB/sec Jan 30 12:47:32.353042 kernel: arm64_neon : 26989 MB/sec Jan 30 12:47:32.353052 kernel: xor: using function: arm64_neon (26989 MB/sec) Jan 30 12:47:32.409783 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 12:47:32.424813 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 12:47:32.432940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:47:32.449173 systemd-udevd[460]: Using default interface naming scheme 'v255'. Jan 30 12:47:32.452465 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:47:32.462163 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 12:47:32.474546 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Jan 30 12:47:32.506678 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:47:32.518965 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:47:32.561036 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:47:32.569967 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:47:32.585082 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:47:32.587341 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:47:32.591514 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:47:32.593399 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:47:32.603774 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:47:32.615976 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:47:32.621295 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 30 12:47:32.632041 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 12:47:32.632157 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:47:32.632170 kernel: GPT:9289727 != 19775487 Jan 30 12:47:32.632183 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:47:32.632196 kernel: GPT:9289727 != 19775487 Jan 30 12:47:32.632211 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:47:32.632223 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:47:32.628841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:47:32.628912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:47:32.632042 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:47:32.632873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:47:32.632931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:47:32.635319 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:47:32.649125 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:47:32.654762 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (503) Jan 30 12:47:32.656757 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (514) Jan 30 12:47:32.664467 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:47:32.665661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:47:32.673693 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 30 12:47:32.677346 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:47:32.678312 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:47:32.684385 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:47:32.700954 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:47:32.702541 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:47:32.717206 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:47:32.736661 disk-uuid[549]: Primary Header is updated. Jan 30 12:47:32.736661 disk-uuid[549]: Secondary Entries is updated. Jan 30 12:47:32.736661 disk-uuid[549]: Secondary Header is updated. Jan 30 12:47:32.740762 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:47:33.755841 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:47:33.756174 disk-uuid[558]: The operation has completed successfully. Jan 30 12:47:33.791598 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:47:33.792108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:47:33.811959 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:47:33.814976 sh[572]: Success Jan 30 12:47:33.828771 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 12:47:33.861531 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:47:33.872170 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:47:33.875816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 12:47:33.884961 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 12:47:33.885013 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:47:33.885782 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:47:33.885799 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:47:33.886785 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:47:33.892838 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:47:33.893671 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:47:33.901954 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 12:47:33.903593 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 12:47:33.916001 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:47:33.916051 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:47:33.916062 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:47:33.919800 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:47:33.931785 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:47:33.932848 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:47:33.939543 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 30 12:47:33.944925 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:47:34.017075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:47:34.025937 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:47:34.050682 ignition[666]: Ignition 2.19.0 Jan 30 12:47:34.050695 ignition[666]: Stage: fetch-offline Jan 30 12:47:34.050753 ignition[666]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:34.050768 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:34.051004 ignition[666]: parsed url from cmdline: "" Jan 30 12:47:34.051008 ignition[666]: no config URL provided Jan 30 12:47:34.051012 ignition[666]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:47:34.051019 ignition[666]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:47:34.051045 ignition[666]: op(1): [started] loading QEMU firmware config module Jan 30 12:47:34.051055 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 12:47:34.059684 systemd-networkd[765]: lo: Link UP Jan 30 12:47:34.059695 systemd-networkd[765]: lo: Gained carrier Jan 30 12:47:34.060407 systemd-networkd[765]: Enumeration completed Jan 30 12:47:34.060552 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:47:34.061097 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:47:34.061101 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:47:34.061780 systemd[1]: Reached target network.target - Network. Jan 30 12:47:34.065560 systemd-networkd[765]: eth0: Link UP Jan 30 12:47:34.065566 systemd-networkd[765]: eth0: Gained carrier Jan 30 12:47:34.065575 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:47:34.072144 ignition[666]: op(1): [finished] loading QEMU firmware config module Jan 30 12:47:34.078236 ignition[666]: parsing config with SHA512: d564665216b3869c10947e5561e77802f8b78311554d3c0a21f63a6da57a403ce5ddc15a764fb6d69674b5c63a31d572db3711df7d82320810daf558f3e851e5 Jan 30 12:47:34.081445 unknown[666]: fetched base config from "system" Jan 30 12:47:34.081455 unknown[666]: fetched user config from "qemu" Jan 30 12:47:34.081760 ignition[666]: fetch-offline: fetch-offline passed Jan 30 12:47:34.084021 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:47:34.081827 ignition[666]: Ignition finished successfully Jan 30 12:47:34.085388 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 12:47:34.085810 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:47:34.090948 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 12:47:34.104239 ignition[771]: Ignition 2.19.0 Jan 30 12:47:34.104251 ignition[771]: Stage: kargs Jan 30 12:47:34.104431 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:34.104442 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:34.105181 ignition[771]: kargs: kargs passed Jan 30 12:47:34.105230 ignition[771]: Ignition finished successfully Jan 30 12:47:34.107683 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 12:47:34.118970 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 12:47:34.130084 ignition[779]: Ignition 2.19.0 Jan 30 12:47:34.130095 ignition[779]: Stage: disks Jan 30 12:47:34.130267 ignition[779]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:34.130277 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:34.132931 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:47:34.130981 ignition[779]: disks: disks passed Jan 30 12:47:34.133946 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:47:34.131040 ignition[779]: Ignition finished successfully Jan 30 12:47:34.135318 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:47:34.136856 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:47:34.137941 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:47:34.139417 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:47:34.150927 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:47:34.162608 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:47:34.167463 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 12:47:34.178864 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:47:34.227778 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 12:47:34.228246 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 12:47:34.229365 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:47:34.238833 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:47:34.240557 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 12:47:34.241612 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 12:47:34.241656 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:47:34.241679 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:47:34.246972 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 12:47:34.249411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 30 12:47:34.253768 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (798) Jan 30 12:47:34.258296 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:47:34.258341 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:47:34.258353 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:47:34.265757 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:47:34.267292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 12:47:34.309914 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:47:34.314427 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:47:34.318998 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:47:34.323087 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:47:34.412013 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:47:34.427899 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:47:34.429438 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:47:34.435766 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:47:34.453020 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 12:47:34.509133 ignition[916]: INFO : Ignition 2.19.0 Jan 30 12:47:34.509133 ignition[916]: INFO : Stage: mount Jan 30 12:47:34.510572 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:34.510572 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:34.510572 ignition[916]: INFO : mount: mount passed Jan 30 12:47:34.510572 ignition[916]: INFO : Ignition finished successfully Jan 30 12:47:34.513489 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:47:34.524872 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:47:34.884333 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:47:34.896967 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:47:34.902765 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Jan 30 12:47:34.904747 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:47:34.904766 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:47:34.904777 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:47:34.908755 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:47:34.909894 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:47:34.934249 ignition[943]: INFO : Ignition 2.19.0 Jan 30 12:47:34.934249 ignition[943]: INFO : Stage: files Jan 30 12:47:34.934249 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:34.934249 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:34.937529 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Jan 30 12:47:34.937529 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 12:47:34.937529 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 12:47:34.937529 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 12:47:34.941780 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 12:47:34.941780 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 30 12:47:34.941780 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Jan 30 12:47:34.937946 unknown[943]: wrote ssh authorized keys file for user: core Jan 30 12:47:35.267923 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 30 12:47:35.492582 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Jan 30 12:47:35.492582 ignition[943]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jan 30 12:47:35.496402 ignition[943]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:47:35.496402 ignition[943]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:47:35.496402 ignition[943]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jan 30 12:47:35.496402 ignition[943]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 12:47:35.523389 ignition[943]: INFO : 
files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:47:35.530108 ignition[943]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:47:35.533008 ignition[943]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 12:47:35.533008 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:47:35.533008 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:47:35.533008 ignition[943]: INFO : files: files passed Jan 30 12:47:35.533008 ignition[943]: INFO : Ignition finished successfully Jan 30 12:47:35.534822 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 12:47:35.542965 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 12:47:35.544808 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 12:47:35.549091 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 12:47:35.549192 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 12:47:35.553170 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 12:47:35.556521 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:47:35.556521 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:47:35.559881 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:47:35.559510 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:47:35.561409 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 12:47:35.574185 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 12:47:35.594549 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 12:47:35.594659 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 12:47:35.596464 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 12:47:35.597277 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 12:47:35.599156 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 12:47:35.599996 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 12:47:35.616901 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:47:35.626939 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 12:47:35.635152 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:47:35.636174 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:47:35.638243 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 12:47:35.639893 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 12:47:35.640026 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:47:35.642476 systemd[1]: Stopped target initrd.target - Initrd Default Target. 
Jan 30 12:47:35.644438 systemd[1]: Stopped target basic.target - Basic System. Jan 30 12:47:35.646080 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 12:47:35.647626 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:47:35.649485 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 12:47:35.651455 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 12:47:35.653203 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:47:35.655059 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 12:47:35.656991 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 12:47:35.658594 systemd[1]: Stopped target swap.target - Swaps. Jan 30 12:47:35.660062 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 12:47:35.660193 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:47:35.662586 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:47:35.664486 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:47:35.666284 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 12:47:35.669799 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:47:35.670901 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 12:47:35.671040 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 12:47:35.673597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 12:47:35.673706 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:47:35.675672 systemd[1]: Stopped target paths.target - Path Units. Jan 30 12:47:35.677289 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 12:47:35.680826 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:47:35.682248 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 12:47:35.684644 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 12:47:35.686593 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 12:47:35.686699 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:47:35.688583 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 12:47:35.688681 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:47:35.690703 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 12:47:35.690852 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:47:35.692784 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 12:47:35.692895 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 12:47:35.704972 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 12:47:35.706684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 12:47:35.707781 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 12:47:35.707916 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:47:35.710080 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 12:47:35.710185 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 30 12:47:35.715970 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 12:47:35.717006 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 12:47:35.719778 ignition[997]: INFO : Ignition 2.19.0 Jan 30 12:47:35.719778 ignition[997]: INFO : Stage: umount Jan 30 12:47:35.722360 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:47:35.722360 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:47:35.722360 ignition[997]: INFO : umount: umount passed Jan 30 12:47:35.722360 ignition[997]: INFO : Ignition finished successfully Jan 30 12:47:35.722756 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 12:47:35.722870 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 12:47:35.724679 systemd[1]: Stopped target network.target - Network. Jan 30 12:47:35.726348 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 12:47:35.726410 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 12:47:35.728223 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 12:47:35.728277 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 12:47:35.729716 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 12:47:35.729767 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 12:47:35.731625 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 12:47:35.731670 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 12:47:35.733547 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 12:47:35.737609 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 12:47:35.737841 systemd-networkd[765]: eth0: DHCPv6 lease lost Jan 30 12:47:35.740483 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 12:47:35.741014 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 12:47:35.741114 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 12:47:35.748804 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 12:47:35.748857 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:47:35.759871 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 12:47:35.760860 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 12:47:35.760923 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:47:35.764184 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:47:35.766888 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 12:47:35.767000 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 12:47:35.771556 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:47:35.771612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:47:35.773637 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 12:47:35.773688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 12:47:35.775988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 12:47:35.776034 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:47:35.778386 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 30 12:47:35.778516 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:47:35.780717 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 12:47:35.780824 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 12:47:35.783063 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 12:47:35.783114 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 12:47:35.784356 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 12:47:35.784393 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:47:35.786371 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 12:47:35.786419 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:47:35.788255 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 12:47:35.788298 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 12:47:35.791439 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:47:35.791492 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:47:35.804886 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 12:47:35.806005 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 12:47:35.806069 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:47:35.808215 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 12:47:35.808261 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:47:35.810434 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 12:47:35.810480 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:47:35.812514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:47:35.812558 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:47:35.814798 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 12:47:35.814887 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 12:47:35.817331 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 12:47:35.817417 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 12:47:35.819342 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 12:47:35.820491 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 12:47:35.820556 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 12:47:35.829928 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 12:47:35.837294 systemd[1]: Switching root. Jan 30 12:47:35.861772 systemd-journald[237]: Journal stopped Jan 30 12:47:36.596365 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
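With the pivot above, the initrd journal stops and PID 1 re-executes into the real root, so the Ignition result file written to /sysroot/etc/.ignition-result.json becomes /etc/.ignition-result.json. A hedged example of how the provisioning outcome can be reviewed later from a shell (standard commands, not output from this boot):

  # Ignition entries carry the "ignition" syslog identifier
  journalctl -t ignition
  # Summary written at the end of the files stage
  cat /etc/.ignition-result.json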
Jan 30 12:47:36.596420 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 12:47:36.596432 kernel: SELinux: policy capability open_perms=1 Jan 30 12:47:36.596442 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 12:47:36.596455 kernel: SELinux: policy capability always_check_network=0 Jan 30 12:47:36.596466 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 12:47:36.596478 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 12:47:36.596490 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 12:47:36.596504 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 12:47:36.596514 kernel: audit: type=1403 audit(1738241256.007:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 12:47:36.596526 systemd[1]: Successfully loaded SELinux policy in 32.522ms. Jan 30 12:47:36.596546 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.988ms. Jan 30 12:47:36.596558 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:47:36.596574 systemd[1]: Detected virtualization kvm. Jan 30 12:47:36.596585 systemd[1]: Detected architecture arm64. Jan 30 12:47:36.596596 systemd[1]: Detected first boot. Jan 30 12:47:36.596607 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:47:36.596619 zram_generator::config[1042]: No configuration found. Jan 30 12:47:36.596631 systemd[1]: Populated /etc with preset unit settings. Jan 30 12:47:36.596642 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 12:47:36.596654 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 12:47:36.596664 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 12:47:36.596676 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 12:47:36.596687 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 12:47:36.596698 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 12:47:36.596709 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 12:47:36.596720 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 12:47:36.596743 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 12:47:36.596758 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 12:47:36.596768 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 12:47:36.596780 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:47:36.596791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:47:36.596802 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 12:47:36.596814 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 12:47:36.596826 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
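The feature string systemd prints above (+PAM +AUDIT +SELINUX ... default-hierarchy=unified) is its compile-time build configuration, and the SELinux lines show the policy being loaded on first boot. Both can be re-checked on the running system; a small illustrative example (getenforce assumes the SELinux userland tools are present in the image):

  # Prints "systemd 255 (...)" plus the same +/- feature flags logged here
  systemctl --version
  # Current SELinux enforcement mode
  getenforce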
Jan 30 12:47:36.596839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:47:36.596850 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 12:47:36.596861 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:47:36.596872 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 12:47:36.596882 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 12:47:36.596893 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 12:47:36.596904 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 12:47:36.596915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:47:36.596928 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:47:36.596939 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:47:36.596950 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:47:36.596961 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 12:47:36.597055 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 12:47:36.597072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:47:36.597084 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:47:36.597095 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:47:36.597106 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 12:47:36.597120 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 12:47:36.597132 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 12:47:36.597143 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 12:47:36.597156 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 12:47:36.597167 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 12:47:36.597178 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 12:47:36.597189 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 12:47:36.597200 systemd[1]: Reached target machines.target - Containers. Jan 30 12:47:36.597211 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 12:47:36.597223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:47:36.597233 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:47:36.597244 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 12:47:36.597255 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:47:36.597265 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:47:36.597276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:47:36.597287 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 12:47:36.597298 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 12:47:36.597311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 12:47:36.597321 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 12:47:36.597332 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 12:47:36.597343 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 12:47:36.597353 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 12:47:36.597364 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:47:36.597375 kernel: loop: module loaded Jan 30 12:47:36.597387 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:47:36.597398 kernel: ACPI: bus type drm_connector registered Jan 30 12:47:36.597409 kernel: fuse: init (API version 7.39) Jan 30 12:47:36.597420 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 12:47:36.597431 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 12:47:36.597441 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:47:36.597452 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 12:47:36.597463 systemd[1]: Stopped verity-setup.service. Jan 30 12:47:36.597473 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 12:47:36.597484 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 12:47:36.597494 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 12:47:36.597507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 12:47:36.597540 systemd-journald[1109]: Collecting audit messages is disabled. Jan 30 12:47:36.597563 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 12:47:36.597574 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 12:47:36.597587 systemd-journald[1109]: Journal started Jan 30 12:47:36.597609 systemd-journald[1109]: Runtime Journal (/run/log/journal/3e9447c54fe74d9d987b4db96bc2d5d3) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:47:36.392123 systemd[1]: Queued start job for default target multi-user.target. Jan 30 12:47:36.410647 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 12:47:36.411026 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 12:47:36.600278 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:47:36.602778 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:47:36.604095 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 12:47:36.604223 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 12:47:36.605599 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 12:47:36.606856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:47:36.607094 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:47:36.608356 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:47:36.608509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:47:36.609664 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:47:36.609829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 12:47:36.611029 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 12:47:36.611179 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 12:47:36.612494 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:47:36.612637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:47:36.613885 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:47:36.615078 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 12:47:36.616623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 12:47:36.630383 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 12:47:36.644903 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 12:47:36.647291 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 12:47:36.648257 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 12:47:36.648308 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:47:36.650317 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 12:47:36.652568 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 12:47:36.654627 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 12:47:36.655596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:47:36.657951 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 12:47:36.659911 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 12:47:36.660807 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:47:36.661782 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 12:47:36.662631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:47:36.665979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:47:36.669545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 12:47:36.678173 systemd-journald[1109]: Time spent on flushing to /var/log/journal/3e9447c54fe74d9d987b4db96bc2d5d3 is 33.240ms for 838 entries. Jan 30 12:47:36.678173 systemd-journald[1109]: System Journal (/var/log/journal/3e9447c54fe74d9d987b4db96bc2d5d3) is 8.0M, max 195.6M, 187.6M free. Jan 30 12:47:36.719916 systemd-journald[1109]: Received client request to flush runtime journal. Jan 30 12:47:36.719989 kernel: loop0: detected capacity change from 0 to 114328 Jan 30 12:47:36.678100 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:47:36.683473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:47:36.685009 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 12:47:36.686109 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
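The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units above are all instances of systemd's modprobe@.service template, which lets one unit file load whichever module is named after the "@". A simplified sketch of how such a template is typically defined (details vary by distribution, so treat this as illustrative rather than the exact unit shipped here):

  # modprobe@.service (simplified sketch)
  [Unit]
  Description=Load Kernel Module %i
  DefaultDependencies=no
  Before=sysinit.target
  ConditionCapability=CAP_SYS_MODULE

  [Service]
  Type=oneshot
  ExecStart=-/usr/sbin/modprobe -abq %i

An instance such as modprobe@loop.service then expands %i to "loop" at start time.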
Jan 30 12:47:36.687327 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 12:47:36.696056 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 12:47:36.713509 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 12:47:36.719126 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:47:36.721951 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 12:47:36.726754 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 12:47:36.740314 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 12:47:36.740336 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 30 12:47:36.740997 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 12:47:36.742531 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 12:47:36.750279 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 12:47:36.753084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:47:36.758784 kernel: loop1: detected capacity change from 0 to 114432 Jan 30 12:47:36.764997 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 12:47:36.766575 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 12:47:36.767378 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 12:47:36.801767 kernel: loop2: detected capacity change from 0 to 201592 Jan 30 12:47:36.802617 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 12:47:36.816775 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:47:36.828309 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 30 12:47:36.828324 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Jan 30 12:47:36.832506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:47:36.844765 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 12:47:36.859764 kernel: loop4: detected capacity change from 0 to 114432 Jan 30 12:47:36.863880 kernel: loop5: detected capacity change from 0 to 201592 Jan 30 12:47:36.870806 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 12:47:36.871570 (sd-merge)[1181]: Merged extensions into '/usr'. Jan 30 12:47:36.875572 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 12:47:36.875588 systemd[1]: Reloading... Jan 30 12:47:36.927082 zram_generator::config[1207]: No configuration found. Jan 30 12:47:36.969371 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 12:47:37.023234 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:47:37.060488 systemd[1]: Reloading finished in 184 ms. Jan 30 12:47:37.094331 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
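The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, followed by a daemon reload so the newly visible units and binaries are picked up. A hedged example of inspecting and refreshing that merge on a running host:

  # Show merged extension images and the hierarchies they cover
  systemd-sysext status
  # Re-run the merge after adding or removing extension images
  systemd-sysext refresh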
Jan 30 12:47:37.095833 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 12:47:37.109068 systemd[1]: Starting ensure-sysext.service... Jan 30 12:47:37.111126 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:47:37.119636 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... Jan 30 12:47:37.119652 systemd[1]: Reloading... Jan 30 12:47:37.129157 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 12:47:37.129413 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 12:47:37.130050 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 12:47:37.130265 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 30 12:47:37.130316 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 30 12:47:37.133339 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:47:37.133351 systemd-tmpfiles[1243]: Skipping /boot Jan 30 12:47:37.140553 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:47:37.140571 systemd-tmpfiles[1243]: Skipping /boot Jan 30 12:47:37.179771 zram_generator::config[1270]: No configuration found. Jan 30 12:47:37.261140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:47:37.297567 systemd[1]: Reloading finished in 177 ms. Jan 30 12:47:37.312860 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 12:47:37.339311 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:47:37.346524 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:47:37.348833 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 12:47:37.350942 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 12:47:37.354048 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:47:37.359690 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:47:37.364840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 12:47:37.369647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:47:37.374080 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:47:37.378747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:47:37.384567 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:47:37.386930 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:47:37.387713 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:47:37.389784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:47:37.392189 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 30 12:47:37.392319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:47:37.407908 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 12:47:37.409033 systemd-udevd[1312]: Using default interface naming scheme 'v255'. Jan 30 12:47:37.410000 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:47:37.410136 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:47:37.414479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:47:37.426140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:47:37.428134 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:47:37.429991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:47:37.430817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:47:37.434818 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 12:47:37.439538 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 12:47:37.442084 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:47:37.443561 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 12:47:37.446771 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 12:47:37.448458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:47:37.448586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:47:37.451275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:47:37.451448 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:47:37.453490 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:47:37.453613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:47:37.455766 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 12:47:37.460612 augenrules[1356]: No rules Jan 30 12:47:37.459677 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:47:37.475102 systemd[1]: Finished ensure-sysext.service. Jan 30 12:47:37.477648 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 12:47:37.479187 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:47:37.490775 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:47:37.498942 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1341) Jan 30 12:47:37.494916 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:47:37.498043 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:47:37.502917 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:47:37.503917 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:47:37.509865 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 12:47:37.518505 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 12:47:37.520433 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 12:47:37.520795 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 12:47:37.522234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:47:37.524390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:47:37.525635 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:47:37.525806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:47:37.542885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:47:37.547330 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:47:37.550474 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:47:37.550640 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:47:37.555874 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 12:47:37.557015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:47:37.557378 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:47:37.557540 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:47:37.585348 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 12:47:37.599307 systemd-resolved[1310]: Positive Trust Anchors: Jan 30 12:47:37.599325 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:47:37.599361 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:47:37.609942 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 12:47:37.611144 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 12:47:37.614499 systemd-resolved[1310]: Defaulting to hostname 'linux'. Jan 30 12:47:37.619731 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:47:37.621316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:47:37.631281 systemd-networkd[1377]: lo: Link UP Jan 30 12:47:37.631295 systemd-networkd[1377]: lo: Gained carrier Jan 30 12:47:37.632037 systemd-networkd[1377]: Enumeration completed Jan 30 12:47:37.634154 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:47:37.635184 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 12:47:37.636429 systemd[1]: Reached target network.target - Network. Jan 30 12:47:37.636679 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:47:37.636692 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:47:37.637391 systemd-networkd[1377]: eth0: Link UP Jan 30 12:47:37.637399 systemd-networkd[1377]: eth0: Gained carrier Jan 30 12:47:37.637414 systemd-networkd[1377]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:47:37.638452 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 12:47:37.646232 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 12:47:37.649309 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 12:47:37.661421 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:47:37.673823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:47:37.692790 systemd-networkd[1377]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:47:37.694103 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Jan 30 12:47:37.695180 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 12:47:37.695233 systemd-timesyncd[1382]: Initial clock synchronization to Thu 2025-01-30 12:47:37.392634 UTC. Jan 30 12:47:37.698271 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 12:47:37.699505 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:47:37.701806 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:47:37.702639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 12:47:37.703644 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 12:47:37.704835 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 12:47:37.705687 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 12:47:37.706702 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 12:47:37.707772 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 12:47:37.707801 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:47:37.708445 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:47:37.709861 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 12:47:37.711900 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:47:37.723472 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:47:37.725802 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 12:47:37.727102 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:47:37.728102 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:47:37.728811 systemd[1]: Reached target basic.target - Basic System. 
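eth0 above is matched by /usr/lib/systemd/network/zz-default.network, the catch-all DHCP policy in the image, and then obtains 10.0.0.22/16 via DHCPv4. Conceptually that fallback unit amounts to something like the outline below; the real file may carry additional options, so this is an assumption-labelled sketch rather than a copy:

  # zz-default.network (illustrative outline)
  [Match]
  Name=*

  [Network]
  DHCP=yes

Runtime state can then be inspected with networkctl status eth0 and resolvectl status.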
Jan 30 12:47:37.729546 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:47:37.729577 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:47:37.730535 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:47:37.732248 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:47:37.734873 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:47:37.736874 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:47:37.741917 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:47:37.743853 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:47:37.747128 jq[1414]: false Jan 30 12:47:37.747501 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:47:37.749721 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:47:37.755046 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 12:47:37.759050 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 12:47:37.766319 dbus-daemon[1413]: [system] SELinux support is enabled Jan 30 12:47:37.772271 extend-filesystems[1415]: Found loop3 Jan 30 12:47:37.772271 extend-filesystems[1415]: Found loop4 Jan 30 12:47:37.772271 extend-filesystems[1415]: Found loop5 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda1 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda2 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda3 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found usr Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda4 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda6 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda7 Jan 30 12:47:37.774599 extend-filesystems[1415]: Found vda9 Jan 30 12:47:37.774599 extend-filesystems[1415]: Checking size of /dev/vda9 Jan 30 12:47:37.772696 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:47:37.777089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:47:37.779943 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:47:37.783014 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:47:37.784571 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:47:37.788912 jq[1432]: true Jan 30 12:47:37.789880 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:47:37.793911 extend-filesystems[1415]: Resized partition /dev/vda9 Jan 30 12:47:37.794945 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 12:47:37.795101 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:47:37.795346 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:47:37.795471 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
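extend-filesystems above walks the block devices, finds /dev/vda9 holding the root filesystem, and (as the resize2fs output just below shows) grows the mounted ext4 filesystem online to fill the enlarged partition. Done by hand the equivalent is roughly the following, assuming vda9 is the root partition as in this log:

  # Grow a mounted ext4 filesystem to the current size of its partition
  resize2fs /dev/vda9
  # Confirm the new size
  df -h /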
Jan 30 12:47:37.796642 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 12:47:37.796821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:47:37.799771 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1354) Jan 30 12:47:37.808343 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:47:37.819759 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:47:37.836175 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:47:37.845585 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:47:37.845620 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 12:47:37.849394 jq[1437]: true Jan 30 12:47:37.847730 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:47:37.854317 update_engine[1431]: I20250130 12:47:37.850455 1431 main.cc:92] Flatcar Update Engine starting Jan 30 12:47:37.847771 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:47:37.852811 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:47:37.855149 systemd-logind[1422]: New seat seat0. Jan 30 12:47:37.860947 update_engine[1431]: I20250130 12:47:37.860894 1431 update_check_scheduler.cc:74] Next update check in 5m39s Jan 30 12:47:37.866974 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:47:37.873649 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:47:37.878769 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:47:37.880052 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:47:37.889902 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:47:37.889902 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:47:37.889902 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:47:37.892509 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Jan 30 12:47:37.893496 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:47:37.893717 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:47:37.912756 bash[1463]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:47:37.915789 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:47:37.919939 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 12:47:37.937888 locksmithd[1456]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:47:38.037192 containerd[1440]: time="2025-01-30T12:47:38.037068221Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:47:38.061449 containerd[1440]: time="2025-01-30T12:47:38.061367430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.062815915Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.062851978Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.062867950Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063023672Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063041876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063090717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063103803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063245861Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063259601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063271686Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:47:38.063919 containerd[1440]: time="2025-01-30T12:47:38.063281154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063348546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063542178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063629622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063644555Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063729574Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:47:38.064151 containerd[1440]: time="2025-01-30T12:47:38.063788345Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:47:38.071809 containerd[1440]: time="2025-01-30T12:47:38.071779161Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:47:38.071949 containerd[1440]: time="2025-01-30T12:47:38.071935268Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:47:38.072027 containerd[1440]: time="2025-01-30T12:47:38.072015668Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:47:38.072107 containerd[1440]: time="2025-01-30T12:47:38.072095030Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:47:38.072177 containerd[1440]: time="2025-01-30T12:47:38.072165078Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:47:38.072429 containerd[1440]: time="2025-01-30T12:47:38.072409051Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:47:38.073061 containerd[1440]: time="2025-01-30T12:47:38.073038825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073294461Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073405998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073427012Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073447796Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073464692Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073482204Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073497907Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073515 containerd[1440]: time="2025-01-30T12:47:38.073516766Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073533700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073550058Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073565145Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073588584Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073607366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073622530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073637040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073653512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073670101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073686 containerd[1440]: time="2025-01-30T12:47:38.073685380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073708204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073726408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073755312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073769360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073784833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073800728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073824552Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073851532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.073868 containerd[1440]: time="2025-01-30T12:47:38.073867158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.074009 containerd[1440]: time="2025-01-30T12:47:38.073881668Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:47:38.074147 containerd[1440]: time="2025-01-30T12:47:38.074031231Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 12:47:38.074334 containerd[1440]: time="2025-01-30T12:47:38.074293447Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:47:38.074334 containerd[1440]: time="2025-01-30T12:47:38.074313192Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:47:38.074334 containerd[1440]: time="2025-01-30T12:47:38.074327586Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:47:38.074406 containerd[1440]: time="2025-01-30T12:47:38.074336785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.074406 containerd[1440]: time="2025-01-30T12:47:38.074351064Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:47:38.074517 containerd[1440]: time="2025-01-30T12:47:38.074442510Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:47:38.074517 containerd[1440]: time="2025-01-30T12:47:38.074456905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 12:47:38.074837 containerd[1440]: time="2025-01-30T12:47:38.074781125Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:47:38.074957 containerd[1440]: time="2025-01-30T12:47:38.074840281Z" level=info msg="Connect containerd service" Jan 30 12:47:38.074957 containerd[1440]: time="2025-01-30T12:47:38.074868492Z" level=info msg="using legacy CRI server" Jan 30 12:47:38.074957 containerd[1440]: time="2025-01-30T12:47:38.074878768Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:47:38.074957 containerd[1440]: time="2025-01-30T12:47:38.074951934Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:47:38.075619 containerd[1440]: time="2025-01-30T12:47:38.075594563Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:47:38.075846 containerd[1440]: time="2025-01-30T12:47:38.075800934Z" level=info msg="Start subscribing containerd event" Jan 30 12:47:38.075883 containerd[1440]: time="2025-01-30T12:47:38.075867517Z" level=info msg="Start recovering state" Jan 30 12:47:38.075948 containerd[1440]: time="2025-01-30T12:47:38.075936295Z" level=info msg="Start event monitor" Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.075955462Z" level=info msg="Start snapshots syncer" Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.075966161Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.075973128Z" level=info msg="Start streaming server" Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.076083664Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.076121652Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:47:38.076190 containerd[1440]: time="2025-01-30T12:47:38.076184964Z" level=info msg="containerd successfully booted in 0.040485s" Jan 30 12:47:38.076284 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:47:39.193577 sshd_keygen[1429]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:47:39.213888 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:47:39.227053 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:47:39.232656 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 12:47:39.234779 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:47:39.237218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:47:39.252724 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:47:39.255532 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:47:39.257554 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:47:39.258873 systemd[1]: Reached target getty.target - Login Prompts. 
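The CRI config dump above already fixes the runtime knobs that matter later in this log: overlayfs as the snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup=true, registry.k8s.io/pause:3.8 as the sandbox image, and /opt/cni/bin plus /etc/cni/net.d for CNI. A hedged way to view the same settings as TOML on a node like this (the config.toml fragment in the comments is only a sketch of equivalent settings, not Flatcar's actual file):

# Print the merged containerd configuration and narrow it to the CRI plugin section.
containerd config dump | sed -n '/io.containerd.grpc.v1.cri/,+40p' | head -n 60
# Roughly equivalent config.toml (version 2) fragment for the values shown in the dump:
#   [plugins."io.containerd.grpc.v1.cri"]
#     sandbox_image = "registry.k8s.io/pause:3.8"
#   [plugins."io.containerd.grpc.v1.cri".containerd]
#     snapshotter = "overlayfs"
#     default_runtime_name = "runc"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     runtime_type = "io.containerd.runc.v2"
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#     SystemdCgroup = true
#   [plugins."io.containerd.grpc.v1.cri".cni]
#     bin_dir = "/opt/cni/bin"
#     conf_dir = "/etc/cni/net.d"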
Jan 30 12:47:39.464854 systemd-networkd[1377]: eth0: Gained IPv6LL Jan 30 12:47:39.467443 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:47:39.469120 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:47:39.477988 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:47:39.480418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:47:39.482276 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:47:39.497314 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:47:39.497538 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:47:39.498782 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:47:39.505599 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:47:40.025389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:47:40.026971 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:47:40.030811 systemd[1]: Startup finished in 561ms (kernel) + 4.288s (initrd) + 4.066s (userspace) = 8.917s. Jan 30 12:47:40.031375 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:47:40.424881 kubelet[1518]: E0130 12:47:40.424760 1518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:47:40.426974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:47:40.427119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:47:44.595302 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:47:44.596414 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:53924.service - OpenSSH per-connection server daemon (10.0.0.1:53924). Jan 30 12:47:44.650749 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 53924 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:44.653340 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:44.664609 systemd-logind[1422]: New session 1 of user core. Jan 30 12:47:44.665758 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 12:47:44.676067 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:47:44.689491 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:47:44.693043 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:47:44.701820 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:47:44.776404 systemd[1535]: Queued start job for default target default.target. Jan 30 12:47:44.786676 systemd[1535]: Created slice app.slice - User Application Slice. Jan 30 12:47:44.786743 systemd[1535]: Reached target paths.target - Paths. Jan 30 12:47:44.786758 systemd[1535]: Reached target timers.target - Timers. Jan 30 12:47:44.788026 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... 
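The kubelet exit above is expected first-boot behaviour: the unit starts before anything has written /var/lib/kubelet/config.yaml, fails, and is started again later in this log (at 12:47:46). On a kubeadm-managed node that file is dropped by kubeadm join; purely as an illustrative stand-in (not what this node's provisioning actually writes), a minimal KubeletConfiguration would look like:

# Illustrative only: create the config file the failing unit is looking for.
sudo mkdir -p /var/lib/kubelet
sudo tee /var/lib/kubelet/config.yaml >/dev/null <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                     # matches CgroupDriver:systemd seen later in this log
staticPodPath: /etc/kubernetes/manifests  # the static pod path the kubelet warns about below
EOF
sudo systemctl restart kubelet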
Jan 30 12:47:44.798718 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:47:44.798888 systemd[1535]: Reached target sockets.target - Sockets. Jan 30 12:47:44.798909 systemd[1535]: Reached target basic.target - Basic System. Jan 30 12:47:44.798945 systemd[1535]: Reached target default.target - Main User Target. Jan 30 12:47:44.798971 systemd[1535]: Startup finished in 89ms. Jan 30 12:47:44.799169 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:47:44.800772 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:47:44.865804 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:53926.service - OpenSSH per-connection server daemon (10.0.0.1:53926). Jan 30 12:47:44.910012 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 53926 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:44.911869 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:44.920845 systemd-logind[1422]: New session 2 of user core. Jan 30 12:47:44.929989 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:47:44.984322 sshd[1546]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:44.996126 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:53926.service: Deactivated successfully. Jan 30 12:47:44.997577 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:47:45.003057 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:47:45.015178 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:53934.service - OpenSSH per-connection server daemon (10.0.0.1:53934). Jan 30 12:47:45.016347 systemd-logind[1422]: Removed session 2. Jan 30 12:47:45.050638 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 53934 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:45.052319 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:45.056591 systemd-logind[1422]: New session 3 of user core. Jan 30 12:47:45.066956 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:47:45.120987 sshd[1553]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:45.135284 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:53934.service: Deactivated successfully. Jan 30 12:47:45.139341 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:47:45.141858 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:47:45.157106 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:53940.service - OpenSSH per-connection server daemon (10.0.0.1:53940). Jan 30 12:47:45.158569 systemd-logind[1422]: Removed session 3. Jan 30 12:47:45.191913 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 53940 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:45.194057 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:45.198501 systemd-logind[1422]: New session 4 of user core. Jan 30 12:47:45.211963 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:47:45.265656 sshd[1560]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:45.282224 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:53940.service: Deactivated successfully. Jan 30 12:47:45.284271 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:47:45.285598 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. 
Jan 30 12:47:45.287029 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954). Jan 30 12:47:45.287833 systemd-logind[1422]: Removed session 4. Jan 30 12:47:45.329080 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:45.332159 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:45.341798 systemd-logind[1422]: New session 5 of user core. Jan 30 12:47:45.353057 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:47:45.429784 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:47:45.430381 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:47:45.450046 sudo[1570]: pam_unix(sudo:session): session closed for user root Jan 30 12:47:45.454951 sshd[1567]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:45.468615 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:53954.service: Deactivated successfully. Jan 30 12:47:45.470261 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:47:45.471699 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:47:45.487175 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966). Jan 30 12:47:45.488128 systemd-logind[1422]: Removed session 5. Jan 30 12:47:45.524201 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:45.526161 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:45.530653 systemd-logind[1422]: New session 6 of user core. Jan 30 12:47:45.540047 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:47:45.593302 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:47:45.593617 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:47:45.597200 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 30 12:47:45.603550 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 12:47:45.603896 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:47:45.628137 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 12:47:45.629843 auditctl[1582]: No rules Jan 30 12:47:45.630193 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:47:45.630364 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 12:47:45.633672 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:47:45.663760 augenrules[1600]: No rules Jan 30 12:47:45.665312 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:47:45.666698 sudo[1578]: pam_unix(sudo:session): session closed for user root Jan 30 12:47:45.669703 sshd[1575]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:45.680755 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:53966.service: Deactivated successfully. Jan 30 12:47:45.683534 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:47:45.685832 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. 
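The sudo/auditctl exchange above removes the shipped SELinux and default audit rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". A quick way to confirm the resulting state on such a node (assumes the standard auditd userspace tools):

sudo auditctl -l         # list the rules currently loaded in the kernel; "No rules" matches the log
sudo augenrules --check  # compare /etc/audit/rules.d/ against the compiled /etc/audit/audit.rules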
Jan 30 12:47:45.696110 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:53982.service - OpenSSH per-connection server daemon (10.0.0.1:53982). Jan 30 12:47:45.697049 systemd-logind[1422]: Removed session 6. Jan 30 12:47:45.734882 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 53982 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:47:45.736632 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:47:45.740856 systemd-logind[1422]: New session 7 of user core. Jan 30 12:47:45.753967 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:47:45.808104 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:47:45.808373 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:47:45.832079 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:47:45.851679 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:47:45.853787 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:47:46.396340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:47:46.405098 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:47:46.428503 systemd[1]: Reloading requested from client PID 1653 ('systemctl') (unit session-7.scope)... Jan 30 12:47:46.428519 systemd[1]: Reloading... Jan 30 12:47:46.494757 zram_generator::config[1688]: No configuration found. Jan 30 12:47:46.705794 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:47:46.758849 systemd[1]: Reloading finished in 330 ms. Jan 30 12:47:46.797013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:47:46.799520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:47:46.801155 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 12:47:46.801404 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:47:46.804047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:47:46.906938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:47:46.911688 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:47:46.945222 kubelet[1738]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:47:46.945222 kubelet[1738]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 12:47:46.945222 kubelet[1738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
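The three deprecation warnings above concern flags that kubeadm-style deployments still pass on the command line. Two of them have direct KubeletConfiguration (v1beta1) equivalents; the sketch below reuses the socket and Flexvolume paths that appear elsewhere in this log, and the grep is only a guess at where the flags might be set:

# Config-file equivalents (sketch):
#   containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
#   volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# --pod-infra-container-image has no config field; per the warning it goes away once the
# image garbage collector learns the sandbox image from CRI.
# Find which unit file or drop-in still passes the deprecated flags:
systemctl cat kubelet.service | grep -n -e container-runtime-endpoint -e pod-infra-container-image -e volume-plugin-dir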
Jan 30 12:47:46.945545 kubelet[1738]: I0130 12:47:46.945283 1738 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:47:47.493380 kubelet[1738]: I0130 12:47:47.493334 1738 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 12:47:47.493380 kubelet[1738]: I0130 12:47:47.493374 1738 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:47:47.493988 kubelet[1738]: I0130 12:47:47.493968 1738 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 12:47:47.559630 kubelet[1738]: I0130 12:47:47.559585 1738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:47:47.573887 kubelet[1738]: E0130 12:47:47.573854 1738 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 12:47:47.573887 kubelet[1738]: I0130 12:47:47.573888 1738 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 12:47:47.578083 kubelet[1738]: I0130 12:47:47.577978 1738 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 12:47:47.579761 kubelet[1738]: I0130 12:47:47.579217 1738 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:47:47.579761 kubelet[1738]: I0130 12:47:47.579278 1738 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.22","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 12:47:47.579761 kubelet[1738]: I0130 12:47:47.579563 1738 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:47:47.579761 kubelet[1738]: I0130 12:47:47.579572 1738 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 
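The NodeConfig dump above encodes the kubelet's defaults on this node: systemd cgroup driver, cgroup root "/", and the standard hard-eviction thresholds. Expressed in KubeletConfiguration terms those thresholds are the usual evictionHard map, and once the node has registered (further down) the live values can be read back through the API server (assumes a working kubeconfig):

# evictionHard equivalent of the HardEvictionThresholds above:
#   memory.available: "100Mi"
#   nodefs.available: "10%"
#   nodefs.inodesFree: "5%"
#   imagefs.available: "15%"
#   imagefs.inodesFree: "5%"
# Read the running kubelet configuration back via the node proxy subresource:
kubectl get --raw "/api/v1/nodes/10.0.0.22/proxy/configz" | python3 -m json.tool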
12:47:47.580322 kubelet[1738]: I0130 12:47:47.579847 1738 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:47:47.582988 kubelet[1738]: I0130 12:47:47.582942 1738 kubelet.go:446] "Attempting to sync node with API server" Jan 30 12:47:47.582988 kubelet[1738]: I0130 12:47:47.582974 1738 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:47:47.582988 kubelet[1738]: I0130 12:47:47.582996 1738 kubelet.go:352] "Adding apiserver pod source" Jan 30 12:47:47.583160 kubelet[1738]: I0130 12:47:47.583007 1738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:47:47.583321 kubelet[1738]: E0130 12:47:47.583279 1738 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:47.583362 kubelet[1738]: E0130 12:47:47.583341 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:47.586378 kubelet[1738]: I0130 12:47:47.586336 1738 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:47:47.589460 kubelet[1738]: I0130 12:47:47.587262 1738 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:47:47.589460 kubelet[1738]: W0130 12:47:47.587515 1738 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 12:47:47.589460 kubelet[1738]: I0130 12:47:47.589149 1738 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 12:47:47.589460 kubelet[1738]: I0130 12:47:47.589183 1738 server.go:1287] "Started kubelet" Jan 30 12:47:47.589460 kubelet[1738]: I0130 12:47:47.589425 1738 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:47:47.590619 kubelet[1738]: I0130 12:47:47.590578 1738 server.go:490] "Adding debug handlers to kubelet server" Jan 30 12:47:47.596413 kubelet[1738]: I0130 12:47:47.596304 1738 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:47:47.596945 kubelet[1738]: I0130 12:47:47.596696 1738 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:47:47.598816 kubelet[1738]: I0130 12:47:47.598181 1738 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:47:47.598816 kubelet[1738]: I0130 12:47:47.598615 1738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 12:47:47.600695 kubelet[1738]: I0130 12:47:47.600655 1738 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 12:47:47.601227 kubelet[1738]: E0130 12:47:47.601187 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:47.602716 kubelet[1738]: I0130 12:47:47.601977 1738 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:47:47.602716 kubelet[1738]: I0130 12:47:47.602172 1738 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:47:47.602716 kubelet[1738]: I0130 12:47:47.602273 1738 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:47:47.603398 
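"Started kubelet" above means the main API is listening on 0.0.0.0:10250 and the podresources socket exists, even though the node object is still missing. Two cheap liveness checks from the node itself (port 10248 is the kubelet's default healthz port; adjust if this deployment overrides it):

curl -s http://127.0.0.1:10248/healthz; echo        # local, unauthenticated health endpoint
ls -l /var/lib/kubelet/pod-resources/kubelet.sock   # podresources gRPC socket from the log above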
kubelet[1738]: I0130 12:47:47.602978 1738 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:47:47.608110 kubelet[1738]: E0130 12:47:47.606747 1738 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:47:47.608110 kubelet[1738]: I0130 12:47:47.606897 1738 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:47:47.608110 kubelet[1738]: E0130 12:47:47.608030 1738 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.22\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 12:47:47.609546 kubelet[1738]: W0130 12:47:47.609481 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 12:47:47.609742 kubelet[1738]: E0130 12:47:47.609703 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 12:47:47.610893 kubelet[1738]: E0130 12:47:47.609872 1738 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.22.181f79392893aa5c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.22,UID:10.0.0.22,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.22,},FirstTimestamp:2025-01-30 12:47:47.589163612 +0000 UTC m=+0.674368620,LastTimestamp:2025-01-30 12:47:47.589163612 +0000 UTC m=+0.674368620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.22,}" Jan 30 12:47:47.611184 kubelet[1738]: W0130 12:47:47.611161 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 12:47:47.611317 kubelet[1738]: E0130 12:47:47.611280 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 30 12:47:47.611605 kubelet[1738]: W0130 12:47:47.611583 1738 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.22" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 12:47:47.611702 kubelet[1738]: E0130 12:47:47.611688 1738 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.22\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in 
API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 12:47:47.625365 kubelet[1738]: I0130 12:47:47.625323 1738 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 12:47:47.625365 kubelet[1738]: I0130 12:47:47.625346 1738 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 12:47:47.625365 kubelet[1738]: I0130 12:47:47.625370 1738 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:47:47.702262 kubelet[1738]: E0130 12:47:47.702173 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:47.713996 kubelet[1738]: I0130 12:47:47.713961 1738 policy_none.go:49] "None policy: Start" Jan 30 12:47:47.713996 kubelet[1738]: I0130 12:47:47.714008 1738 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 12:47:47.714132 kubelet[1738]: I0130 12:47:47.714021 1738 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:47:47.721256 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 12:47:47.738845 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 12:47:47.741961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 12:47:47.749039 kubelet[1738]: I0130 12:47:47.748986 1738 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:47:47.749168 kubelet[1738]: I0130 12:47:47.749067 1738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:47:47.749238 kubelet[1738]: I0130 12:47:47.749204 1738 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 12:47:47.749275 kubelet[1738]: I0130 12:47:47.749216 1738 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:47:47.749652 kubelet[1738]: I0130 12:47:47.749612 1738 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:47:47.751310 kubelet[1738]: I0130 12:47:47.751144 1738 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:47:47.751310 kubelet[1738]: I0130 12:47:47.751181 1738 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 12:47:47.751310 kubelet[1738]: I0130 12:47:47.751218 1738 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 12:47:47.751310 kubelet[1738]: I0130 12:47:47.751227 1738 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 12:47:47.751495 kubelet[1738]: E0130 12:47:47.751478 1738 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 12:47:47.751523 kubelet[1738]: E0130 12:47:47.751501 1738 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 12:47:47.751543 kubelet[1738]: E0130 12:47:47.751531 1738 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.22\" not found" Jan 30 12:47:47.820967 kubelet[1738]: E0130 12:47:47.820900 1738 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.22\" not found" node="10.0.0.22" Jan 30 12:47:47.850861 kubelet[1738]: I0130 12:47:47.850817 1738 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.22" Jan 30 12:47:47.855769 kubelet[1738]: I0130 12:47:47.855652 1738 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.22" Jan 30 12:47:47.855769 kubelet[1738]: E0130 12:47:47.855692 1738 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.22\": node \"10.0.0.22\" not found" Jan 30 12:47:47.864005 kubelet[1738]: I0130 12:47:47.863964 1738 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 12:47:47.864517 containerd[1440]: time="2025-01-30T12:47:47.864477306Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 12:47:47.864860 kubelet[1738]: I0130 12:47:47.864702 1738 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 12:47:47.874447 kubelet[1738]: E0130 12:47:47.874403 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:47.947471 sudo[1611]: pam_unix(sudo:session): session closed for user root Jan 30 12:47:47.949837 sshd[1608]: pam_unix(sshd:session): session closed for user core Jan 30 12:47:47.952617 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:53982.service: Deactivated successfully. Jan 30 12:47:47.954954 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:47:47.955826 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:47:47.957107 systemd-logind[1422]: Removed session 7. 
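The "No cni config template is specified" message pairs with the earlier "no network config found in /etc/cni/net.d" error: containerd simply waits until some component drops a CNI conflist there, which in this cluster is cilium's job once its pod (created below) is running. The sketch is purely hypothetical, since the cilium agent generates the real file, and the 192.168.1.0/24 podCIDR above is pushed over CRI rather than written into it:

# Shape of a minimal conflist that would satisfy containerd (hypothetical; do not hand-write this on a cilium node):
cat <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "cilium",
  "plugins": [ { "type": "cilium-cni" } ]
}
EOF
ls -l /etc/cni/net.d/   # watch for the file the CNI plugin eventually drops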
Jan 30 12:47:47.975378 kubelet[1738]: E0130 12:47:47.975324 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.076194 kubelet[1738]: E0130 12:47:48.076001 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.176956 kubelet[1738]: E0130 12:47:48.176912 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.277576 kubelet[1738]: E0130 12:47:48.277513 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.377842 kubelet[1738]: E0130 12:47:48.377702 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.478654 kubelet[1738]: E0130 12:47:48.478577 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.496843 kubelet[1738]: I0130 12:47:48.496795 1738 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 12:47:48.496991 kubelet[1738]: W0130 12:47:48.496962 1738 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 12:47:48.579519 kubelet[1738]: E0130 12:47:48.579472 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.583626 kubelet[1738]: E0130 12:47:48.583595 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:48.680167 kubelet[1738]: E0130 12:47:48.680023 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.780826 kubelet[1738]: E0130 12:47:48.780771 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.881323 kubelet[1738]: E0130 12:47:48.881255 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:48.981873 kubelet[1738]: E0130 12:47:48.981728 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:49.082427 kubelet[1738]: E0130 12:47:49.082379 1738 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.22\" not found" Jan 30 12:47:49.584707 kubelet[1738]: E0130 12:47:49.584649 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:49.584831 kubelet[1738]: I0130 12:47:49.584716 1738 apiserver.go:52] "Watching apiserver" Jan 30 12:47:49.617358 systemd[1]: Created slice kubepods-besteffort-pod8180e49d_fcf7_4425_b3a8_26bfc045c6bb.slice - libcontainer container kubepods-besteffort-pod8180e49d_fcf7_4425_b3a8_26bfc045c6bb.slice. Jan 30 12:47:49.630073 systemd[1]: Created slice kubepods-burstable-pode7a86f44_a9bb_41e9_a2ad_e65ad5422464.slice - libcontainer container kubepods-burstable-pode7a86f44_a9bb_41e9_a2ad_e65ad5422464.slice. 
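The repeating "node \"10.0.0.22\" not found" lines and the earlier system:anonymous denials last only until the bootstrap credentials are swapped: the "Certificate rotation detected" message above is the point where the kubelet reconnects with its real client certificate. From any machine with cluster-admin credentials (the kubeconfig path below is an assumption), the registration and its lease can be confirmed with:

export KUBECONFIG=/etc/kubernetes/admin.conf   # assumption: adjust to wherever admin credentials live
kubectl get node 10.0.0.22 -o wide
kubectl -n kube-node-lease get lease 10.0.0.22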
Jan 30 12:47:49.706043 kubelet[1738]: I0130 12:47:49.705982 1738 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:47:49.713189 kubelet[1738]: I0130 12:47:49.713130 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-clustermesh-secrets\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713189 kubelet[1738]: I0130 12:47:49.713180 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-net\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713201 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-kernel\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713225 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8180e49d-fcf7-4425-b3a8-26bfc045c6bb-kube-proxy\") pod \"kube-proxy-72jd9\" (UID: \"8180e49d-fcf7-4425-b3a8-26bfc045c6bb\") " pod="kube-system/kube-proxy-72jd9" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713253 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hostproc\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713271 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-cgroup\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713289 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-etc-cni-netd\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713351 kubelet[1738]: I0130 12:47:49.713307 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-bpf-maps\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713514 kubelet[1738]: I0130 12:47:49.713323 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k695m\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-kube-api-access-k695m\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 
12:47:49.713514 kubelet[1738]: I0130 12:47:49.713340 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hubble-tls\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713514 kubelet[1738]: I0130 12:47:49.713354 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8180e49d-fcf7-4425-b3a8-26bfc045c6bb-lib-modules\") pod \"kube-proxy-72jd9\" (UID: \"8180e49d-fcf7-4425-b3a8-26bfc045c6bb\") " pod="kube-system/kube-proxy-72jd9" Jan 30 12:47:49.713514 kubelet[1738]: I0130 12:47:49.713370 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6rn9\" (UniqueName: \"kubernetes.io/projected/8180e49d-fcf7-4425-b3a8-26bfc045c6bb-kube-api-access-j6rn9\") pod \"kube-proxy-72jd9\" (UID: \"8180e49d-fcf7-4425-b3a8-26bfc045c6bb\") " pod="kube-system/kube-proxy-72jd9" Jan 30 12:47:49.713514 kubelet[1738]: I0130 12:47:49.713386 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-run\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713630 kubelet[1738]: I0130 12:47:49.713427 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-xtables-lock\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713630 kubelet[1738]: I0130 12:47:49.713496 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-config-path\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713630 kubelet[1738]: I0130 12:47:49.713518 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cni-path\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713630 kubelet[1738]: I0130 12:47:49.713534 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-lib-modules\") pod \"cilium-kf2gd\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " pod="kube-system/cilium-kf2gd" Jan 30 12:47:49.713630 kubelet[1738]: I0130 12:47:49.713556 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8180e49d-fcf7-4425-b3a8-26bfc045c6bb-xtables-lock\") pod \"kube-proxy-72jd9\" (UID: \"8180e49d-fcf7-4425-b3a8-26bfc045c6bb\") " pod="kube-system/kube-proxy-72jd9" Jan 30 12:47:49.927498 kubelet[1738]: E0130 12:47:49.927346 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
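Every "VerifyControllerAttachedVolume started" entry above corresponds to a volume in the cilium-kf2gd or kube-proxy-72jd9 pod spec: hostPath mounts such as bpf-maps and cilium-run, the kube-proxy ConfigMap, the clustermesh secret, and the projected service-account tokens. A hedged cross-check against the API objects, assuming working credentials:

kubectl -n kube-system get pod cilium-kf2gd -o jsonpath='{.spec.volumes[*].name}'; echo
kubectl -n kube-system get pod kube-proxy-72jd9 -o jsonpath='{.spec.volumes[*].name}'; echo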
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:49.928744 containerd[1440]: time="2025-01-30T12:47:49.928672886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72jd9,Uid:8180e49d-fcf7-4425-b3a8-26bfc045c6bb,Namespace:kube-system,Attempt:0,}" Jan 30 12:47:49.943080 kubelet[1738]: E0130 12:47:49.943027 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:49.944117 containerd[1440]: time="2025-01-30T12:47:49.943823712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf2gd,Uid:e7a86f44-a9bb-41e9-a2ad-e65ad5422464,Namespace:kube-system,Attempt:0,}" Jan 30 12:47:50.438666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500677030.mount: Deactivated successfully. Jan 30 12:47:50.447806 containerd[1440]: time="2025-01-30T12:47:50.447753862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:47:50.448898 containerd[1440]: time="2025-01-30T12:47:50.448868866Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:47:50.449288 containerd[1440]: time="2025-01-30T12:47:50.449219019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:47:50.450003 containerd[1440]: time="2025-01-30T12:47:50.449886933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:47:50.450613 containerd[1440]: time="2025-01-30T12:47:50.450549727Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:47:50.454859 containerd[1440]: time="2025-01-30T12:47:50.454795309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:47:50.455901 containerd[1440]: time="2025-01-30T12:47:50.455861805Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.753932ms" Jan 30 12:47:50.456879 containerd[1440]: time="2025-01-30T12:47:50.456846172Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.048727ms" Jan 30 12:47:50.586760 kubelet[1738]: E0130 12:47:50.585444 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:50.628681 containerd[1440]: time="2025-01-30T12:47:50.628569212Z" level=info msg="loading plugin 
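The "Nameserver limits exceeded" warnings mean the resolv.conf the kubelet hands to pods lists more than the three nameservers glibc supports, so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are kept. To see where the extra entries come from on the node (the kubelet may point at a different file via its resolvConf setting, so check both):

grep -c '^nameserver' /etc/resolv.conf                    # how many nameservers are configured
cat /etc/resolv.conf
resolvectl status 2>/dev/null | grep -A3 'DNS Servers'    # systemd-resolved's view, if it is in use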
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:47:50.628681 containerd[1440]: time="2025-01-30T12:47:50.628627167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:47:50.628681 containerd[1440]: time="2025-01-30T12:47:50.628638917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:47:50.628899 containerd[1440]: time="2025-01-30T12:47:50.628806869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:47:50.628899 containerd[1440]: time="2025-01-30T12:47:50.628799009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:47:50.628965 containerd[1440]: time="2025-01-30T12:47:50.628936276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:47:50.629029 containerd[1440]: time="2025-01-30T12:47:50.628992683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:47:50.632538 containerd[1440]: time="2025-01-30T12:47:50.632468650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:47:50.784946 systemd[1]: Started cri-containerd-72a2fd049093c99a3bdd14932c085462ef2039b9d946d94ce751ea8e111e2300.scope - libcontainer container 72a2fd049093c99a3bdd14932c085462ef2039b9d946d94ce751ea8e111e2300. Jan 30 12:47:50.786285 systemd[1]: Started cri-containerd-7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172.scope - libcontainer container 7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172. Jan 30 12:47:50.811085 containerd[1440]: time="2025-01-30T12:47:50.811034065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-72jd9,Uid:8180e49d-fcf7-4425-b3a8-26bfc045c6bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"72a2fd049093c99a3bdd14932c085462ef2039b9d946d94ce751ea8e111e2300\"" Jan 30 12:47:50.812535 kubelet[1738]: E0130 12:47:50.812471 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:50.814337 containerd[1440]: time="2025-01-30T12:47:50.814276584Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 12:47:50.817539 containerd[1440]: time="2025-01-30T12:47:50.817500128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kf2gd,Uid:e7a86f44-a9bb-41e9-a2ad-e65ad5422464,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\"" Jan 30 12:47:50.818394 kubelet[1738]: E0130 12:47:50.818366 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:51.586368 kubelet[1738]: E0130 12:47:51.586300 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:51.881152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2468899032.mount: Deactivated successfully. 
Jan 30 12:47:52.096772 containerd[1440]: time="2025-01-30T12:47:52.096673706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:47:52.097554 containerd[1440]: time="2025-01-30T12:47:52.097509167Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 30 12:47:52.098793 containerd[1440]: time="2025-01-30T12:47:52.098757685Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:47:52.104577 containerd[1440]: time="2025-01-30T12:47:52.104530669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:47:52.105509 containerd[1440]: time="2025-01-30T12:47:52.105473063Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.291158434s" Jan 30 12:47:52.105595 containerd[1440]: time="2025-01-30T12:47:52.105511477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 30 12:47:52.107043 containerd[1440]: time="2025-01-30T12:47:52.106901486Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 12:47:52.107822 containerd[1440]: time="2025-01-30T12:47:52.107788842Z" level=info msg="CreateContainer within sandbox \"72a2fd049093c99a3bdd14932c085462ef2039b9d946d94ce751ea8e111e2300\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:47:52.121472 containerd[1440]: time="2025-01-30T12:47:52.121417198Z" level=info msg="CreateContainer within sandbox \"72a2fd049093c99a3bdd14932c085462ef2039b9d946d94ce751ea8e111e2300\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf\"" Jan 30 12:47:52.122206 containerd[1440]: time="2025-01-30T12:47:52.122088899Z" level=info msg="StartContainer for \"c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf\"" Jan 30 12:47:52.144470 systemd[1]: run-containerd-runc-k8s.io-c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf-runc.L8E2SI.mount: Deactivated successfully. Jan 30 12:47:52.158925 systemd[1]: Started cri-containerd-c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf.scope - libcontainer container c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf. 
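At this point both sandboxes exist and the kube-proxy image has been pulled and started inside its cri-containerd scope. The same state can be read straight from containerd with crictl, using the socket path this log already shows (setting it once in /etc/crictl.yaml avoids repeating the flag):

ENDPOINT=unix:///run/containerd/containerd.sock
sudo crictl --runtime-endpoint "$ENDPOINT" pods                              # kube-proxy-72jd9 and cilium-kf2gd sandboxes
sudo crictl --runtime-endpoint "$ENDPOINT" images | grep -E 'pause|kube-proxy'
sudo crictl --runtime-endpoint "$ENDPOINT" ps --name kube-proxy              # the running kube-proxy container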
Jan 30 12:47:52.208843 containerd[1440]: time="2025-01-30T12:47:52.208775248Z" level=info msg="StartContainer for \"c73024816070e1d9d07e4f0b4244e71c802ef80c3a106bd3d69eee48c091ccbf\" returns successfully" Jan 30 12:47:52.586576 kubelet[1738]: E0130 12:47:52.586523 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:52.765173 kubelet[1738]: E0130 12:47:52.765127 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:52.777275 kubelet[1738]: I0130 12:47:52.777201 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-72jd9" podStartSLOduration=4.484706131 podStartE2EDuration="5.777181834s" podCreationTimestamp="2025-01-30 12:47:47 +0000 UTC" firstStartedPulling="2025-01-30 12:47:50.81386002 +0000 UTC m=+3.899065029" lastFinishedPulling="2025-01-30 12:47:52.106335723 +0000 UTC m=+5.191540732" observedRunningTime="2025-01-30 12:47:52.774112793 +0000 UTC m=+5.859317801" watchObservedRunningTime="2025-01-30 12:47:52.777181834 +0000 UTC m=+5.862386843" Jan 30 12:47:53.586854 kubelet[1738]: E0130 12:47:53.586812 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:53.765712 kubelet[1738]: E0130 12:47:53.765677 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:54.588058 kubelet[1738]: E0130 12:47:54.588018 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:55.152485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1279215047.mount: Deactivated successfully. 
Jan 30 12:47:55.588790 kubelet[1738]: E0130 12:47:55.588752 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:56.442477 containerd[1440]: time="2025-01-30T12:47:56.442424548Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:47:56.443522 containerd[1440]: time="2025-01-30T12:47:56.443210094Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 12:47:56.448235 containerd[1440]: time="2025-01-30T12:47:56.448163946Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:47:56.449865 containerd[1440]: time="2025-01-30T12:47:56.449813648Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.342876346s" Jan 30 12:47:56.449865 containerd[1440]: time="2025-01-30T12:47:56.449854628Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 12:47:56.451742 containerd[1440]: time="2025-01-30T12:47:56.451698026Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:47:56.463563 containerd[1440]: time="2025-01-30T12:47:56.463499834Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\"" Jan 30 12:47:56.464059 containerd[1440]: time="2025-01-30T12:47:56.464031132Z" level=info msg="StartContainer for \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\"" Jan 30 12:47:56.491951 systemd[1]: Started cri-containerd-4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb.scope - libcontainer container 4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb. Jan 30 12:47:56.522686 containerd[1440]: time="2025-01-30T12:47:56.522629140Z" level=info msg="StartContainer for \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\" returns successfully" Jan 30 12:47:56.575079 systemd[1]: cri-containerd-4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb.scope: Deactivated successfully. 
Jan 30 12:47:56.589713 kubelet[1738]: E0130 12:47:56.589676 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:56.739118 containerd[1440]: time="2025-01-30T12:47:56.739022575Z" level=info msg="shim disconnected" id=4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb namespace=k8s.io Jan 30 12:47:56.739118 containerd[1440]: time="2025-01-30T12:47:56.739078144Z" level=warning msg="cleaning up after shim disconnected" id=4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb namespace=k8s.io Jan 30 12:47:56.739118 containerd[1440]: time="2025-01-30T12:47:56.739087631Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:47:56.771950 kubelet[1738]: E0130 12:47:56.771913 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:56.773840 containerd[1440]: time="2025-01-30T12:47:56.773797001Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:47:56.797284 containerd[1440]: time="2025-01-30T12:47:56.797178299Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\"" Jan 30 12:47:56.797997 containerd[1440]: time="2025-01-30T12:47:56.797823646Z" level=info msg="StartContainer for \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\"" Jan 30 12:47:56.821935 systemd[1]: Started cri-containerd-03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783.scope - libcontainer container 03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783. Jan 30 12:47:56.843010 containerd[1440]: time="2025-01-30T12:47:56.842968071Z" level=info msg="StartContainer for \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\" returns successfully" Jan 30 12:47:56.858637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:47:56.858897 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:47:56.859028 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:47:56.868301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:47:56.868911 systemd[1]: cri-containerd-03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783.scope: Deactivated successfully. Jan 30 12:47:56.886090 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:47:56.908349 containerd[1440]: time="2025-01-30T12:47:56.908271045Z" level=info msg="shim disconnected" id=03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783 namespace=k8s.io Jan 30 12:47:56.908349 containerd[1440]: time="2025-01-30T12:47:56.908327731Z" level=warning msg="cleaning up after shim disconnected" id=03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783 namespace=k8s.io Jan 30 12:47:56.908349 containerd[1440]: time="2025-01-30T12:47:56.908338414Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:47:57.460076 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb-rootfs.mount: Deactivated successfully. 
Jan 30 12:47:57.590376 kubelet[1738]: E0130 12:47:57.590328 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:57.775132 kubelet[1738]: E0130 12:47:57.775102 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:57.777146 containerd[1440]: time="2025-01-30T12:47:57.777107270Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:47:57.794689 containerd[1440]: time="2025-01-30T12:47:57.794597171Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\"" Jan 30 12:47:57.795399 containerd[1440]: time="2025-01-30T12:47:57.795176314Z" level=info msg="StartContainer for \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\"" Jan 30 12:47:57.830934 systemd[1]: Started cri-containerd-97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd.scope - libcontainer container 97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd. Jan 30 12:47:57.856504 containerd[1440]: time="2025-01-30T12:47:57.855350548Z" level=info msg="StartContainer for \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\" returns successfully" Jan 30 12:47:57.891602 systemd[1]: cri-containerd-97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd.scope: Deactivated successfully. Jan 30 12:47:57.921802 containerd[1440]: time="2025-01-30T12:47:57.921722793Z" level=info msg="shim disconnected" id=97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd namespace=k8s.io Jan 30 12:47:57.922345 containerd[1440]: time="2025-01-30T12:47:57.922183690Z" level=warning msg="cleaning up after shim disconnected" id=97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd namespace=k8s.io Jan 30 12:47:57.922345 containerd[1440]: time="2025-01-30T12:47:57.922200560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:47:58.459884 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd-rootfs.mount: Deactivated successfully. 
Jan 30 12:47:58.590973 kubelet[1738]: E0130 12:47:58.590925 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:58.778963 kubelet[1738]: E0130 12:47:58.778516 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:58.781656 containerd[1440]: time="2025-01-30T12:47:58.780365233Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:47:58.801584 containerd[1440]: time="2025-01-30T12:47:58.801421393Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\"" Jan 30 12:47:58.802332 containerd[1440]: time="2025-01-30T12:47:58.802296497Z" level=info msg="StartContainer for \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\"" Jan 30 12:47:58.831981 systemd[1]: Started cri-containerd-2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45.scope - libcontainer container 2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45. Jan 30 12:47:58.854942 systemd[1]: cri-containerd-2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45.scope: Deactivated successfully. Jan 30 12:47:58.856745 containerd[1440]: time="2025-01-30T12:47:58.856594049Z" level=info msg="StartContainer for \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\" returns successfully" Jan 30 12:47:58.879548 containerd[1440]: time="2025-01-30T12:47:58.879306663Z" level=info msg="shim disconnected" id=2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45 namespace=k8s.io Jan 30 12:47:58.879548 containerd[1440]: time="2025-01-30T12:47:58.879379911Z" level=warning msg="cleaning up after shim disconnected" id=2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45 namespace=k8s.io Jan 30 12:47:58.879548 containerd[1440]: time="2025-01-30T12:47:58.879400856Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:47:59.459950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45-rootfs.mount: Deactivated successfully. 
Jan 30 12:47:59.591348 kubelet[1738]: E0130 12:47:59.591298 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:47:59.782919 kubelet[1738]: E0130 12:47:59.782886 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:47:59.785135 containerd[1440]: time="2025-01-30T12:47:59.785094918Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:47:59.855649 containerd[1440]: time="2025-01-30T12:47:59.855587233Z" level=info msg="CreateContainer within sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\"" Jan 30 12:47:59.856159 containerd[1440]: time="2025-01-30T12:47:59.856130267Z" level=info msg="StartContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\"" Jan 30 12:47:59.886091 systemd[1]: Started cri-containerd-d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0.scope - libcontainer container d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0. Jan 30 12:47:59.919429 containerd[1440]: time="2025-01-30T12:47:59.919060256Z" level=info msg="StartContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" returns successfully" Jan 30 12:48:00.002256 kubelet[1738]: I0130 12:48:00.002224 1738 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 12:48:00.591708 kubelet[1738]: E0130 12:48:00.591655 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:00.680827 kernel: Initializing XFRM netlink socket Jan 30 12:48:00.787086 kubelet[1738]: E0130 12:48:00.786795 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:00.808980 kubelet[1738]: I0130 12:48:00.808916 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kf2gd" podStartSLOduration=8.177142884 podStartE2EDuration="13.808883774s" podCreationTimestamp="2025-01-30 12:47:47 +0000 UTC" firstStartedPulling="2025-01-30 12:47:50.818877439 +0000 UTC m=+3.904082448" lastFinishedPulling="2025-01-30 12:47:56.450618369 +0000 UTC m=+9.535823338" observedRunningTime="2025-01-30 12:48:00.807358236 +0000 UTC m=+13.892563245" watchObservedRunningTime="2025-01-30 12:48:00.808883774 +0000 UTC m=+13.894088743" Jan 30 12:48:01.592494 kubelet[1738]: E0130 12:48:01.592431 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:01.788075 kubelet[1738]: E0130 12:48:01.788026 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:02.359552 systemd-networkd[1377]: cilium_host: Link UP Jan 30 12:48:02.359762 systemd-networkd[1377]: cilium_net: Link UP Jan 30 12:48:02.360025 systemd-networkd[1377]: cilium_net: Gained carrier Jan 30 12:48:02.360231 systemd-networkd[1377]: cilium_host: Gained carrier 
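The preceding entries record the Cilium init chain starting in order: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, and finally the long-running cilium-agent. A minimal sketch for rebuilding that timeline from a saved copy of this journal, assuming it has been exported one entry per line (for example with journalctl -o short-precise) under the hypothetical file name node.journal.txt:

    #!/usr/bin/env python3
    # Rebuild the container start timeline from journal text like the entries above.
    import re
    from datetime import datetime

    START = re.compile(
        r'^(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) .*'
        r'msg="StartContainer for \\"(?P<cid>[0-9a-f]{64})\\" returns successfully"'
    )

    def start_events(path="node.journal.txt", year=2025):
        # assumes one journal entry per line, as in journalctl -o short-precise output
        events = []
        with open(path) as f:
            for line in f:
                m = START.search(line)
                if m:
                    ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
                    events.append((ts, m["cid"][:12]))
        return sorted(events)

    if __name__ == "__main__":
        for ts, cid in start_events():
            print(ts.isoformat(timespec="microseconds"), cid)

Container IDs are shortened to 12 characters in the output purely for readability, mirroring the abbreviated form most CLI tools print.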
Jan 30 12:48:02.459928 systemd-networkd[1377]: cilium_vxlan: Link UP Jan 30 12:48:02.459940 systemd-networkd[1377]: cilium_vxlan: Gained carrier Jan 30 12:48:02.496929 systemd-networkd[1377]: cilium_net: Gained IPv6LL Jan 30 12:48:02.593162 kubelet[1738]: E0130 12:48:02.593105 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:02.770779 kernel: NET: Registered PF_ALG protocol family Jan 30 12:48:02.789996 kubelet[1738]: E0130 12:48:02.789953 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:03.016909 systemd-networkd[1377]: cilium_host: Gained IPv6LL Jan 30 12:48:03.333557 systemd-networkd[1377]: lxc_health: Link UP Jan 30 12:48:03.344394 systemd-networkd[1377]: lxc_health: Gained carrier Jan 30 12:48:03.594172 kubelet[1738]: E0130 12:48:03.594057 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:03.945531 kubelet[1738]: E0130 12:48:03.945143 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:03.974770 systemd[1]: Created slice kubepods-besteffort-pod6a47e639_102a_4cf6_b13d_effc77fe5cdc.slice - libcontainer container kubepods-besteffort-pod6a47e639_102a_4cf6_b13d_effc77fe5cdc.slice. Jan 30 12:48:04.009262 kubelet[1738]: I0130 12:48:04.009202 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9kkw\" (UniqueName: \"kubernetes.io/projected/6a47e639-102a-4cf6-b13d-effc77fe5cdc-kube-api-access-s9kkw\") pod \"nginx-deployment-7fcdb87857-l76lh\" (UID: \"6a47e639-102a-4cf6-b13d-effc77fe5cdc\") " pod="default/nginx-deployment-7fcdb87857-l76lh" Jan 30 12:48:04.040962 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Jan 30 12:48:04.277948 containerd[1440]: time="2025-01-30T12:48:04.277908718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-l76lh,Uid:6a47e639-102a-4cf6-b13d-effc77fe5cdc,Namespace:default,Attempt:0,}" Jan 30 12:48:04.352378 systemd-networkd[1377]: lxc80576251dc8d: Link UP Jan 30 12:48:04.361767 kernel: eth0: renamed from tmpd85af Jan 30 12:48:04.368628 systemd-networkd[1377]: lxc80576251dc8d: Gained carrier Jan 30 12:48:04.595162 kubelet[1738]: E0130 12:48:04.595023 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:04.792793 kubelet[1738]: E0130 12:48:04.792438 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:05.321200 systemd-networkd[1377]: lxc_health: Gained IPv6LL Jan 30 12:48:05.512971 systemd-networkd[1377]: lxc80576251dc8d: Gained IPv6LL Jan 30 12:48:05.595707 kubelet[1738]: E0130 12:48:05.595593 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:05.797758 kubelet[1738]: E0130 12:48:05.796386 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:06.596348 kubelet[1738]: E0130 12:48:06.596302 1738 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:07.583861 kubelet[1738]: E0130 12:48:07.583806 1738 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:07.597267 kubelet[1738]: E0130 12:48:07.597234 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:07.927338 containerd[1440]: time="2025-01-30T12:48:07.927059921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:48:07.927338 containerd[1440]: time="2025-01-30T12:48:07.927107563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:48:07.927338 containerd[1440]: time="2025-01-30T12:48:07.927119194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:07.927338 containerd[1440]: time="2025-01-30T12:48:07.927199291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:07.944923 systemd[1]: Started cri-containerd-d85affa39e1af565bbbf9b632c9dbfb2328bc788cd98fb6b53ea9383eca2c8a5.scope - libcontainer container d85affa39e1af565bbbf9b632c9dbfb2328bc788cd98fb6b53ea9383eca2c8a5. Jan 30 12:48:07.954306 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:48:07.970387 containerd[1440]: time="2025-01-30T12:48:07.970349198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-l76lh,Uid:6a47e639-102a-4cf6-b13d-effc77fe5cdc,Namespace:default,Attempt:0,} returns sandbox id \"d85affa39e1af565bbbf9b632c9dbfb2328bc788cd98fb6b53ea9383eca2c8a5\"" Jan 30 12:48:07.971627 containerd[1440]: time="2025-01-30T12:48:07.971599374Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 12:48:08.598023 kubelet[1738]: E0130 12:48:08.597978 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:09.598928 kubelet[1738]: E0130 12:48:09.598881 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:09.603316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1522464653.mount: Deactivated successfully. 
Jan 30 12:48:10.325438 containerd[1440]: time="2025-01-30T12:48:10.325376765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:10.327216 containerd[1440]: time="2025-01-30T12:48:10.327167165Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 30 12:48:10.328332 containerd[1440]: time="2025-01-30T12:48:10.328307729Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:10.330939 containerd[1440]: time="2025-01-30T12:48:10.330901073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:10.332113 containerd[1440]: time="2025-01-30T12:48:10.332076403Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.360439687s" Jan 30 12:48:10.332113 containerd[1440]: time="2025-01-30T12:48:10.332112689Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 12:48:10.334280 containerd[1440]: time="2025-01-30T12:48:10.334247111Z" level=info msg="CreateContainer within sandbox \"d85affa39e1af565bbbf9b632c9dbfb2328bc788cd98fb6b53ea9383eca2c8a5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 12:48:10.344451 containerd[1440]: time="2025-01-30T12:48:10.344406327Z" level=info msg="CreateContainer within sandbox \"d85affa39e1af565bbbf9b632c9dbfb2328bc788cd98fb6b53ea9383eca2c8a5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"6e9a3834b3ab10f648e54979c44d8df7679684d01d097c10ea2e5b1e933a77ab\"" Jan 30 12:48:10.345125 containerd[1440]: time="2025-01-30T12:48:10.345097530Z" level=info msg="StartContainer for \"6e9a3834b3ab10f648e54979c44d8df7679684d01d097c10ea2e5b1e933a77ab\"" Jan 30 12:48:10.369935 systemd[1]: Started cri-containerd-6e9a3834b3ab10f648e54979c44d8df7679684d01d097c10ea2e5b1e933a77ab.scope - libcontainer container 6e9a3834b3ab10f648e54979c44d8df7679684d01d097c10ea2e5b1e933a77ab. 
Jan 30 12:48:10.390006 containerd[1440]: time="2025-01-30T12:48:10.389964150Z" level=info msg="StartContainer for \"6e9a3834b3ab10f648e54979c44d8df7679684d01d097c10ea2e5b1e933a77ab\" returns successfully" Jan 30 12:48:10.599778 kubelet[1738]: E0130 12:48:10.599652 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:11.600416 kubelet[1738]: E0130 12:48:11.600368 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:12.600984 kubelet[1738]: E0130 12:48:12.600940 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:13.601523 kubelet[1738]: E0130 12:48:13.601472 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:14.601802 kubelet[1738]: E0130 12:48:14.601712 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:15.602524 kubelet[1738]: E0130 12:48:15.602475 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:15.785651 kubelet[1738]: I0130 12:48:15.785575 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-l76lh" podStartSLOduration=10.423733089 podStartE2EDuration="12.785545047s" podCreationTimestamp="2025-01-30 12:48:03 +0000 UTC" firstStartedPulling="2025-01-30 12:48:07.971360841 +0000 UTC m=+21.056565850" lastFinishedPulling="2025-01-30 12:48:10.333172799 +0000 UTC m=+23.418377808" observedRunningTime="2025-01-30 12:48:10.837229092 +0000 UTC m=+23.922434101" watchObservedRunningTime="2025-01-30 12:48:15.785545047 +0000 UTC m=+28.870750056" Jan 30 12:48:15.791599 systemd[1]: Created slice kubepods-besteffort-pod5d171c4b_e40c_4932_9e67_31b4c70441c9.slice - libcontainer container kubepods-besteffort-pod5d171c4b_e40c_4932_9e67_31b4c70441c9.slice. Jan 30 12:48:15.875863 kubelet[1738]: I0130 12:48:15.875700 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7tc\" (UniqueName: \"kubernetes.io/projected/5d171c4b-e40c-4932-9e67-31b4c70441c9-kube-api-access-th7tc\") pod \"nfs-server-provisioner-0\" (UID: \"5d171c4b-e40c-4932-9e67-31b4c70441c9\") " pod="default/nfs-server-provisioner-0" Jan 30 12:48:15.875863 kubelet[1738]: I0130 12:48:15.875774 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/5d171c4b-e40c-4932-9e67-31b4c70441c9-data\") pod \"nfs-server-provisioner-0\" (UID: \"5d171c4b-e40c-4932-9e67-31b4c70441c9\") " pod="default/nfs-server-provisioner-0" Jan 30 12:48:16.095081 containerd[1440]: time="2025-01-30T12:48:16.095030965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5d171c4b-e40c-4932-9e67-31b4c70441c9,Namespace:default,Attempt:0,}" Jan 30 12:48:16.126711 systemd-networkd[1377]: lxce79dc6e6238c: Link UP Jan 30 12:48:16.136793 kernel: eth0: renamed from tmpabab8 Jan 30 12:48:16.139771 systemd-networkd[1377]: lxce79dc6e6238c: Gained carrier Jan 30 12:48:16.343935 containerd[1440]: time="2025-01-30T12:48:16.343785540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:48:16.344606 containerd[1440]: time="2025-01-30T12:48:16.344556478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:48:16.345051 containerd[1440]: time="2025-01-30T12:48:16.344596243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:16.345213 containerd[1440]: time="2025-01-30T12:48:16.345179118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:16.367966 systemd[1]: Started cri-containerd-abab8ad7b86e5d016d17d2b5ae1097338fb325e89c3e71c6faefc6ade898f659.scope - libcontainer container abab8ad7b86e5d016d17d2b5ae1097338fb325e89c3e71c6faefc6ade898f659. Jan 30 12:48:16.379876 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:48:16.399064 containerd[1440]: time="2025-01-30T12:48:16.399027716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:5d171c4b-e40c-4932-9e67-31b4c70441c9,Namespace:default,Attempt:0,} returns sandbox id \"abab8ad7b86e5d016d17d2b5ae1097338fb325e89c3e71c6faefc6ade898f659\"" Jan 30 12:48:16.400782 containerd[1440]: time="2025-01-30T12:48:16.400676447Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 12:48:16.603324 kubelet[1738]: E0130 12:48:16.603273 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:17.604028 kubelet[1738]: E0130 12:48:17.603990 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:17.864914 systemd-networkd[1377]: lxce79dc6e6238c: Gained IPv6LL Jan 30 12:48:18.163236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260277988.mount: Deactivated successfully. 
Jan 30 12:48:18.604865 kubelet[1738]: E0130 12:48:18.604818 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:19.602927 containerd[1440]: time="2025-01-30T12:48:19.602867639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:19.603942 containerd[1440]: time="2025-01-30T12:48:19.603746255Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 30 12:48:19.604708 containerd[1440]: time="2025-01-30T12:48:19.604625590Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:19.605953 kubelet[1738]: E0130 12:48:19.605873 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:19.607479 containerd[1440]: time="2025-01-30T12:48:19.607431096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:19.609596 containerd[1440]: time="2025-01-30T12:48:19.609465477Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.208745625s" Jan 30 12:48:19.609596 containerd[1440]: time="2025-01-30T12:48:19.609506281Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 30 12:48:19.611877 containerd[1440]: time="2025-01-30T12:48:19.611700000Z" level=info msg="CreateContainer within sandbox \"abab8ad7b86e5d016d17d2b5ae1097338fb325e89c3e71c6faefc6ade898f659\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 12:48:19.623216 containerd[1440]: time="2025-01-30T12:48:19.623172207Z" level=info msg="CreateContainer within sandbox \"abab8ad7b86e5d016d17d2b5ae1097338fb325e89c3e71c6faefc6ade898f659\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"2391b3fd107787b7cd31b07d1e85ef3b0fc02744a09e2ea71017b5c4ce03312e\"" Jan 30 12:48:19.623684 containerd[1440]: time="2025-01-30T12:48:19.623625897Z" level=info msg="StartContainer for \"2391b3fd107787b7cd31b07d1e85ef3b0fc02744a09e2ea71017b5c4ce03312e\"" Jan 30 12:48:19.696922 systemd[1]: Started cri-containerd-2391b3fd107787b7cd31b07d1e85ef3b0fc02744a09e2ea71017b5c4ce03312e.scope - libcontainer container 2391b3fd107787b7cd31b07d1e85ef3b0fc02744a09e2ea71017b5c4ce03312e. 
Jan 30 12:48:19.758100 containerd[1440]: time="2025-01-30T12:48:19.757969946Z" level=info msg="StartContainer for \"2391b3fd107787b7cd31b07d1e85ef3b0fc02744a09e2ea71017b5c4ce03312e\" returns successfully" Jan 30 12:48:20.610064 kubelet[1738]: E0130 12:48:20.606475 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:21.606636 kubelet[1738]: E0130 12:48:21.606582 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:22.606882 kubelet[1738]: E0130 12:48:22.606831 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:23.118104 update_engine[1431]: I20250130 12:48:23.118011 1431 update_attempter.cc:509] Updating boot flags... Jan 30 12:48:23.145850 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3138) Jan 30 12:48:23.165185 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3137) Jan 30 12:48:23.186761 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3137) Jan 30 12:48:23.607594 kubelet[1738]: E0130 12:48:23.607535 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:24.608599 kubelet[1738]: E0130 12:48:24.608537 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:25.609472 kubelet[1738]: E0130 12:48:25.609413 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:26.612451 kubelet[1738]: E0130 12:48:26.610285 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:27.583807 kubelet[1738]: E0130 12:48:27.583715 1738 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:27.611191 kubelet[1738]: E0130 12:48:27.611122 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:28.612216 kubelet[1738]: E0130 12:48:28.612155 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:29.612932 kubelet[1738]: E0130 12:48:29.612827 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:29.802589 kubelet[1738]: I0130 12:48:29.799014 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.589190671 podStartE2EDuration="14.798995816s" podCreationTimestamp="2025-01-30 12:48:15 +0000 UTC" firstStartedPulling="2025-01-30 12:48:16.400430335 +0000 UTC m=+29.485635344" lastFinishedPulling="2025-01-30 12:48:19.61023548 +0000 UTC m=+32.695440489" observedRunningTime="2025-01-30 12:48:19.847944331 +0000 UTC m=+32.933149340" watchObservedRunningTime="2025-01-30 12:48:29.798995816 +0000 UTC m=+42.884200825" Jan 30 12:48:29.814385 systemd[1]: Created slice kubepods-besteffort-podeaf17e25_ee21_4c8d_ad90_0e23a48b629a.slice - libcontainer container kubepods-besteffort-podeaf17e25_ee21_4c8d_ad90_0e23a48b629a.slice. 
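The pod_startup_latency_tracker entries above carry everything needed to recompute the image pull time from the monotonic m=+ offsets. A minimal sketch using the nfs-server-provisioner-0 entry pasted from this log; the ~3.21s it yields closely tracks the 3.208745625s that containerd reports for the same pull:

    #!/usr/bin/env python3
    # Recompute pull time from the kubelet pod_startup_latency_tracker fields above.
    import re

    ENTRY = (
        'pod="default/nfs-server-provisioner-0" '
        'firstStartedPulling="2025-01-30 12:48:16.400430335 +0000 UTC m=+29.485635344" '
        'lastFinishedPulling="2025-01-30 12:48:19.61023548 +0000 UTC m=+32.695440489"'
    )

    def monotonic_offset(value):
        # the m=+N suffix is kubelet's monotonic clock reading, in seconds
        return float(re.search(r"m=\+([\d.]+)", value).group(1))

    def pull_seconds(entry):
        fields = dict(re.findall(r'(\w+)="([^"]*)"', entry))
        return (monotonic_offset(fields["lastFinishedPulling"])
                - monotonic_offset(fields["firstStartedPulling"]))

    if __name__ == "__main__":
        print(f"nfs-provisioner image pull took ~{pull_seconds(ENTRY):.3f}s")

podStartE2EDuration in the same entry spans pod creation to observed-running, which is why it (14.8s here) is always longer than the pull alone.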
Jan 30 12:48:29.865676 kubelet[1738]: I0130 12:48:29.865544 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b0854c1b-1d32-43cc-b935-15217586cd33\" (UniqueName: \"kubernetes.io/nfs/eaf17e25-ee21-4c8d-ad90-0e23a48b629a-pvc-b0854c1b-1d32-43cc-b935-15217586cd33\") pod \"test-pod-1\" (UID: \"eaf17e25-ee21-4c8d-ad90-0e23a48b629a\") " pod="default/test-pod-1" Jan 30 12:48:29.865676 kubelet[1738]: I0130 12:48:29.865604 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t884k\" (UniqueName: \"kubernetes.io/projected/eaf17e25-ee21-4c8d-ad90-0e23a48b629a-kube-api-access-t884k\") pod \"test-pod-1\" (UID: \"eaf17e25-ee21-4c8d-ad90-0e23a48b629a\") " pod="default/test-pod-1" Jan 30 12:48:30.009847 kernel: FS-Cache: Loaded Jan 30 12:48:30.041350 kernel: RPC: Registered named UNIX socket transport module. Jan 30 12:48:30.041468 kernel: RPC: Registered udp transport module. Jan 30 12:48:30.041487 kernel: RPC: Registered tcp transport module. Jan 30 12:48:30.041589 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 12:48:30.041606 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 12:48:30.235791 kernel: NFS: Registering the id_resolver key type Jan 30 12:48:30.235917 kernel: Key type id_resolver registered Jan 30 12:48:30.235973 kernel: Key type id_legacy registered Jan 30 12:48:30.267999 nfsidmap[3164]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 12:48:30.272331 nfsidmap[3167]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 12:48:30.421638 containerd[1440]: time="2025-01-30T12:48:30.421222278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:eaf17e25-ee21-4c8d-ad90-0e23a48b629a,Namespace:default,Attempt:0,}" Jan 30 12:48:30.453072 systemd-networkd[1377]: lxcdab29e29c88f: Link UP Jan 30 12:48:30.465817 kernel: eth0: renamed from tmp59ba5 Jan 30 12:48:30.475459 systemd-networkd[1377]: lxcdab29e29c88f: Gained carrier Jan 30 12:48:30.613282 kubelet[1738]: E0130 12:48:30.613158 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:30.689310 containerd[1440]: time="2025-01-30T12:48:30.689063584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:48:30.689310 containerd[1440]: time="2025-01-30T12:48:30.689142829Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:48:30.689310 containerd[1440]: time="2025-01-30T12:48:30.689154230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:30.689499 containerd[1440]: time="2025-01-30T12:48:30.689295359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:30.721976 systemd[1]: Started cri-containerd-59ba5b52ac6ab692cd6177f31a9a0bc87298a325265b0fcadcea06101bea6926.scope - libcontainer container 59ba5b52ac6ab692cd6177f31a9a0bc87298a325265b0fcadcea06101bea6926. 
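The nfsidmap warnings above fire because the NFS principal's domain (nfs-server-provisioner.default.svc.cluster.local) does not match the node's id-mapping domain, which falls back to 'localdomain' when nothing is configured. A rough sketch of that comparison, assuming the conventional /etc/idmapd.conf location and simplifying the match to a suffix check; the fallback value mirrors the warning text rather than idmapd's actual resolution logic:

    #!/usr/bin/env python3
    # Rough sketch of the domain comparison behind the nfsidmap warnings above.
    import configparser

    IDMAPD_CONF = "/etc/idmapd.conf"  # assumed conventional location
    PRINCIPAL = "root@nfs-server-provisioner.default.svc.cluster.local"  # from the log

    def local_domain(path=IDMAPD_CONF, fallback="localdomain"):
        cp = configparser.ConfigParser()
        cp.read(path)  # a missing file is simply skipped
        # fallback mirrors the 'localdomain' seen in the warning; real idmapd derives it differently
        return cp["General"].get("Domain", fallback) if "General" in cp else fallback

    def maps_into_domain(principal, domain):
        # simplified: treat "maps into" as a domain-suffix match
        _, _, principal_domain = principal.partition("@")
        return principal_domain.endswith(domain)

    if __name__ == "__main__":
        dom = local_domain()
        print(f"domain={dom!r} principal maps: {maps_into_domain(PRINCIPAL, dom)}")  # False reproduces the warning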
Jan 30 12:48:30.734140 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:48:30.769693 containerd[1440]: time="2025-01-30T12:48:30.769637986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:eaf17e25-ee21-4c8d-ad90-0e23a48b629a,Namespace:default,Attempt:0,} returns sandbox id \"59ba5b52ac6ab692cd6177f31a9a0bc87298a325265b0fcadcea06101bea6926\"" Jan 30 12:48:30.771059 containerd[1440]: time="2025-01-30T12:48:30.771020354Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 12:48:31.046642 containerd[1440]: time="2025-01-30T12:48:31.046570607Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:31.048913 containerd[1440]: time="2025-01-30T12:48:31.048859186Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 12:48:31.052267 containerd[1440]: time="2025-01-30T12:48:31.052155426Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 281.096391ms" Jan 30 12:48:31.052267 containerd[1440]: time="2025-01-30T12:48:31.052203629Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 12:48:31.054566 containerd[1440]: time="2025-01-30T12:48:31.054385522Z" level=info msg="CreateContainer within sandbox \"59ba5b52ac6ab692cd6177f31a9a0bc87298a325265b0fcadcea06101bea6926\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 12:48:31.088311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195381999.mount: Deactivated successfully. Jan 30 12:48:31.092518 containerd[1440]: time="2025-01-30T12:48:31.092469479Z" level=info msg="CreateContainer within sandbox \"59ba5b52ac6ab692cd6177f31a9a0bc87298a325265b0fcadcea06101bea6926\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a8678115bb59a62ff60443f087218aeab7a0faadb0edefa11e61242f165e7151\"" Jan 30 12:48:31.093248 containerd[1440]: time="2025-01-30T12:48:31.093215604Z" level=info msg="StartContainer for \"a8678115bb59a62ff60443f087218aeab7a0faadb0edefa11e61242f165e7151\"" Jan 30 12:48:31.124604 systemd[1]: Started cri-containerd-a8678115bb59a62ff60443f087218aeab7a0faadb0edefa11e61242f165e7151.scope - libcontainer container a8678115bb59a62ff60443f087218aeab7a0faadb0edefa11e61242f165e7151. 
Jan 30 12:48:31.160101 containerd[1440]: time="2025-01-30T12:48:31.160006588Z" level=info msg="StartContainer for \"a8678115bb59a62ff60443f087218aeab7a0faadb0edefa11e61242f165e7151\" returns successfully" Jan 30 12:48:31.614307 kubelet[1738]: E0130 12:48:31.614247 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:31.899076 kubelet[1738]: I0130 12:48:31.898908 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.616386181 podStartE2EDuration="15.898891259s" podCreationTimestamp="2025-01-30 12:48:16 +0000 UTC" firstStartedPulling="2025-01-30 12:48:30.770479839 +0000 UTC m=+43.855684848" lastFinishedPulling="2025-01-30 12:48:31.052984917 +0000 UTC m=+44.138189926" observedRunningTime="2025-01-30 12:48:31.898474313 +0000 UTC m=+44.983679642" watchObservedRunningTime="2025-01-30 12:48:31.898891259 +0000 UTC m=+44.984096268" Jan 30 12:48:32.456996 systemd-networkd[1377]: lxcdab29e29c88f: Gained IPv6LL Jan 30 12:48:32.615466 kubelet[1738]: E0130 12:48:32.615411 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:33.616614 kubelet[1738]: E0130 12:48:33.616558 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:34.616790 kubelet[1738]: E0130 12:48:34.616721 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:35.617672 kubelet[1738]: E0130 12:48:35.617618 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:36.617873 kubelet[1738]: E0130 12:48:36.617824 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:37.618531 kubelet[1738]: E0130 12:48:37.618479 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:38.619528 kubelet[1738]: E0130 12:48:38.619475 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:39.619922 kubelet[1738]: E0130 12:48:39.619799 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:39.669723 containerd[1440]: time="2025-01-30T12:48:39.669669361Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 12:48:39.676262 containerd[1440]: time="2025-01-30T12:48:39.676082405Z" level=info msg="StopContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" with timeout 2 (s)" Jan 30 12:48:39.676667 containerd[1440]: time="2025-01-30T12:48:39.676490063Z" level=info msg="Stop container \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" with signal terminated" Jan 30 12:48:39.692581 systemd-networkd[1377]: lxc_health: Link DOWN Jan 30 12:48:39.692589 systemd-networkd[1377]: lxc_health: Lost carrier Jan 30 12:48:39.716459 systemd[1]: cri-containerd-d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0.scope: Deactivated successfully. 
Jan 30 12:48:39.716921 systemd[1]: cri-containerd-d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0.scope: Consumed 6.880s CPU time. Jan 30 12:48:39.734247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0-rootfs.mount: Deactivated successfully. Jan 30 12:48:39.746236 containerd[1440]: time="2025-01-30T12:48:39.745944060Z" level=info msg="shim disconnected" id=d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0 namespace=k8s.io Jan 30 12:48:39.746416 containerd[1440]: time="2025-01-30T12:48:39.746241514Z" level=warning msg="cleaning up after shim disconnected" id=d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0 namespace=k8s.io Jan 30 12:48:39.746416 containerd[1440]: time="2025-01-30T12:48:39.746329638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:39.765951 containerd[1440]: time="2025-01-30T12:48:39.765901825Z" level=info msg="StopContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" returns successfully" Jan 30 12:48:39.766605 containerd[1440]: time="2025-01-30T12:48:39.766578575Z" level=info msg="StopPodSandbox for \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\"" Jan 30 12:48:39.766647 containerd[1440]: time="2025-01-30T12:48:39.766624257Z" level=info msg="Container to stop \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:48:39.766647 containerd[1440]: time="2025-01-30T12:48:39.766638858Z" level=info msg="Container to stop \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:48:39.766705 containerd[1440]: time="2025-01-30T12:48:39.766649138Z" level=info msg="Container to stop \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:48:39.766705 containerd[1440]: time="2025-01-30T12:48:39.766660578Z" level=info msg="Container to stop \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:48:39.766705 containerd[1440]: time="2025-01-30T12:48:39.766670659Z" level=info msg="Container to stop \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 12:48:39.768204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172-shm.mount: Deactivated successfully. Jan 30 12:48:39.775191 systemd[1]: cri-containerd-7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172.scope: Deactivated successfully. Jan 30 12:48:39.794319 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172-rootfs.mount: Deactivated successfully. 
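The scope teardown above reports 6.880s of CPU time consumed by the cilium-agent container. A small arithmetic sketch putting that figure against the container's wall-clock lifetime, using the StartContainer and scope-deactivation timestamps copied from this log:

    #!/usr/bin/env python3
    # Put the agent's "Consumed 6.880s CPU time" in context using timestamps from this log.
    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"
    started  = datetime.strptime("Jan 30 12:47:59.919429", FMT)  # StartContainer returned successfully
    stopped  = datetime.strptime("Jan 30 12:48:39.716921", FMT)  # cri-containerd scope deactivated
    cpu_used = 6.880                                             # seconds, reported by systemd

    wall = (stopped - started).total_seconds()
    print(f"ran {wall:.1f}s wall-clock, ~{100 * cpu_used / wall:.0f}% of one CPU")

With the values above this works out to roughly 17% of one CPU over the agent's ~40-second lifetime.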
Jan 30 12:48:39.801790 containerd[1440]: time="2025-01-30T12:48:39.801706851Z" level=info msg="shim disconnected" id=7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172 namespace=k8s.io Jan 30 12:48:39.801790 containerd[1440]: time="2025-01-30T12:48:39.801781695Z" level=warning msg="cleaning up after shim disconnected" id=7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172 namespace=k8s.io Jan 30 12:48:39.801790 containerd[1440]: time="2025-01-30T12:48:39.801790575Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:39.813909 containerd[1440]: time="2025-01-30T12:48:39.813855430Z" level=info msg="TearDown network for sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" successfully" Jan 30 12:48:39.813909 containerd[1440]: time="2025-01-30T12:48:39.813894312Z" level=info msg="StopPodSandbox for \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" returns successfully" Jan 30 12:48:39.900324 kubelet[1738]: I0130 12:48:39.899659 1738 scope.go:117] "RemoveContainer" containerID="d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0" Jan 30 12:48:39.901930 containerd[1440]: time="2025-01-30T12:48:39.901896251Z" level=info msg="RemoveContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\"" Jan 30 12:48:39.906402 containerd[1440]: time="2025-01-30T12:48:39.906367929Z" level=info msg="RemoveContainer for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" returns successfully" Jan 30 12:48:39.906659 kubelet[1738]: I0130 12:48:39.906636 1738 scope.go:117] "RemoveContainer" containerID="2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45" Jan 30 12:48:39.907709 containerd[1440]: time="2025-01-30T12:48:39.907682188Z" level=info msg="RemoveContainer for \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\"" Jan 30 12:48:39.910167 containerd[1440]: time="2025-01-30T12:48:39.910131576Z" level=info msg="RemoveContainer for \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\" returns successfully" Jan 30 12:48:39.910405 kubelet[1738]: I0130 12:48:39.910362 1738 scope.go:117] "RemoveContainer" containerID="97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd" Jan 30 12:48:39.911501 containerd[1440]: time="2025-01-30T12:48:39.911476196Z" level=info msg="RemoveContainer for \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\"" Jan 30 12:48:39.917294 containerd[1440]: time="2025-01-30T12:48:39.917236691Z" level=info msg="RemoveContainer for \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\" returns successfully" Jan 30 12:48:39.917537 kubelet[1738]: I0130 12:48:39.917501 1738 scope.go:117] "RemoveContainer" containerID="03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783" Jan 30 12:48:39.918701 containerd[1440]: time="2025-01-30T12:48:39.918673795Z" level=info msg="RemoveContainer for \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\"" Jan 30 12:48:39.929784 kubelet[1738]: I0130 12:48:39.929723 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-lib-modules\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.929784 kubelet[1738]: I0130 12:48:39.929785 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hostproc\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.929784 kubelet[1738]: I0130 12:48:39.929803 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-etc-cni-netd\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929817 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cni-path\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929836 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-kernel\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929861 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-cgroup\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929883 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hubble-tls\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929899 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-run\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930523 kubelet[1738]: I0130 12:48:39.929917 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k695m\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-kube-api-access-k695m\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930691 kubelet[1738]: I0130 12:48:39.929907 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hostproc" (OuterVolumeSpecName: "hostproc") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.930691 kubelet[1738]: I0130 12:48:39.929939 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-clustermesh-secrets\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930691 kubelet[1738]: I0130 12:48:39.929955 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-net\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930691 kubelet[1738]: I0130 12:48:39.929962 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.930691 kubelet[1738]: I0130 12:48:39.929970 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-xtables-lock\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.929988 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-config-path\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.930006 1738 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-bpf-maps\") pod \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\" (UID: \"e7a86f44-a9bb-41e9-a2ad-e65ad5422464\") " Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.930033 1738 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hostproc\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.930043 1738 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-lib-modules\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.930080 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.930834 kubelet[1738]: I0130 12:48:39.930104 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). 
InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.930975 containerd[1440]: time="2025-01-30T12:48:39.930790092Z" level=info msg="RemoveContainer for \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\" returns successfully" Jan 30 12:48:39.931015 kubelet[1738]: I0130 12:48:39.930388 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cni-path" (OuterVolumeSpecName: "cni-path") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.931161 kubelet[1738]: I0130 12:48:39.931076 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.931373 kubelet[1738]: I0130 12:48:39.931353 1738 scope.go:117] "RemoveContainer" containerID="4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb" Jan 30 12:48:39.931558 kubelet[1738]: I0130 12:48:39.931539 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.931689 kubelet[1738]: I0130 12:48:39.931661 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.932105 kubelet[1738]: I0130 12:48:39.931679 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.932105 kubelet[1738]: I0130 12:48:39.931798 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 12:48:39.932699 containerd[1440]: time="2025-01-30T12:48:39.932656774Z" level=info msg="RemoveContainer for \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\"" Jan 30 12:48:39.935214 systemd[1]: var-lib-kubelet-pods-e7a86f44\x2da9bb\x2d41e9\x2da2ad\x2de65ad5422464-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jan 30 12:48:39.935995 containerd[1440]: time="2025-01-30T12:48:39.935382055Z" level=info msg="RemoveContainer for \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\" returns successfully" Jan 30 12:48:39.936035 kubelet[1738]: I0130 12:48:39.935870 1738 scope.go:117] "RemoveContainer" containerID="d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0" Jan 30 12:48:39.936346 containerd[1440]: time="2025-01-30T12:48:39.936092086Z" level=error msg="ContainerStatus for \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\": not found" Jan 30 12:48:39.936417 kubelet[1738]: E0130 12:48:39.936230 1738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\": not found" containerID="d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0" Jan 30 12:48:39.936417 kubelet[1738]: I0130 12:48:39.936257 1738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0"} err="failed to get container status \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\": rpc error: code = NotFound desc = an error occurred when try to find container \"d86397eecdf7eef68350e8b5aa91ab0daf10e179ad6fbf50aacecc8e203375d0\": not found" Jan 30 12:48:39.936417 kubelet[1738]: I0130 12:48:39.936336 1738 scope.go:117] "RemoveContainer" containerID="2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45" Jan 30 12:48:39.936417 kubelet[1738]: I0130 12:48:39.936314 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 12:48:39.937825 kubelet[1738]: I0130 12:48:39.937778 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 12:48:39.937900 containerd[1440]: time="2025-01-30T12:48:39.936678872Z" level=error msg="ContainerStatus for \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\": not found" Jan 30 12:48:39.938090 kubelet[1738]: E0130 12:48:39.938014 1738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\": not found" containerID="2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45" Jan 30 12:48:39.938090 kubelet[1738]: I0130 12:48:39.938029 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-kube-api-access-k695m" (OuterVolumeSpecName: "kube-api-access-k695m") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "kube-api-access-k695m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 12:48:39.938090 kubelet[1738]: I0130 12:48:39.938045 1738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45"} err="failed to get container status \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\": rpc error: code = NotFound desc = an error occurred when try to find container \"2c8fb3ef11b756a65d95cef86ecfbe9364630817f6095550914bc25c90519d45\": not found" Jan 30 12:48:39.938090 kubelet[1738]: I0130 12:48:39.938063 1738 scope.go:117] "RemoveContainer" containerID="97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd" Jan 30 12:48:39.938479 containerd[1440]: time="2025-01-30T12:48:39.938431630Z" level=error msg="ContainerStatus for \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\": not found" Jan 30 12:48:39.938587 kubelet[1738]: E0130 12:48:39.938562 1738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\": not found" containerID="97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd" Jan 30 12:48:39.938629 kubelet[1738]: I0130 12:48:39.938591 1738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd"} err="failed to get container status \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\": rpc error: code = NotFound desc = an error occurred when try to find container \"97ca62cbf19f63cfd9f8c4718168eeda555ce91956b04832ffd11d9b5dbd1ddd\": not found" Jan 30 12:48:39.938629 kubelet[1738]: I0130 12:48:39.938613 1738 scope.go:117] "RemoveContainer" containerID="03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783" Jan 30 12:48:39.938824 containerd[1440]: time="2025-01-30T12:48:39.938795046Z" level=error msg="ContainerStatus for \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\" failed" error="rpc error: code = NotFound desc = an 
error occurred when try to find container \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\": not found" Jan 30 12:48:39.938978 kubelet[1738]: E0130 12:48:39.938948 1738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\": not found" containerID="03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783" Jan 30 12:48:39.939193 kubelet[1738]: I0130 12:48:39.939052 1738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783"} err="failed to get container status \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\": rpc error: code = NotFound desc = an error occurred when try to find container \"03adb28681bc67683218aa40679c944f36f100451c4f95291c140cf5d6096783\": not found" Jan 30 12:48:39.939193 kubelet[1738]: I0130 12:48:39.939110 1738 scope.go:117] "RemoveContainer" containerID="4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb" Jan 30 12:48:39.939319 containerd[1440]: time="2025-01-30T12:48:39.939282908Z" level=error msg="ContainerStatus for \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\": not found" Jan 30 12:48:39.939484 kubelet[1738]: E0130 12:48:39.939395 1738 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\": not found" containerID="4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb" Jan 30 12:48:39.939484 kubelet[1738]: I0130 12:48:39.939417 1738 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb"} err="failed to get container status \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4207ace8d167f931e392884bf0ae13aed908cdf484c7bdb7fd9be9c1510b35eb\": not found" Jan 30 12:48:39.942471 kubelet[1738]: I0130 12:48:39.942431 1738 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e7a86f44-a9bb-41e9-a2ad-e65ad5422464" (UID: "e7a86f44-a9bb-41e9-a2ad-e65ad5422464"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 12:48:40.030756 kubelet[1738]: I0130 12:48:40.030694 1738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k695m\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-kube-api-access-k695m\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030756 kubelet[1738]: I0130 12:48:40.030726 1738 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-clustermesh-secrets\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030756 kubelet[1738]: I0130 12:48:40.030757 1738 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-net\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030756 kubelet[1738]: I0130 12:48:40.030771 1738 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-hubble-tls\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030781 1738 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-run\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030790 1738 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-bpf-maps\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030798 1738 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-xtables-lock\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030806 1738 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-config-path\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030814 1738 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-host-proc-sys-kernel\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030822 1738 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cilium-cgroup\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030830 1738 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-etc-cni-netd\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.030947 kubelet[1738]: I0130 12:48:40.030864 1738 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e7a86f44-a9bb-41e9-a2ad-e65ad5422464-cni-path\") on node \"10.0.0.22\" DevicePath \"\"" Jan 30 12:48:40.205281 systemd[1]: Removed slice kubepods-burstable-pode7a86f44_a9bb_41e9_a2ad_e65ad5422464.slice - libcontainer container kubepods-burstable-pode7a86f44_a9bb_41e9_a2ad_e65ad5422464.slice. 
Jan 30 12:48:40.205614 systemd[1]: kubepods-burstable-pode7a86f44_a9bb_41e9_a2ad_e65ad5422464.slice: Consumed 7.059s CPU time. Jan 30 12:48:40.620630 kubelet[1738]: E0130 12:48:40.620578 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:40.655851 systemd[1]: var-lib-kubelet-pods-e7a86f44\x2da9bb\x2d41e9\x2da2ad\x2de65ad5422464-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk695m.mount: Deactivated successfully. Jan 30 12:48:40.655956 systemd[1]: var-lib-kubelet-pods-e7a86f44\x2da9bb\x2d41e9\x2da2ad\x2de65ad5422464-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 12:48:41.623292 kubelet[1738]: E0130 12:48:41.623220 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:41.753970 kubelet[1738]: I0130 12:48:41.753930 1738 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a86f44-a9bb-41e9-a2ad-e65ad5422464" path="/var/lib/kubelet/pods/e7a86f44-a9bb-41e9-a2ad-e65ad5422464/volumes" Jan 30 12:48:42.624064 kubelet[1738]: E0130 12:48:42.623992 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:42.686036 kubelet[1738]: I0130 12:48:42.685966 1738 memory_manager.go:355] "RemoveStaleState removing state" podUID="e7a86f44-a9bb-41e9-a2ad-e65ad5422464" containerName="cilium-agent" Jan 30 12:48:42.698882 systemd[1]: Created slice kubepods-besteffort-podadd12047_13fd_4723_b4b2_237d16cbc963.slice - libcontainer container kubepods-besteffort-podadd12047_13fd_4723_b4b2_237d16cbc963.slice. Jan 30 12:48:42.707362 systemd[1]: Created slice kubepods-burstable-pod893cda95_765d_4ee2_97f5_d7ce94104688.slice - libcontainer container kubepods-burstable-pod893cda95_765d_4ee2_97f5_d7ce94104688.slice. 
Jan 30 12:48:42.762378 kubelet[1738]: E0130 12:48:42.762287 1738 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:48:42.847150 kubelet[1738]: I0130 12:48:42.847068 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq92x\" (UniqueName: \"kubernetes.io/projected/893cda95-765d-4ee2-97f5-d7ce94104688-kube-api-access-lq92x\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847150 kubelet[1738]: I0130 12:48:42.847119 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/893cda95-765d-4ee2-97f5-d7ce94104688-clustermesh-secrets\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847150 kubelet[1738]: I0130 12:48:42.847137 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-host-proc-sys-kernel\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847150 kubelet[1738]: I0130 12:48:42.847157 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-cilium-run\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847385 kubelet[1738]: I0130 12:48:42.847173 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-bpf-maps\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847385 kubelet[1738]: I0130 12:48:42.847187 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-hostproc\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847385 kubelet[1738]: I0130 12:48:42.847201 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-etc-cni-netd\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847385 kubelet[1738]: I0130 12:48:42.847219 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-host-proc-sys-net\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847385 kubelet[1738]: I0130 12:48:42.847236 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xttwr\" (UniqueName: \"kubernetes.io/projected/add12047-13fd-4723-b4b2-237d16cbc963-kube-api-access-xttwr\") pod 
\"cilium-operator-6c4d7847fc-6tvpv\" (UID: \"add12047-13fd-4723-b4b2-237d16cbc963\") " pod="kube-system/cilium-operator-6c4d7847fc-6tvpv" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847252 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-cni-path\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847268 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/893cda95-765d-4ee2-97f5-d7ce94104688-cilium-config-path\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847296 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/893cda95-765d-4ee2-97f5-d7ce94104688-cilium-ipsec-secrets\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847320 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/893cda95-765d-4ee2-97f5-d7ce94104688-hubble-tls\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847337 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-cilium-cgroup\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847540 kubelet[1738]: I0130 12:48:42.847357 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-lib-modules\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847662 kubelet[1738]: I0130 12:48:42.847388 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/893cda95-765d-4ee2-97f5-d7ce94104688-xtables-lock\") pod \"cilium-5k5n5\" (UID: \"893cda95-765d-4ee2-97f5-d7ce94104688\") " pod="kube-system/cilium-5k5n5" Jan 30 12:48:42.847662 kubelet[1738]: I0130 12:48:42.847404 1738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/add12047-13fd-4723-b4b2-237d16cbc963-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-6tvpv\" (UID: \"add12047-13fd-4723-b4b2-237d16cbc963\") " pod="kube-system/cilium-operator-6c4d7847fc-6tvpv" Jan 30 12:48:43.003393 kubelet[1738]: E0130 12:48:43.003346 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:43.004087 containerd[1440]: time="2025-01-30T12:48:43.004049899Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6tvpv,Uid:add12047-13fd-4723-b4b2-237d16cbc963,Namespace:kube-system,Attempt:0,}" Jan 30 12:48:43.024461 containerd[1440]: time="2025-01-30T12:48:43.024338805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:48:43.024461 containerd[1440]: time="2025-01-30T12:48:43.024421128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:48:43.024461 containerd[1440]: time="2025-01-30T12:48:43.024438049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:43.024719 containerd[1440]: time="2025-01-30T12:48:43.024529692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:43.025574 kubelet[1738]: E0130 12:48:43.025071 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:43.026039 containerd[1440]: time="2025-01-30T12:48:43.026005190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5k5n5,Uid:893cda95-765d-4ee2-97f5-d7ce94104688,Namespace:kube-system,Attempt:0,}" Jan 30 12:48:43.043942 systemd[1]: Started cri-containerd-e0c53a6436597d86367e03f28880d1ffee1d8cea0d16668c22c864389b35b4a1.scope - libcontainer container e0c53a6436597d86367e03f28880d1ffee1d8cea0d16668c22c864389b35b4a1. Jan 30 12:48:43.051591 containerd[1440]: time="2025-01-30T12:48:43.051482857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:48:43.051591 containerd[1440]: time="2025-01-30T12:48:43.051559500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:48:43.051591 containerd[1440]: time="2025-01-30T12:48:43.051572420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:43.051805 containerd[1440]: time="2025-01-30T12:48:43.051730866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:48:43.077121 systemd[1]: Started cri-containerd-4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd.scope - libcontainer container 4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd. 
Jan 30 12:48:43.096524 containerd[1440]: time="2025-01-30T12:48:43.096464080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-6tvpv,Uid:add12047-13fd-4723-b4b2-237d16cbc963,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0c53a6436597d86367e03f28880d1ffee1d8cea0d16668c22c864389b35b4a1\"" Jan 30 12:48:43.097339 kubelet[1738]: E0130 12:48:43.097290 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:43.098316 containerd[1440]: time="2025-01-30T12:48:43.098153425Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:48:43.105746 containerd[1440]: time="2025-01-30T12:48:43.105663116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5k5n5,Uid:893cda95-765d-4ee2-97f5-d7ce94104688,Namespace:kube-system,Attempt:0,} returns sandbox id \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\"" Jan 30 12:48:43.106325 kubelet[1738]: E0130 12:48:43.106303 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:43.108047 containerd[1440]: time="2025-01-30T12:48:43.108006447Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:48:43.119250 containerd[1440]: time="2025-01-30T12:48:43.119183120Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873\"" Jan 30 12:48:43.119862 containerd[1440]: time="2025-01-30T12:48:43.119810144Z" level=info msg="StartContainer for \"0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873\"" Jan 30 12:48:43.154961 systemd[1]: Started cri-containerd-0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873.scope - libcontainer container 0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873. Jan 30 12:48:43.179386 containerd[1440]: time="2025-01-30T12:48:43.176968759Z" level=info msg="StartContainer for \"0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873\" returns successfully" Jan 30 12:48:43.306462 systemd[1]: cri-containerd-0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873.scope: Deactivated successfully. 
Jan 30 12:48:43.339607 containerd[1440]: time="2025-01-30T12:48:43.339544219Z" level=info msg="shim disconnected" id=0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873 namespace=k8s.io Jan 30 12:48:43.339607 containerd[1440]: time="2025-01-30T12:48:43.339600741Z" level=warning msg="cleaning up after shim disconnected" id=0022656647b869b8627e5e386b1cd5e7f2a611bae2853e70c32851e663e3d873 namespace=k8s.io Jan 30 12:48:43.339607 containerd[1440]: time="2025-01-30T12:48:43.339609742Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:43.624932 kubelet[1738]: E0130 12:48:43.624804 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:43.910057 kubelet[1738]: E0130 12:48:43.909334 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:43.911431 containerd[1440]: time="2025-01-30T12:48:43.911383138Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:48:43.930307 containerd[1440]: time="2025-01-30T12:48:43.930243869Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66\"" Jan 30 12:48:43.930814 containerd[1440]: time="2025-01-30T12:48:43.930774809Z" level=info msg="StartContainer for \"03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66\"" Jan 30 12:48:43.959936 systemd[1]: Started cri-containerd-03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66.scope - libcontainer container 03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66. Jan 30 12:48:43.984339 containerd[1440]: time="2025-01-30T12:48:43.984282123Z" level=info msg="StartContainer for \"03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66\" returns successfully" Jan 30 12:48:44.000120 systemd[1]: cri-containerd-03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66.scope: Deactivated successfully. Jan 30 12:48:44.015494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66-rootfs.mount: Deactivated successfully. Jan 30 12:48:44.023803 containerd[1440]: time="2025-01-30T12:48:44.023716544Z" level=info msg="shim disconnected" id=03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66 namespace=k8s.io Jan 30 12:48:44.023803 containerd[1440]: time="2025-01-30T12:48:44.023786466Z" level=warning msg="cleaning up after shim disconnected" id=03bc6b400d9071a8c3e5eb2f1ab8b6bc4c57324023bd87daf80fcf8f2dbfbd66 namespace=k8s.io Jan 30 12:48:44.023803 containerd[1440]: time="2025-01-30T12:48:44.023795627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:44.071260 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163909165.mount: Deactivated successfully. 
Jan 30 12:48:44.465959 containerd[1440]: time="2025-01-30T12:48:44.465866075Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:44.466263 containerd[1440]: time="2025-01-30T12:48:44.466219368Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 12:48:44.467043 containerd[1440]: time="2025-01-30T12:48:44.467008838Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:48:44.468658 containerd[1440]: time="2025-01-30T12:48:44.468613978Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.370411311s" Jan 30 12:48:44.468658 containerd[1440]: time="2025-01-30T12:48:44.468655340Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 12:48:44.471183 containerd[1440]: time="2025-01-30T12:48:44.471142313Z" level=info msg="CreateContainer within sandbox \"e0c53a6436597d86367e03f28880d1ffee1d8cea0d16668c22c864389b35b4a1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 12:48:44.483841 containerd[1440]: time="2025-01-30T12:48:44.483772828Z" level=info msg="CreateContainer within sandbox \"e0c53a6436597d86367e03f28880d1ffee1d8cea0d16668c22c864389b35b4a1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4f7b9fff6e9d0fabfe7f7d87b26e475aa79f286e40ab75a79d0a194385722fff\"" Jan 30 12:48:44.484609 containerd[1440]: time="2025-01-30T12:48:44.484562938Z" level=info msg="StartContainer for \"4f7b9fff6e9d0fabfe7f7d87b26e475aa79f286e40ab75a79d0a194385722fff\"" Jan 30 12:48:44.515936 systemd[1]: Started cri-containerd-4f7b9fff6e9d0fabfe7f7d87b26e475aa79f286e40ab75a79d0a194385722fff.scope - libcontainer container 4f7b9fff6e9d0fabfe7f7d87b26e475aa79f286e40ab75a79d0a194385722fff. 
Jan 30 12:48:44.540261 containerd[1440]: time="2025-01-30T12:48:44.540218388Z" level=info msg="StartContainer for \"4f7b9fff6e9d0fabfe7f7d87b26e475aa79f286e40ab75a79d0a194385722fff\" returns successfully" Jan 30 12:48:44.625331 kubelet[1738]: E0130 12:48:44.625254 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:44.916134 kubelet[1738]: E0130 12:48:44.916089 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:44.918434 containerd[1440]: time="2025-01-30T12:48:44.918292593Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:48:44.918580 kubelet[1738]: E0130 12:48:44.918395 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:44.943187 containerd[1440]: time="2025-01-30T12:48:44.943128126Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180\"" Jan 30 12:48:44.945395 kubelet[1738]: I0130 12:48:44.945325 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-6tvpv" podStartSLOduration=1.573717612 podStartE2EDuration="2.945306407s" podCreationTimestamp="2025-01-30 12:48:42 +0000 UTC" firstStartedPulling="2025-01-30 12:48:43.097800772 +0000 UTC m=+56.183005741" lastFinishedPulling="2025-01-30 12:48:44.469389527 +0000 UTC m=+57.554594536" observedRunningTime="2025-01-30 12:48:44.945307208 +0000 UTC m=+58.030512217" watchObservedRunningTime="2025-01-30 12:48:44.945306407 +0000 UTC m=+58.030511416" Jan 30 12:48:44.945854 containerd[1440]: time="2025-01-30T12:48:44.945812426Z" level=info msg="StartContainer for \"9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180\"" Jan 30 12:48:44.980953 systemd[1]: Started cri-containerd-9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180.scope - libcontainer container 9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180. Jan 30 12:48:45.005959 systemd[1]: cri-containerd-9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180.scope: Deactivated successfully. Jan 30 12:48:45.011223 containerd[1440]: time="2025-01-30T12:48:45.010970183Z" level=info msg="StartContainer for \"9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180\" returns successfully" Jan 30 12:48:45.028118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180-rootfs.mount: Deactivated successfully. 
Jan 30 12:48:45.035621 containerd[1440]: time="2025-01-30T12:48:45.035532519Z" level=info msg="shim disconnected" id=9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180 namespace=k8s.io Jan 30 12:48:45.035621 containerd[1440]: time="2025-01-30T12:48:45.035588841Z" level=warning msg="cleaning up after shim disconnected" id=9fc12365d6e2bc2465ac9bc7e449531a0a76bd975906c02abfbc4f194d1af180 namespace=k8s.io Jan 30 12:48:45.035621 containerd[1440]: time="2025-01-30T12:48:45.035597521Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:45.625791 kubelet[1738]: E0130 12:48:45.625584 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:45.922317 kubelet[1738]: E0130 12:48:45.921974 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:45.922317 kubelet[1738]: E0130 12:48:45.922009 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:45.924541 containerd[1440]: time="2025-01-30T12:48:45.924494173Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:48:45.944205 containerd[1440]: time="2025-01-30T12:48:45.944131609Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612\"" Jan 30 12:48:45.945661 containerd[1440]: time="2025-01-30T12:48:45.944805313Z" level=info msg="StartContainer for \"806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612\"" Jan 30 12:48:45.983958 systemd[1]: Started cri-containerd-806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612.scope - libcontainer container 806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612. Jan 30 12:48:46.008392 systemd[1]: cri-containerd-806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612.scope: Deactivated successfully. Jan 30 12:48:46.010862 containerd[1440]: time="2025-01-30T12:48:46.010709906Z" level=info msg="StartContainer for \"806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612\" returns successfully" Jan 30 12:48:46.027655 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612-rootfs.mount: Deactivated successfully. 
Jan 30 12:48:46.032576 containerd[1440]: time="2025-01-30T12:48:46.032513839Z" level=info msg="shim disconnected" id=806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612 namespace=k8s.io Jan 30 12:48:46.032987 containerd[1440]: time="2025-01-30T12:48:46.032778528Z" level=warning msg="cleaning up after shim disconnected" id=806c32f1120cb3c800ccb704f859a47b010814048097f6f7b00b05ab95a6f612 namespace=k8s.io Jan 30 12:48:46.032987 containerd[1440]: time="2025-01-30T12:48:46.032795089Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:48:46.626135 kubelet[1738]: E0130 12:48:46.626065 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:46.929069 kubelet[1738]: E0130 12:48:46.928963 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:46.932997 containerd[1440]: time="2025-01-30T12:48:46.932945417Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:48:46.949906 containerd[1440]: time="2025-01-30T12:48:46.949863536Z" level=info msg="CreateContainer within sandbox \"4853e018b610afc205d56b02def18e5a86ee4cd98f7b1fb9c95d2416572ce0dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"564f738dd3cb39c34cb64a307ce634222bd1727bd8feb15941ea10272dc3a642\"" Jan 30 12:48:46.950448 containerd[1440]: time="2025-01-30T12:48:46.950386755Z" level=info msg="StartContainer for \"564f738dd3cb39c34cb64a307ce634222bd1727bd8feb15941ea10272dc3a642\"" Jan 30 12:48:46.998951 systemd[1]: Started cri-containerd-564f738dd3cb39c34cb64a307ce634222bd1727bd8feb15941ea10272dc3a642.scope - libcontainer container 564f738dd3cb39c34cb64a307ce634222bd1727bd8feb15941ea10272dc3a642. 
Jan 30 12:48:47.051630 containerd[1440]: time="2025-01-30T12:48:47.051506207Z" level=info msg="StartContainer for \"564f738dd3cb39c34cb64a307ce634222bd1727bd8feb15941ea10272dc3a642\" returns successfully" Jan 30 12:48:47.323761 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 12:48:47.584261 kubelet[1738]: E0130 12:48:47.584138 1738 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:47.606654 containerd[1440]: time="2025-01-30T12:48:47.606616413Z" level=info msg="StopPodSandbox for \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\"" Jan 30 12:48:47.606813 containerd[1440]: time="2025-01-30T12:48:47.606723936Z" level=info msg="TearDown network for sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" successfully" Jan 30 12:48:47.606813 containerd[1440]: time="2025-01-30T12:48:47.606752377Z" level=info msg="StopPodSandbox for \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" returns successfully" Jan 30 12:48:47.607200 containerd[1440]: time="2025-01-30T12:48:47.607179352Z" level=info msg="RemovePodSandbox for \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\"" Jan 30 12:48:47.614394 containerd[1440]: time="2025-01-30T12:48:47.614346119Z" level=info msg="Forcibly stopping sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\"" Jan 30 12:48:47.614478 containerd[1440]: time="2025-01-30T12:48:47.614445962Z" level=info msg="TearDown network for sandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" successfully" Jan 30 12:48:47.621798 containerd[1440]: time="2025-01-30T12:48:47.621745294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 12:48:47.621912 containerd[1440]: time="2025-01-30T12:48:47.621857538Z" level=info msg="RemovePodSandbox \"7e632b94379c0a3d628b90191b0129037af62b34988fffdcac67cc90c0f0e172\" returns successfully" Jan 30 12:48:47.626797 kubelet[1738]: E0130 12:48:47.626766 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:47.933993 kubelet[1738]: E0130 12:48:47.933885 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:48.627372 kubelet[1738]: E0130 12:48:48.627312 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:49.028756 kubelet[1738]: E0130 12:48:49.026871 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:49.628412 kubelet[1738]: E0130 12:48:49.628349 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:50.286346 systemd-networkd[1377]: lxc_health: Link UP Jan 30 12:48:50.298700 systemd-networkd[1377]: lxc_health: Gained carrier Jan 30 12:48:50.629027 kubelet[1738]: E0130 12:48:50.628763 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:51.026804 kubelet[1738]: E0130 12:48:51.026476 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:51.045345 kubelet[1738]: I0130 12:48:51.045273 1738 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5k5n5" podStartSLOduration=9.045257369 podStartE2EDuration="9.045257369s" podCreationTimestamp="2025-01-30 12:48:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:48:47.961039503 +0000 UTC m=+61.046244512" watchObservedRunningTime="2025-01-30 12:48:51.045257369 +0000 UTC m=+64.130462378" Jan 30 12:48:51.592884 systemd-networkd[1377]: lxc_health: Gained IPv6LL Jan 30 12:48:51.629206 kubelet[1738]: E0130 12:48:51.629145 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:51.941092 kubelet[1738]: E0130 12:48:51.940980 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:52.630145 kubelet[1738]: E0130 12:48:52.630052 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:52.943467 kubelet[1738]: E0130 12:48:52.943158 1738 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:48:53.630369 kubelet[1738]: E0130 12:48:53.630309 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:54.630840 kubelet[1738]: E0130 12:48:54.630785 1738 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:55.631943 kubelet[1738]: E0130 12:48:55.631888 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:48:56.632424 kubelet[1738]: E0130 12:48:56.632354 1738 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"