Jan 13 21:39:07.904615 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 21:39:07.904637 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jan 13 19:43:39 -00 2025
Jan 13 21:39:07.904647 kernel: KASLR enabled
Jan 13 21:39:07.904653 kernel: efi: EFI v2.7 by EDK II
Jan 13 21:39:07.904659 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 13 21:39:07.904664 kernel: random: crng init done
Jan 13 21:39:07.904671 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:39:07.904677 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 13 21:39:07.904684 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 21:39:07.904691 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904697 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904703 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904709 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904714 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904722 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904729 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904736 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904742 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:39:07.904748 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 21:39:07.904755 kernel: NUMA: Failed to initialise from firmware
Jan 13 21:39:07.904761 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:39:07.904767 kernel: NUMA: NODE_DATA [mem 0xdc95b800-0xdc960fff]
Jan 13 21:39:07.904773 kernel: Zone ranges:
Jan 13 21:39:07.904779 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:39:07.904786 kernel: DMA32 empty
Jan 13 21:39:07.904793 kernel: Normal empty
Jan 13 21:39:07.904799 kernel: Movable zone start for each node
Jan 13 21:39:07.904805 kernel: Early memory node ranges
Jan 13 21:39:07.904812 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 21:39:07.904818 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 21:39:07.904824 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 21:39:07.904831 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 21:39:07.904837 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 21:39:07.904843 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 21:39:07.904849 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 21:39:07.904855 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 21:39:07.904862 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 21:39:07.904869 kernel: psci: probing for conduit method from ACPI.
Jan 13 21:39:07.904875 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 21:39:07.904882 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 21:39:07.904890 kernel: psci: Trusted OS migration not required
Jan 13 21:39:07.904897 kernel: psci: SMC Calling Convention v1.1
Jan 13 21:39:07.904904 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 21:39:07.904912 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 21:39:07.904919 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 21:39:07.904925 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 21:39:07.904932 kernel: Detected PIPT I-cache on CPU0
Jan 13 21:39:07.904938 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 21:39:07.904945 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 21:39:07.904952 kernel: CPU features: detected: Spectre-v4
Jan 13 21:39:07.904958 kernel: CPU features: detected: Spectre-BHB
Jan 13 21:39:07.904965 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 21:39:07.904971 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 21:39:07.904979 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 21:39:07.904986 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 21:39:07.904993 kernel: alternatives: applying boot alternatives
Jan 13 21:39:07.905000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:39:07.905007 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:39:07.905014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:39:07.905036 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:39:07.905043 kernel: Fallback order for Node 0: 0
Jan 13 21:39:07.905050 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 21:39:07.905057 kernel: Policy zone: DMA
Jan 13 21:39:07.905063 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:39:07.905072 kernel: software IO TLB: area num 4.
Jan 13 21:39:07.905079 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 21:39:07.905086 kernel: Memory: 2386544K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185744K reserved, 0K cma-reserved)
Jan 13 21:39:07.905093 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:39:07.905100 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:39:07.905107 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:39:07.905113 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:39:07.905126 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:39:07.905133 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:39:07.905140 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:39:07.905146 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:39:07.905153 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 21:39:07.905161 kernel: GICv3: 256 SPIs implemented
Jan 13 21:39:07.905168 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 21:39:07.905174 kernel: Root IRQ handler: gic_handle_irq
Jan 13 21:39:07.905181 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 21:39:07.905188 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 21:39:07.905195 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 21:39:07.905201 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 21:39:07.905208 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 21:39:07.905215 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 21:39:07.905222 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 21:39:07.905229 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:39:07.905237 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:39:07.905243 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 21:39:07.905250 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 21:39:07.905257 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 21:39:07.905264 kernel: arm-pv: using stolen time PV
Jan 13 21:39:07.905271 kernel: Console: colour dummy device 80x25
Jan 13 21:39:07.905278 kernel: ACPI: Core revision 20230628
Jan 13 21:39:07.905285 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 21:39:07.905292 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:39:07.905299 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:39:07.905307 kernel: landlock: Up and running.
Jan 13 21:39:07.905313 kernel: SELinux: Initializing.
Jan 13 21:39:07.905320 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:39:07.905327 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:39:07.905334 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:39:07.905341 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:39:07.905348 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:39:07.905355 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:39:07.905362 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 21:39:07.905370 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 21:39:07.905377 kernel: Remapping and enabling EFI services.
Jan 13 21:39:07.905384 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:39:07.905391 kernel: Detected PIPT I-cache on CPU1
Jan 13 21:39:07.905397 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 21:39:07.905404 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 21:39:07.905411 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:39:07.905418 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 21:39:07.905425 kernel: Detected PIPT I-cache on CPU2
Jan 13 21:39:07.905432 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 21:39:07.905440 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 21:39:07.905447 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:39:07.905458 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 21:39:07.905467 kernel: Detected PIPT I-cache on CPU3
Jan 13 21:39:07.905474 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 21:39:07.905481 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 21:39:07.905488 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 21:39:07.905495 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 21:39:07.905503 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:39:07.905511 kernel: SMP: Total of 4 processors activated.
Jan 13 21:39:07.905519 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 21:39:07.905526 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 21:39:07.905534 kernel: CPU features: detected: Common not Private translations
Jan 13 21:39:07.905541 kernel: CPU features: detected: CRC32 instructions
Jan 13 21:39:07.905548 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 21:39:07.905555 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 21:39:07.905562 kernel: CPU features: detected: LSE atomic instructions
Jan 13 21:39:07.905571 kernel: CPU features: detected: Privileged Access Never
Jan 13 21:39:07.905578 kernel: CPU features: detected: RAS Extension Support
Jan 13 21:39:07.905585 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 21:39:07.905592 kernel: CPU: All CPU(s) started at EL1
Jan 13 21:39:07.905600 kernel: alternatives: applying system-wide alternatives
Jan 13 21:39:07.905607 kernel: devtmpfs: initialized
Jan 13 21:39:07.905614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:39:07.905621 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:39:07.905629 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:39:07.905637 kernel: SMBIOS 3.0.0 present.
Jan 13 21:39:07.905644 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 13 21:39:07.905651 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:39:07.905659 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 21:39:07.905669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 21:39:07.905676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 21:39:07.905684 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:39:07.905691 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jan 13 21:39:07.905698 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:39:07.905707 kernel: cpuidle: using governor menu
Jan 13 21:39:07.905714 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 21:39:07.905721 kernel: ASID allocator initialised with 32768 entries
Jan 13 21:39:07.905728 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:39:07.905736 kernel: Serial: AMBA PL011 UART driver
Jan 13 21:39:07.905743 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 21:39:07.905750 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 21:39:07.905757 kernel: Modules: 509040 pages in range for PLT usage
Jan 13 21:39:07.905764 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:39:07.905773 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:39:07.905780 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 21:39:07.905787 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 21:39:07.905795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:39:07.905802 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:39:07.905809 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 21:39:07.905816 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 21:39:07.905823 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:39:07.905830 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:39:07.905839 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:39:07.905846 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:39:07.905853 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:39:07.905860 kernel: ACPI: Interpreter enabled
Jan 13 21:39:07.905867 kernel: ACPI: Using GIC for interrupt routing
Jan 13 21:39:07.905874 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 21:39:07.905882 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 21:39:07.905889 kernel: printk: console [ttyAMA0] enabled
Jan 13 21:39:07.905896 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:39:07.906075 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:39:07.906171 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 21:39:07.906236 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 21:39:07.906299 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 21:39:07.906360 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 21:39:07.906370 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 21:39:07.906377 kernel: PCI host bridge to bus 0000:00
Jan 13 21:39:07.906449 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 21:39:07.906507 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 21:39:07.906563 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 21:39:07.906619 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:39:07.906697 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 21:39:07.906775 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:39:07.906844 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 21:39:07.906909 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 21:39:07.906972 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:39:07.907048 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 21:39:07.907113 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 21:39:07.907187 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 21:39:07.907244 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 21:39:07.907303 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 21:39:07.907359 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 21:39:07.907368 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 21:39:07.907376 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 21:39:07.907383 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 21:39:07.907390 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 21:39:07.907398 kernel: iommu: Default domain type: Translated
Jan 13 21:39:07.907405 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 21:39:07.907412 kernel: efivars: Registered efivars operations
Jan 13 21:39:07.907422 kernel: vgaarb: loaded
Jan 13 21:39:07.907429 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 21:39:07.907436 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:39:07.907443 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:39:07.907450 kernel: pnp: PnP ACPI init
Jan 13 21:39:07.907525 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 21:39:07.907535 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 21:39:07.907542 kernel: NET: Registered PF_INET protocol family
Jan 13 21:39:07.907552 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:39:07.907559 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:39:07.907567 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:39:07.907574 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:39:07.907581 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:39:07.907589 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:39:07.907596 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:39:07.907603 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:39:07.907610 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:39:07.907619 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:39:07.907626 kernel: kvm [1]: HYP mode not available
Jan 13 21:39:07.907633 kernel: Initialise system trusted keyrings
Jan 13 21:39:07.907640 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:39:07.907647 kernel: Key type asymmetric registered
Jan 13 21:39:07.907654 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:39:07.907662 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 21:39:07.907669 kernel: io scheduler mq-deadline registered
Jan 13 21:39:07.907676 kernel: io scheduler kyber registered
Jan 13 21:39:07.907684 kernel: io scheduler bfq registered
Jan 13 21:39:07.907692 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 21:39:07.907699 kernel: ACPI: button: Power Button [PWRB]
Jan 13 21:39:07.907707 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 21:39:07.907773 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 21:39:07.907782 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:39:07.907790 kernel: thunder_xcv, ver 1.0
Jan 13 21:39:07.907797 kernel: thunder_bgx, ver 1.0
Jan 13 21:39:07.907804 kernel: nicpf, ver 1.0
Jan 13 21:39:07.907813 kernel: nicvf, ver 1.0
Jan 13 21:39:07.907882 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 21:39:07.907944 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T21:39:07 UTC (1736804347)
Jan 13 21:39:07.907954 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 21:39:07.907961 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 21:39:07.907968 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 21:39:07.907976 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 21:39:07.907983 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:39:07.907992 kernel: Segment Routing with IPv6
Jan 13 21:39:07.907999 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:39:07.908006 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:39:07.908014 kernel: Key type dns_resolver registered
Jan 13 21:39:07.908058 kernel: registered taskstats version 1
Jan 13 21:39:07.908066 kernel: Loading compiled-in X.509 certificates
Jan 13 21:39:07.908073 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4d59b6166d6886703230c188f8df863190489638'
Jan 13 21:39:07.908080 kernel: Key type .fscrypt registered
Jan 13 21:39:07.908087 kernel: Key type fscrypt-provisioning registered
Jan 13 21:39:07.908097 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:39:07.908104 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:39:07.908111 kernel: ima: No architecture policies found
Jan 13 21:39:07.908124 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 21:39:07.908131 kernel: clk: Disabling unused clocks
Jan 13 21:39:07.908139 kernel: Freeing unused kernel memory: 39360K
Jan 13 21:39:07.908146 kernel: Run /init as init process
Jan 13 21:39:07.908153 kernel: with arguments:
Jan 13 21:39:07.908160 kernel: /init
Jan 13 21:39:07.908169 kernel: with environment:
Jan 13 21:39:07.908176 kernel: HOME=/
Jan 13 21:39:07.908183 kernel: TERM=linux
Jan 13 21:39:07.908190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:39:07.908199 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:39:07.908208 systemd[1]: Detected virtualization kvm.
Jan 13 21:39:07.908216 systemd[1]: Detected architecture arm64.
Jan 13 21:39:07.908225 systemd[1]: Running in initrd.
Jan 13 21:39:07.908233 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:39:07.908240 systemd[1]: Hostname set to .
Jan 13 21:39:07.908248 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:39:07.908256 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:39:07.908264 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:39:07.908272 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:39:07.908280 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:39:07.908289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:39:07.908297 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:39:07.908305 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:39:07.908315 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:39:07.908323 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:39:07.908331 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:39:07.908339 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:39:07.908347 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:39:07.908355 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:39:07.908363 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:39:07.908371 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:39:07.908378 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:39:07.908386 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:39:07.908394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:39:07.908402 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:39:07.908410 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:39:07.908419 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:39:07.908427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:39:07.908434 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:39:07.908442 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 21:39:07.908450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:39:07.908457 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 21:39:07.908465 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 21:39:07.908473 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:39:07.908482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:39:07.908490 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:39:07.908497 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 21:39:07.908505 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:39:07.908513 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 21:39:07.908521 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:39:07.908531 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:39:07.908555 systemd-journald[236]: Collecting audit messages is disabled.
Jan 13 21:39:07.908573 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:39:07.908583 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:39:07.908592 systemd-journald[236]: Journal started
Jan 13 21:39:07.908610 systemd-journald[236]: Runtime Journal (/run/log/journal/7844118fcdd7490a99abeabca294b0bf) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:39:07.900318 systemd-modules-load[237]: Inserted module 'overlay'
Jan 13 21:39:07.913745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:39:07.915063 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:39:07.915097 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 21:39:07.918890 systemd-modules-load[237]: Inserted module 'br_netfilter'
Jan 13 21:39:07.919757 kernel: Bridge firewalling registered
Jan 13 21:39:07.920016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:39:07.924153 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:39:07.926930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:39:07.928224 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:39:07.933160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:39:07.934530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:39:07.938178 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 21:39:07.943060 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:39:07.945775 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:39:07.952856 dracut-cmdline[274]: dracut-dracut-053
Jan 13 21:39:07.955240 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c6a3a48cbc65bf640516dc59d6b026e304001b7b3125ecbabbbe9ce0bd8888f0
Jan 13 21:39:07.972963 systemd-resolved[279]: Positive Trust Anchors:
Jan 13 21:39:07.972984 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:39:07.973016 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:39:07.978071 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 13 21:39:07.978978 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:39:07.985387 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:39:08.032040 kernel: SCSI subsystem initialized
Jan 13 21:39:08.037038 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 21:39:08.047047 kernel: iscsi: registered transport (tcp)
Jan 13 21:39:08.062269 kernel: iscsi: registered transport (qla4xxx)
Jan 13 21:39:08.062304 kernel: QLogic iSCSI HBA Driver
Jan 13 21:39:08.105469 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:39:08.118188 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 21:39:08.135090 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 21:39:08.135133 kernel: device-mapper: uevent: version 1.0.3
Jan 13 21:39:08.136143 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 21:39:08.181050 kernel: raid6: neonx8 gen() 15729 MB/s
Jan 13 21:39:08.198044 kernel: raid6: neonx4 gen() 15577 MB/s
Jan 13 21:39:08.215047 kernel: raid6: neonx2 gen() 13158 MB/s
Jan 13 21:39:08.232048 kernel: raid6: neonx1 gen() 10442 MB/s
Jan 13 21:39:08.249041 kernel: raid6: int64x8 gen() 6928 MB/s
Jan 13 21:39:08.266052 kernel: raid6: int64x4 gen() 7319 MB/s
Jan 13 21:39:08.283046 kernel: raid6: int64x2 gen() 6111 MB/s
Jan 13 21:39:08.300137 kernel: raid6: int64x1 gen() 5036 MB/s
Jan 13 21:39:08.300170 kernel: raid6: using algorithm neonx8 gen() 15729 MB/s
Jan 13 21:39:08.318110 kernel: raid6: .... xor() 11904 MB/s, rmw enabled
Jan 13 21:39:08.318149 kernel: raid6: using neon recovery algorithm
Jan 13 21:39:08.323375 kernel: xor: measuring software checksum speed
Jan 13 21:39:08.323396 kernel: 8regs : 19773 MB/sec
Jan 13 21:39:08.324067 kernel: 32regs : 19617 MB/sec
Jan 13 21:39:08.325268 kernel: arm64_neon : 22771 MB/sec
Jan 13 21:39:08.325292 kernel: xor: using function: arm64_neon (22771 MB/sec)
Jan 13 21:39:08.376047 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 21:39:08.386756 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:39:08.396230 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:39:08.407241 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 13 21:39:08.410426 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:39:08.413728 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 21:39:08.427446 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 13 21:39:08.452361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:39:08.461240 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:39:08.501613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:39:08.511208 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 21:39:08.523065 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:39:08.524707 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:39:08.526501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:39:08.528815 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:39:08.541211 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 21:39:08.545039 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 21:39:08.551457 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 21:39:08.551552 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 21:39:08.551568 kernel: GPT:9289727 != 19775487
Jan 13 21:39:08.551578 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 21:39:08.551587 kernel: GPT:9289727 != 19775487
Jan 13 21:39:08.551597 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 21:39:08.551607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:39:08.552600 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:39:08.559546 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:39:08.559626 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:39:08.562464 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:39:08.569982 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524)
Jan 13 21:39:08.570005 kernel: BTRFS: device fsid 475b4555-939b-441c-9b47-b8244f532234 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (523)
Jan 13 21:39:08.564548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:39:08.564609 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:39:08.571048 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:39:08.582253 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:39:08.588183 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 21:39:08.592171 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:39:08.596926 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 21:39:08.606696 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:39:08.610553 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 21:39:08.611762 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 21:39:08.624211 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 21:39:08.625945 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 21:39:08.630990 disk-uuid[551]: Primary Header is updated.
Jan 13 21:39:08.630990 disk-uuid[551]: Secondary Entries is updated.
Jan 13 21:39:08.630990 disk-uuid[551]: Secondary Header is updated.
Jan 13 21:39:08.634048 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:39:08.649518 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:39:09.648053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 21:39:09.648484 disk-uuid[552]: The operation has completed successfully.
Jan 13 21:39:09.668639 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 21:39:09.668759 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 21:39:09.689208 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 21:39:09.691896 sh[576]: Success
Jan 13 21:39:09.705052 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 21:39:09.735345 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 21:39:09.737053 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 21:39:09.738014 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 21:39:09.748243 kernel: BTRFS info (device dm-0): first mount of filesystem 475b4555-939b-441c-9b47-b8244f532234
Jan 13 21:39:09.748272 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:39:09.748289 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 21:39:09.750128 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 21:39:09.750142 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 21:39:09.754206 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 21:39:09.755467 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 21:39:09.767180 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 21:39:09.768648 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 21:39:09.776114 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:39:09.776149 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:39:09.776159 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:39:09.779163 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:39:09.787137 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 21:39:09.789313 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:39:09.795806 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 21:39:09.801186 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 21:39:09.866057 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:39:09.878884 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:39:09.890721 ignition[672]: Ignition 2.19.0
Jan 13 21:39:09.890730 ignition[672]: Stage: fetch-offline
Jan 13 21:39:09.890763 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:09.890772 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:09.890915 ignition[672]: parsed url from cmdline: ""
Jan 13 21:39:09.890919 ignition[672]: no config URL provided
Jan 13 21:39:09.890923 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 21:39:09.890929 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Jan 13 21:39:09.890952 ignition[672]: op(1): [started] loading QEMU firmware config module
Jan 13 21:39:09.890957 ignition[672]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 21:39:09.900437 ignition[672]: op(1): [finished] loading QEMU firmware config module
Jan 13 21:39:09.902749 systemd-networkd[767]: lo: Link UP
Jan 13 21:39:09.902762 systemd-networkd[767]: lo: Gained carrier
Jan 13 21:39:09.903451 systemd-networkd[767]: Enumeration completed
Jan 13 21:39:09.903549 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:39:09.903865 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:39:09.903868 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:39:09.904963 systemd[1]: Reached target network.target - Network.
Jan 13 21:39:09.905491 systemd-networkd[767]: eth0: Link UP
Jan 13 21:39:09.905494 systemd-networkd[767]: eth0: Gained carrier
Jan 13 21:39:09.905501 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:39:09.917280 ignition[672]: parsing config with SHA512: 64ca3fa3333578e571233b29ea7923cf30826935f7ac488433206a96d3f9bb0042a6d32e7e579d7f6389a659e21cc762f99df3202e6e96a563a7b5e1e92f85c7
Jan 13 21:39:09.920350 unknown[672]: fetched base config from "system"
Jan 13 21:39:09.920365 unknown[672]: fetched user config from "qemu"
Jan 13 21:39:09.920674 ignition[672]: fetch-offline: fetch-offline passed
Jan 13 21:39:09.922304 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:39:09.920758 ignition[672]: Ignition finished successfully
Jan 13 21:39:09.923729 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:39:09.923952 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 21:39:09.933156 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 21:39:09.942643 ignition[774]: Ignition 2.19.0
Jan 13 21:39:09.942653 ignition[774]: Stage: kargs
Jan 13 21:39:09.942810 ignition[774]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:09.942819 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:09.945275 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 21:39:09.943484 ignition[774]: kargs: kargs passed
Jan 13 21:39:09.943522 ignition[774]: Ignition finished successfully
Jan 13 21:39:09.953184 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 21:39:09.962298 ignition[783]: Ignition 2.19.0
Jan 13 21:39:09.962306 ignition[783]: Stage: disks
Jan 13 21:39:09.962465 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:09.962474 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:09.963174 ignition[783]: disks: disks passed
Jan 13 21:39:09.966071 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 21:39:09.963216 ignition[783]: Ignition finished successfully
Jan 13 21:39:09.967398 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 21:39:09.968811 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:39:09.970747 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:39:09.972291 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:39:09.974117 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:39:09.987214 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 21:39:09.996248 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 21:39:09.999218 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 21:39:10.013105 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 21:39:10.056876 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 21:39:10.058390 kernel: EXT4-fs (vda9): mounted filesystem 238cddae-3c4d-4696-a666-660fd149aa3e r/w with ordered data mode. Quota mode: none.
Jan 13 21:39:10.058120 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:39:10.070130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:39:10.071763 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 21:39:10.073190 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 21:39:10.073226 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 21:39:10.083263 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (801)
Jan 13 21:39:10.083283 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:39:10.083294 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:39:10.083304 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:39:10.083314 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:39:10.073245 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:39:10.080298 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 21:39:10.084789 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 21:39:10.086754 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:39:10.128681 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 21:39:10.133066 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 13 21:39:10.136959 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 21:39:10.140927 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 21:39:10.204876 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 21:39:10.213244 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 21:39:10.215458 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 21:39:10.221032 kernel: BTRFS info (device vda6): last unmount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:39:10.235947 ignition[914]: INFO : Ignition 2.19.0
Jan 13 21:39:10.235947 ignition[914]: INFO : Stage: mount
Jan 13 21:39:10.235947 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:10.235947 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:10.240199 ignition[914]: INFO : mount: mount passed
Jan 13 21:39:10.240199 ignition[914]: INFO : Ignition finished successfully
Jan 13 21:39:10.235969 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 21:39:10.239242 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 21:39:10.245135 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 21:39:10.747328 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 21:39:10.757203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 21:39:10.763747 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927)
Jan 13 21:39:10.763787 kernel: BTRFS info (device vda6): first mount of filesystem 1a82fd1a-1cbb-4d3a-bbb2-d4650cd9e9cd
Jan 13 21:39:10.763798 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 21:39:10.765339 kernel: BTRFS info (device vda6): using free space tree
Jan 13 21:39:10.768045 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 21:39:10.768499 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 21:39:10.785486 ignition[945]: INFO : Ignition 2.19.0
Jan 13 21:39:10.785486 ignition[945]: INFO : Stage: files
Jan 13 21:39:10.787214 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:10.787214 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:10.787214 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 21:39:10.790697 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 21:39:10.790697 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 21:39:10.790697 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 21:39:10.790697 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 21:39:10.790697 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 21:39:10.789998 unknown[945]: wrote ssh authorized keys file for user: core
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:39:10.798341 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 13 21:39:11.065343 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 21:39:11.496168 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 13 21:39:11.496168 ignition[945]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 13 21:39:11.499700 ignition[945]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:39:11.499700 ignition[945]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:39:11.499700 ignition[945]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 13 21:39:11.499700 ignition[945]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:39:11.518806 ignition[945]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:39:11.521890 ignition[945]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:39:11.524508 ignition[945]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:39:11.524508 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:39:11.524508 ignition[945]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:39:11.524508 ignition[945]: INFO : files: files passed
Jan 13 21:39:11.524508 ignition[945]: INFO : Ignition finished successfully
Jan 13 21:39:11.525015 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:39:11.534360 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:39:11.535951 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:39:11.537970 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:39:11.538130 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:39:11.543596 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:39:11.547136 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:39:11.547136 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:39:11.550323 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:39:11.550881 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:39:11.553296 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:39:11.570185 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:39:11.587492 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:39:11.587604 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:39:11.589724 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:39:11.591559 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:39:11.593322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:39:11.593963 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:39:11.608730 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:39:11.618145 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:39:11.625210 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:39:11.626407 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:39:11.628469 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:39:11.630204 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:39:11.630306 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:39:11.632827 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:39:11.634829 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:39:11.636458 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:39:11.638123 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:39:11.640060 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:39:11.642010 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:39:11.643850 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:39:11.645780 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:39:11.647713 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:39:11.649418 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:39:11.650882 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:39:11.650989 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:39:11.653306 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:39:11.655316 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:39:11.657187 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:39:11.658099 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:39:11.659317 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:39:11.659420 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:39:11.662254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:39:11.662410 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:39:11.664457 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:39:11.665944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:39:11.667101 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:39:11.669072 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:39:11.670554 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:39:11.672216 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:39:11.672339 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:39:11.674391 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:39:11.674510 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:39:11.676017 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:39:11.676186 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:39:11.677857 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:39:11.677993 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:39:11.686214 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:39:11.687724 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:39:11.688708 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:39:11.688901 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:39:11.691561 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:39:11.691703 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:39:11.698409 ignition[999]: INFO : Ignition 2.19.0
Jan 13 21:39:11.698409 ignition[999]: INFO : Stage: umount
Jan 13 21:39:11.701875 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:39:11.701875 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:39:11.701875 ignition[999]: INFO : umount: umount passed
Jan 13 21:39:11.701875 ignition[999]: INFO : Ignition finished successfully
Jan 13 21:39:11.698441 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:39:11.698522 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:39:11.700650 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:39:11.701585 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:39:11.701678 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:39:11.703116 systemd[1]: Stopped target network.target - Network.
Jan 13 21:39:11.704729 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:39:11.704792 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:39:11.706874 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:39:11.706918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:39:11.708491 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:39:11.708535 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:39:11.710129 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:39:11.710174 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:39:11.711946 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:39:11.714045 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:39:11.722050 systemd-networkd[767]: eth0: DHCPv6 lease lost
Jan 13 21:39:11.723419 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:39:11.723522 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:39:11.724939 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:39:11.724970 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:39:11.740180 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:39:11.741043 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:39:11.741109 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:39:11.743199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:39:11.745583 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:39:11.745666 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:39:11.752661 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:39:11.752791 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:39:11.755815 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:39:11.755951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:39:11.759623 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:39:11.759671 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:39:11.761802 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:39:11.761834 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:39:11.763788 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:39:11.763835 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:39:11.766618 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:39:11.766662 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:39:11.769344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:39:11.769388 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:39:11.772222 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:39:11.772268 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:39:11.781218 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:39:11.782248 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:39:11.782302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:39:11.784276 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:39:11.784318 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:39:11.786053 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:39:11.786106 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:39:11.788092 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:39:11.788135 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:39:11.790097 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:39:11.790140 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:39:11.792159 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:39:11.792200 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:39:11.794254 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:39:11.794295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:39:11.796672 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:39:11.798045 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:39:11.799722 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:39:11.799800 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:39:11.803919 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:39:11.811194 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:39:11.818225 systemd[1]: Switching root.
Jan 13 21:39:11.849052 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:39:11.849102 systemd-journald[236]: Journal stopped
Jan 13 21:39:12.506110 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:39:12.506164 kernel: SELinux: policy capability open_perms=1
Jan 13 21:39:12.506176 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:39:12.506186 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:39:12.506195 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:39:12.506207 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:39:12.506217 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:39:12.506227 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:39:12.506236 kernel: audit: type=1403 audit(1736804351.975:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:39:12.506250 systemd[1]: Successfully loaded SELinux policy in 33.438ms.
Jan 13 21:39:12.506267 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.117ms.
Jan 13 21:39:12.506279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:39:12.506291 systemd[1]: Detected virtualization kvm.
Jan 13 21:39:12.506302 systemd[1]: Detected architecture arm64.
Jan 13 21:39:12.506313 systemd[1]: Detected first boot.
Jan 13 21:39:12.506324 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:39:12.506334 zram_generator::config[1045]: No configuration found.
Jan 13 21:39:12.506346 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:39:12.506360 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 21:39:12.506372 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 21:39:12.506383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:39:12.506394 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:39:12.506405 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:39:12.506415 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:39:12.506425 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:39:12.506436 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:39:12.506447 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:39:12.506459 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:39:12.506469 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:39:12.506479 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:39:12.506490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:39:12.506501 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:39:12.506511 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:39:12.506523 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:39:12.506534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:39:12.506544 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 21:39:12.506556 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:39:12.506567 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 21:39:12.506577 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 21:39:12.506587 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 21:39:12.506598 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:39:12.506609 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:39:12.506619 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:39:12.506631 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:39:12.506641 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:39:12.506651 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:39:12.506663 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:39:12.506673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:39:12.506684 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:39:12.506694 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:39:12.506705 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:39:12.506715 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:39:12.506725 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:39:12.506737 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:39:12.506749 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:39:12.506759 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:39:12.506769 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:39:12.506780 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:39:12.506790 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:39:12.506801 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:39:12.506811 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:39:12.506823 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:39:12.506834 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:39:12.506845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:39:12.506855 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:39:12.506865 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:39:12.506876 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:39:12.506886 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:39:12.506897 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:39:12.506907 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 21:39:12.506919 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 21:39:12.506930 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 21:39:12.506940 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 21:39:12.506950 kernel: fuse: init (API version 7.39)
Jan 13 21:39:12.506959 kernel: loop: module loaded
Jan 13 21:39:12.506970 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:39:12.506981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:39:12.506991 kernel: ACPI: bus type drm_connector registered
Jan 13 21:39:12.507001 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:39:12.507013 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:39:12.507064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:39:12.507099 systemd-journald[1116]: Collecting audit messages is disabled.
Jan 13 21:39:12.507125 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 21:39:12.507136 systemd[1]: Stopped verity-setup.service.
Jan 13 21:39:12.507147 systemd-journald[1116]: Journal started
Jan 13 21:39:12.507170 systemd-journald[1116]: Runtime Journal (/run/log/journal/7844118fcdd7490a99abeabca294b0bf) is 5.9M, max 47.3M, 41.4M free.
Jan 13 21:39:12.318897 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:39:12.333852 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:39:12.334229 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 13 21:39:12.511446 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:39:12.512054 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:39:12.513150 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:39:12.514328 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:39:12.515352 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:39:12.516493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:39:12.517662 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:39:12.518862 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:39:12.520258 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:39:12.521675 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:39:12.521820 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:39:12.523237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:39:12.523365 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:39:12.524673 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:39:12.524822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:39:12.526131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:39:12.526278 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:39:12.527686 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:39:12.527819 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:39:12.529146 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:39:12.529282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:39:12.530799 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:39:12.532144 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:39:12.533523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:39:12.545266 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:39:12.552172 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:39:12.554171 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:39:12.555245 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:39:12.555274 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:39:12.557132 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:39:12.559175 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:39:12.561156 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:39:12.562221 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:39:12.563578 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:39:12.565447 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:39:12.566706 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:39:12.567771 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:39:12.568928 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:39:12.571704 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:39:12.577784 systemd-journald[1116]: Time spent on flushing to /var/log/journal/7844118fcdd7490a99abeabca294b0bf is 19.692ms for 837 entries.
Jan 13 21:39:12.577784 systemd-journald[1116]: System Journal (/var/log/journal/7844118fcdd7490a99abeabca294b0bf) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:39:12.607053 systemd-journald[1116]: Received client request to flush runtime journal.
Jan 13 21:39:12.607098 kernel: loop0: detected capacity change from 0 to 194096
Jan 13 21:39:12.578742 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:39:12.581125 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:39:12.583693 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:39:12.585161 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:39:12.586442 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:39:12.588404 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:39:12.596287 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:39:12.599706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:39:12.605566 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:39:12.610043 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:39:12.619236 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:39:12.621293 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:39:12.624222 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Jan 13 21:39:12.624237 systemd-tmpfiles[1159]: ACLs are not supported, ignoring.
Jan 13 21:39:12.625076 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:39:12.628448 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:39:12.639051 kernel: loop1: detected capacity change from 0 to 114328
Jan 13 21:39:12.639226 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:39:12.642961 udevadm[1171]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:39:12.645911 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:39:12.646483 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:39:12.665924 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:39:12.676218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:39:12.680056 kernel: loop2: detected capacity change from 0 to 114432
Jan 13 21:39:12.691629 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 21:39:12.691647 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jan 13 21:39:12.695233 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:39:12.712047 kernel: loop3: detected capacity change from 0 to 194096
Jan 13 21:39:12.718208 kernel: loop4: detected capacity change from 0 to 114328
Jan 13 21:39:12.722084 kernel: loop5: detected capacity change from 0 to 114432
Jan 13 21:39:12.725375 (sd-merge)[1185]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:39:12.725731 (sd-merge)[1185]: Merged extensions into '/usr'.
Jan 13 21:39:12.730541 systemd[1]: Reloading requested from client PID 1157 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:39:12.730559 systemd[1]: Reloading...
Jan 13 21:39:12.799332 zram_generator::config[1217]: No configuration found.
Jan 13 21:39:12.829669 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:39:12.864688 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:39:12.899610 systemd[1]: Reloading finished in 168 ms.
Jan 13 21:39:12.934166 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:39:12.935671 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:39:12.949177 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:39:12.950977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:39:12.957847 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:39:12.957862 systemd[1]: Reloading...
Jan 13 21:39:12.967394 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:39:12.967926 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:39:12.968660 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:39:12.968962 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 21:39:12.969116 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Jan 13 21:39:12.971429 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:39:12.971525 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 21:39:12.978679 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:39:12.978780 systemd-tmpfiles[1246]: Skipping /boot
Jan 13 21:39:13.007056 zram_generator::config[1277]: No configuration found.
Jan 13 21:39:13.082483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:39:13.117829 systemd[1]: Reloading finished in 159 ms.
Jan 13 21:39:13.132900 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:39:13.145492 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:39:13.152989 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:39:13.155600 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:39:13.157912 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:39:13.162323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:39:13.168236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:39:13.173285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:39:13.176594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:39:13.179766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:39:13.182659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:39:13.185520 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:39:13.186742 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:39:13.192721 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:39:13.194446 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:39:13.194571 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:39:13.198128 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:39:13.198246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:39:13.199909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:39:13.200138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:39:13.202933 systemd-udevd[1315]: Using default interface naming scheme 'v255'.
Jan 13 21:39:13.205176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:39:13.211225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:39:13.221377 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:39:13.231312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:39:13.233677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:39:13.236037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:39:13.237787 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:39:13.239738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:39:13.242094 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:39:13.245050 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:39:13.256362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:39:13.256499 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:39:13.258187 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:39:13.266032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:39:13.266598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:39:13.279109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:39:13.279259 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:39:13.282608 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:39:13.284098 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:39:13.286666 augenrules[1369]: No rules
Jan 13 21:39:13.288374 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 21:39:13.289318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:39:13.300105 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1362)
Jan 13 21:39:13.303743 systemd-resolved[1314]: Positive Trust Anchors:
Jan 13 21:39:13.303759 systemd-resolved[1314]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:39:13.303790 systemd-resolved[1314]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:39:13.307434 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:39:13.309875 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:39:13.312471 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:39:13.312997 systemd-resolved[1314]: Defaulting to hostname 'linux'.
Jan 13 21:39:13.313586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:39:13.316938 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:39:13.317977 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:39:13.321210 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:39:13.322361 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:39:13.322608 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:39:13.325097 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:39:13.326663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:39:13.326793 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:39:13.328272 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:39:13.328409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:39:13.329695 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:39:13.329831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:39:13.344192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:39:13.345397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:39:13.385316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:39:13.387977 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:39:13.390562 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:39:13.392450 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:39:13.394488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:39:13.399950 systemd-networkd[1386]: lo: Link UP
Jan 13 21:39:13.399962 systemd-networkd[1386]: lo: Gained carrier
Jan 13 21:39:13.400662 systemd-networkd[1386]: Enumeration completed
Jan 13 21:39:13.402325 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:39:13.403747 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:39:13.404802 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:39:13.404810 systemd-networkd[1386]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:39:13.405474 systemd[1]: Reached target network.target - Network.
Jan 13 21:39:13.407709 systemd-networkd[1386]: eth0: Link UP
Jan 13 21:39:13.407717 systemd-networkd[1386]: eth0: Gained carrier
Jan 13 21:39:13.407730 systemd-networkd[1386]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:39:13.408060 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:39:13.410450 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:39:13.416212 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:39:13.429117 systemd-networkd[1386]: eth0: DHCPv4 address 10.0.0.155/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:39:13.430168 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Jan 13 21:39:13.436624 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:39:13.436684 systemd-timesyncd[1387]: Initial clock synchronization to Mon 2025-01-13 21:39:13.739879 UTC.
Jan 13 21:39:13.437669 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:39:13.457129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:39:13.468752 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:39:13.470503 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:39:13.471685 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:39:13.472847 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:39:13.474118 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:39:13.475596 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:39:13.476767 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:39:13.478009 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:39:13.479249 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:39:13.479290 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:39:13.480182 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:39:13.481975 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:39:13.484402 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:39:13.494882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:39:13.497096 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:39:13.498654 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:39:13.499844 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:39:13.500800 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:39:13.501761 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:39:13.501793 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:39:13.502651 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:39:13.504295 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:39:13.505223 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:39:13.507743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:39:13.511121 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:39:13.512287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:39:13.516217 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:39:13.520368 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:39:13.525257 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:39:13.526696 extend-filesystems[1416]: Found loop3
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found loop4
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found loop5
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda1
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda2
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda3
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found usr
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda4
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda6
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda7
Jan 13 21:39:13.532535 extend-filesystems[1416]: Found vda9
Jan 13 21:39:13.532535 extend-filesystems[1416]: Checking size of /dev/vda9
Jan 13 21:39:13.556957 jq[1415]: false
Jan 13 21:39:13.533935 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:39:13.553551 dbus-daemon[1414]: [system] SELinux support is enabled
Jan 13 21:39:13.557405 extend-filesystems[1416]: Resized partition /dev/vda9
Jan 13 21:39:13.562215 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1363)
Jan 13 21:39:13.562243 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:39:13.545561 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:39:13.562425 extend-filesystems[1433]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:39:13.546059 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:39:13.547012 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:39:13.556090 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:39:13.561590 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:39:13.565552 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:39:13.573098 jq[1437]: true
Jan 13 21:39:13.572336 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:39:13.572485 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:39:13.572731 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:39:13.572879 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:39:13.574847 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:39:13.575014 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:39:13.587044 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:39:13.591255 jq[1439]: true
Jan 13 21:39:13.600785 (ntainerd)[1446]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:39:13.602736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:39:13.606955 extend-filesystems[1433]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 21:39:13.606955 extend-filesystems[1433]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 21:39:13.606955 extend-filesystems[1433]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 21:39:13.602762 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:39:13.619090 extend-filesystems[1416]: Resized filesystem in /dev/vda9
Jan 13 21:39:13.622649 update_engine[1435]: I20250113 21:39:13.606896 1435 main.cc:92] Flatcar Update Engine starting
Jan 13 21:39:13.622649 update_engine[1435]: I20250113 21:39:13.608754 1435 update_check_scheduler.cc:74] Next update check in 10m41s
Jan 13 21:39:13.604200 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:39:13.604219 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:39:13.608087 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 21:39:13.608287 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 21:39:13.609678 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:39:13.609905 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 21:39:13.611200 systemd-logind[1429]: New seat seat0.
Jan 13 21:39:13.618227 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:39:13.619503 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:39:13.658507 bash[1466]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 21:39:13.661080 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 21:39:13.663533 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 21:39:13.668667 locksmithd[1453]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:39:13.778593 containerd[1446]: time="2025-01-13T21:39:13.778503640Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:39:13.801490 containerd[1446]: time="2025-01-13T21:39:13.801452320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.802991 containerd[1446]: time="2025-01-13T21:39:13.802952000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:39:13.802991 containerd[1446]: time="2025-01-13T21:39:13.802987600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 21:39:13.803075 containerd[1446]: time="2025-01-13T21:39:13.803006000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 21:39:13.803192 containerd[1446]: time="2025-01-13T21:39:13.803162320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 21:39:13.803192 containerd[1446]: time="2025-01-13T21:39:13.803186200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803262 containerd[1446]: time="2025-01-13T21:39:13.803246560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803284 containerd[1446]: time="2025-01-13T21:39:13.803262160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803441 containerd[1446]: time="2025-01-13T21:39:13.803413520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803441 containerd[1446]: time="2025-01-13T21:39:13.803434440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803491 containerd[1446]: time="2025-01-13T21:39:13.803447120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803491 containerd[1446]: time="2025-01-13T21:39:13.803456800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803540 containerd[1446]: time="2025-01-13T21:39:13.803524600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803732 containerd[1446]: time="2025-01-13T21:39:13.803705480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803824 containerd[1446]: time="2025-01-13T21:39:13.803806120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 21:39:13.803849 containerd[1446]: time="2025-01-13T21:39:13.803823160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 21:39:13.803922 containerd[1446]: time="2025-01-13T21:39:13.803907760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 21:39:13.803962 containerd[1446]: time="2025-01-13T21:39:13.803950880Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 21:39:13.807766 containerd[1446]: time="2025-01-13T21:39:13.807733920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 21:39:13.807828 containerd[1446]: time="2025-01-13T21:39:13.807789000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 21:39:13.807828 containerd[1446]: time="2025-01-13T21:39:13.807806760Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 21:39:13.807828 containerd[1446]: time="2025-01-13T21:39:13.807820640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 21:39:13.807880 containerd[1446]: time="2025-01-13T21:39:13.807833560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 21:39:13.807984 containerd[1446]: time="2025-01-13T21:39:13.807963600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 21:39:13.808213 containerd[1446]: time="2025-01-13T21:39:13.808184840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 21:39:13.808311 containerd[1446]: time="2025-01-13T21:39:13.808294240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 21:39:13.808334 containerd[1446]: time="2025-01-13T21:39:13.808317640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 21:39:13.808353 containerd[1446]: time="2025-01-13T21:39:13.808330840Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 21:39:13.808353 containerd[1446]: time="2025-01-13T21:39:13.808344680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808391 containerd[1446]: time="2025-01-13T21:39:13.808357040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808391 containerd[1446]: time="2025-01-13T21:39:13.808369560Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808391 containerd[1446]: time="2025-01-13T21:39:13.808383360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808440 containerd[1446]: time="2025-01-13T21:39:13.808396960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808440 containerd[1446]: time="2025-01-13T21:39:13.808414480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808440 containerd[1446]: time="2025-01-13T21:39:13.808426280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808440 containerd[1446]: time="2025-01-13T21:39:13.808436880Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 21:39:13.808508 containerd[1446]: time="2025-01-13T21:39:13.808455440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808508 containerd[1446]: time="2025-01-13T21:39:13.808468360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808508 containerd[1446]: time="2025-01-13T21:39:13.808490000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808508 containerd[1446]: time="2025-01-13T21:39:13.808502440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808516920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808529960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808541080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808553200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808565240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808580 containerd[1446]: time="2025-01-13T21:39:13.808579120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808590840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808602640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808619600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808635080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808652960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808664720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808683 containerd[1446]: time="2025-01-13T21:39:13.808675240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 21:39:13.808801 containerd[1446]: time="2025-01-13T21:39:13.808778800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 21:39:13.808801 containerd[1446]: time="2025-01-13T21:39:13.808793840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 21:39:13.808838 containerd[1446]: time="2025-01-13T21:39:13.808803680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 21:39:13.808838 containerd[1446]: time="2025-01-13T21:39:13.808815720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 21:39:13.808838 containerd[1446]: time="2025-01-13T21:39:13.808824720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.808838 containerd[1446]: time="2025-01-13T21:39:13.808836920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 21:39:13.808908 containerd[1446]: time="2025-01-13T21:39:13.808849440Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 21:39:13.808908 containerd[1446]: time="2025-01-13T21:39:13.808862400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:39:13.809277 containerd[1446]: time="2025-01-13T21:39:13.809211800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:39:13.809277 containerd[1446]: time="2025-01-13T21:39:13.809274640Z" level=info msg="Connect containerd service"
Jan 13 21:39:13.809417 containerd[1446]: time="2025-01-13T21:39:13.809368720Z" level=info msg="using legacy CRI server"
Jan 13 21:39:13.809417 containerd[1446]: time="2025-01-13T21:39:13.809375720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:39:13.809479 containerd[1446]: time="2025-01-13T21:39:13.809462480Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:39:13.810208 containerd[1446]: time="2025-01-13T21:39:13.810182920Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:39:13.810460 containerd[1446]: time="2025-01-13T21:39:13.810416680Z" level=info msg="Start subscribing containerd event"
Jan 13 21:39:13.810489 containerd[1446]: time="2025-01-13T21:39:13.810475680Z" level=info msg="Start recovering state"
Jan 13 21:39:13.810554 containerd[1446]: time="2025-01-13T21:39:13.810541400Z" level=info msg="Start event monitor"
Jan 13 21:39:13.810579 containerd[1446]: time="2025-01-13T21:39:13.810557280Z" level=info msg="Start snapshots syncer"
Jan 13 21:39:13.810579 containerd[1446]: time="2025-01-13T21:39:13.810566720Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:39:13.810579 containerd[1446]: time="2025-01-13T21:39:13.810573400Z" level=info msg="Start streaming server"
Jan 13 21:39:13.810950 containerd[1446]: time="2025-01-13T21:39:13.810929240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:39:13.811004 containerd[1446]: time="2025-01-13T21:39:13.810992160Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:39:13.811209 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:39:13.813341 containerd[1446]: time="2025-01-13T21:39:13.813305760Z" level=info msg="containerd successfully booted in 0.036907s"
Jan 13 21:39:13.924330 sshd_keygen[1434]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:39:13.943000 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:39:13.957562 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 21:39:13.962409 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 21:39:13.962597 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 21:39:13.965844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 21:39:13.980198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 21:39:13.983325 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 21:39:13.985306 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 21:39:13.986571 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 21:39:15.340995 systemd-networkd[1386]: eth0: Gained IPv6LL
Jan 13 21:39:15.343509 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:39:15.345274 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:39:15.354353 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:39:15.356789 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:39:15.358944 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:39:15.374503 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:39:15.374678 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:39:15.376589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:39:15.380570 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:39:15.850095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:39:15.851702 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:39:15.852929 systemd[1]: Startup finished in 559ms (kernel) + 4.264s (initrd) + 3.912s (userspace) = 8.736s.
Jan 13 21:39:15.854928 (kubelet)[1521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:39:16.330753 kubelet[1521]: E0113 21:39:16.330644 1521 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:39:16.333497 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:39:16.333644 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:39:21.124736 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:39:21.125882 systemd[1]: Started sshd@0-10.0.0.155:22-10.0.0.1:53166.service - OpenSSH per-connection server daemon (10.0.0.1:53166).
Jan 13 21:39:21.174255 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 53166 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.175962 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.187538 systemd-logind[1429]: New session 1 of user core.
Jan 13 21:39:21.188523 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:39:21.200292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:39:21.212091 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:39:21.214245 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:39:21.220395 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:39:21.299124 systemd[1540]: Queued start job for default target default.target.
Jan 13 21:39:21.308062 systemd[1540]: Created slice app.slice - User Application Slice.
Jan 13 21:39:21.308109 systemd[1540]: Reached target paths.target - Paths.
Jan 13 21:39:21.308122 systemd[1540]: Reached target timers.target - Timers.
Jan 13 21:39:21.309316 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:39:21.318601 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:39:21.318663 systemd[1540]: Reached target sockets.target - Sockets.
Jan 13 21:39:21.318675 systemd[1540]: Reached target basic.target - Basic System.
Jan 13 21:39:21.318710 systemd[1540]: Reached target default.target - Main User Target.
Jan 13 21:39:21.318736 systemd[1540]: Startup finished in 93ms.
Jan 13 21:39:21.318879 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:39:21.320050 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:39:21.383841 systemd[1]: Started sshd@1-10.0.0.155:22-10.0.0.1:53182.service - OpenSSH per-connection server daemon (10.0.0.1:53182).
Jan 13 21:39:21.418947 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 53182 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.420263 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.424188 systemd-logind[1429]: New session 2 of user core.
Jan 13 21:39:21.436255 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:39:21.487626 sshd[1551]: pam_unix(sshd:session): session closed for user core
Jan 13 21:39:21.497213 systemd[1]: sshd@1-10.0.0.155:22-10.0.0.1:53182.service: Deactivated successfully.
Jan 13 21:39:21.498512 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:39:21.499631 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:39:21.500714 systemd[1]: Started sshd@2-10.0.0.155:22-10.0.0.1:53198.service - OpenSSH per-connection server daemon (10.0.0.1:53198).
Jan 13 21:39:21.501392 systemd-logind[1429]: Removed session 2.
Jan 13 21:39:21.536171 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 53198 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.537264 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.540903 systemd-logind[1429]: New session 3 of user core.
Jan 13 21:39:21.551166 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:39:21.599679 sshd[1558]: pam_unix(sshd:session): session closed for user core
Jan 13 21:39:21.608253 systemd[1]: sshd@2-10.0.0.155:22-10.0.0.1:53198.service: Deactivated successfully.
Jan 13 21:39:21.610451 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:39:21.611582 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:39:21.612659 systemd[1]: Started sshd@3-10.0.0.155:22-10.0.0.1:53210.service - OpenSSH per-connection server daemon (10.0.0.1:53210).
Jan 13 21:39:21.613397 systemd-logind[1429]: Removed session 3.
Jan 13 21:39:21.648533 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 53210 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.649639 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.652987 systemd-logind[1429]: New session 4 of user core.
Jan 13 21:39:21.660176 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:39:21.711106 sshd[1565]: pam_unix(sshd:session): session closed for user core
Jan 13 21:39:21.719204 systemd[1]: sshd@3-10.0.0.155:22-10.0.0.1:53210.service: Deactivated successfully.
Jan 13 21:39:21.720615 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:39:21.721785 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:39:21.722888 systemd[1]: Started sshd@4-10.0.0.155:22-10.0.0.1:53218.service - OpenSSH per-connection server daemon (10.0.0.1:53218).
Jan 13 21:39:21.723629 systemd-logind[1429]: Removed session 4.
Jan 13 21:39:21.758221 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 53218 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.759364 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.763304 systemd-logind[1429]: New session 5 of user core.
Jan 13 21:39:21.778246 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:39:21.837768 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 21:39:21.839885 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:39:21.852793 sudo[1575]: pam_unix(sudo:session): session closed for user root
Jan 13 21:39:21.854414 sshd[1572]: pam_unix(sshd:session): session closed for user core
Jan 13 21:39:21.872510 systemd[1]: sshd@4-10.0.0.155:22-10.0.0.1:53218.service: Deactivated successfully.
Jan 13 21:39:21.875288 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 21:39:21.876488 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit.
Jan 13 21:39:21.886357 systemd[1]: Started sshd@5-10.0.0.155:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226).
Jan 13 21:39:21.887154 systemd-logind[1429]: Removed session 5.
Jan 13 21:39:21.918743 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg
Jan 13 21:39:21.920204 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:39:21.924116 systemd-logind[1429]: New session 6 of user core.
Jan 13 21:39:21.937253 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 21:39:21.988587 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 13 21:39:21.988852 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:39:21.992257 sudo[1584]: pam_unix(sudo:session): session closed for user root Jan 13 21:39:21.996696 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 13 21:39:21.996961 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:39:22.013351 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 13 21:39:22.014403 auditctl[1587]: No rules Jan 13 21:39:22.015214 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 21:39:22.016139 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 13 21:39:22.017838 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 13 21:39:22.039978 augenrules[1605]: No rules Jan 13 21:39:22.041076 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 13 21:39:22.042320 sudo[1583]: pam_unix(sudo:session): session closed for user root Jan 13 21:39:22.044131 sshd[1580]: pam_unix(sshd:session): session closed for user core Jan 13 21:39:22.056222 systemd[1]: sshd@5-10.0.0.155:22-10.0.0.1:53226.service: Deactivated successfully. Jan 13 21:39:22.057566 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:39:22.060208 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:39:22.061983 systemd[1]: Started sshd@6-10.0.0.155:22-10.0.0.1:53236.service - OpenSSH per-connection server daemon (10.0.0.1:53236). Jan 13 21:39:22.063176 systemd-logind[1429]: Removed session 6. 
Jan 13 21:39:22.098417 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 53236 ssh2: RSA SHA256:yd4gyStb+mhc+KSvOhXa4vXVFZWeTXZvH887VPDApJg Jan 13 21:39:22.099586 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:39:22.102932 systemd-logind[1429]: New session 7 of user core. Jan 13 21:39:22.113174 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:39:22.163091 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 13 21:39:22.163382 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 13 21:39:22.180528 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 13 21:39:22.194608 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 13 21:39:22.194823 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 13 21:39:22.690112 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:39:22.703270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:39:22.719358 systemd[1]: Reloading requested from client PID 1664 ('systemctl') (unit session-7.scope)... Jan 13 21:39:22.719376 systemd[1]: Reloading... Jan 13 21:39:22.786148 zram_generator::config[1699]: No configuration found. Jan 13 21:39:23.023801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:39:23.074661 systemd[1]: Reloading finished in 354 ms. Jan 13 21:39:23.116710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:39:23.119946 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:39:23.120140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 13 21:39:23.121458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:39:23.210246 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:39:23.214705 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:39:23.250794 kubelet[1749]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:39:23.250794 kubelet[1749]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:39:23.250794 kubelet[1749]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 13 21:39:23.251172 kubelet[1749]: I0113 21:39:23.250975 1749 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:39:24.187309 kubelet[1749]: I0113 21:39:24.187269 1749 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 13 21:39:24.187309 kubelet[1749]: I0113 21:39:24.187305 1749 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:39:24.188002 kubelet[1749]: I0113 21:39:24.187717 1749 server.go:927] "Client rotation is on, will bootstrap in background" Jan 13 21:39:24.234354 kubelet[1749]: I0113 21:39:24.234322 1749 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:39:24.241681 kubelet[1749]: I0113 21:39:24.241652 1749 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 13 21:39:24.242937 kubelet[1749]: I0113 21:39:24.242887 1749 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:39:24.243120 kubelet[1749]: I0113 21:39:24.242932 1749 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.155","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:39:24.243215 kubelet[1749]: I0113 21:39:24.243183 1749 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:39:24.243215 kubelet[1749]: I0113 21:39:24.243194 1749 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:39:24.243400 kubelet[1749]: I0113 21:39:24.243373 1749 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:39:24.244480 kubelet[1749]: I0113 21:39:24.244449 1749 kubelet.go:400] "Attempting to sync node with 
API server" Jan 13 21:39:24.244480 kubelet[1749]: I0113 21:39:24.244470 1749 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:39:24.244836 kubelet[1749]: I0113 21:39:24.244739 1749 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:39:24.244836 kubelet[1749]: I0113 21:39:24.244814 1749 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:39:24.245063 kubelet[1749]: E0113 21:39:24.244917 1749 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:24.245063 kubelet[1749]: E0113 21:39:24.245001 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:24.246242 kubelet[1749]: I0113 21:39:24.246203 1749 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:39:24.246620 kubelet[1749]: I0113 21:39:24.246589 1749 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:39:24.246709 kubelet[1749]: W0113 21:39:24.246696 1749 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 13 21:39:24.247506 kubelet[1749]: I0113 21:39:24.247481 1749 server.go:1264] "Started kubelet" Jan 13 21:39:24.248264 kubelet[1749]: I0113 21:39:24.247792 1749 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:39:24.248264 kubelet[1749]: I0113 21:39:24.248131 1749 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:39:24.249296 kubelet[1749]: I0113 21:39:24.248972 1749 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:39:24.249296 kubelet[1749]: I0113 21:39:24.249012 1749 server.go:455] "Adding debug handlers to kubelet server" Jan 13 21:39:24.250152 kubelet[1749]: I0113 21:39:24.249932 1749 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:39:24.250152 kubelet[1749]: I0113 21:39:24.250103 1749 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:39:24.251404 kubelet[1749]: I0113 21:39:24.251361 1749 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 13 21:39:24.251662 kubelet[1749]: I0113 21:39:24.251457 1749 reconciler.go:26] "Reconciler: start to sync state" Jan 13 21:39:24.258137 kubelet[1749]: W0113 21:39:24.256967 1749 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:39:24.258137 kubelet[1749]: I0113 21:39:24.256987 1749 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:39:24.258137 kubelet[1749]: E0113 21:39:24.258075 1749 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:39:24.258266 kubelet[1749]: E0113 21:39:24.257003 1749 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 13 21:39:24.259745 kubelet[1749]: I0113 21:39:24.259699 1749 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:39:24.259745 kubelet[1749]: I0113 21:39:24.259736 1749 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:39:24.268817 kubelet[1749]: I0113 21:39:24.268781 1749 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:39:24.269213 kubelet[1749]: I0113 21:39:24.268795 1749 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:39:24.269213 kubelet[1749]: I0113 21:39:24.269009 1749 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:39:24.270072 kubelet[1749]: E0113 21:39:24.269696 1749 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.155.181a5e5e075166ea default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.155,UID:10.0.0.155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.155,},FirstTimestamp:2025-01-13 21:39:24.247459562 +0000 UTC m=+1.029711567,LastTimestamp:2025-01-13 21:39:24.247459562 +0000 UTC m=+1.029711567,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.155,}" Jan 13 21:39:24.270196 kubelet[1749]: 
E0113 21:39:24.270145 1749 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.155\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 13 21:39:24.270228 kubelet[1749]: W0113 21:39:24.270218 1749 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.155" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:39:24.270288 kubelet[1749]: E0113 21:39:24.270238 1749 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.155" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 13 21:39:24.270411 kubelet[1749]: W0113 21:39:24.270341 1749 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:39:24.270411 kubelet[1749]: E0113 21:39:24.270355 1749 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 13 21:39:24.270861 kubelet[1749]: I0113 21:39:24.270804 1749 policy_none.go:49] "None policy: Start" Jan 13 21:39:24.272188 kubelet[1749]: I0113 21:39:24.272161 1749 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:39:24.272188 kubelet[1749]: I0113 21:39:24.272186 1749 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:39:24.275559 kubelet[1749]: E0113 21:39:24.275467 1749 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.155.181a5e5e07f32fd0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.155,UID:10.0.0.155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.155,},FirstTimestamp:2025-01-13 21:39:24.258062288 +0000 UTC m=+1.040314292,LastTimestamp:2025-01-13 21:39:24.258062288 +0000 UTC m=+1.040314292,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.155,}" Jan 13 21:39:24.277868 kubelet[1749]: E0113 21:39:24.277780 1749 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.155.181a5e5e088faa71 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.155,UID:10.0.0.155,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.155 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.155,},FirstTimestamp:2025-01-13 21:39:24.268317297 +0000 UTC m=+1.050569302,LastTimestamp:2025-01-13 21:39:24.268317297 +0000 UTC m=+1.050569302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.155,}" Jan 13 21:39:24.281148 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 13 21:39:24.296900 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Jan 13 21:39:24.299815 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 13 21:39:24.302485 kubelet[1749]: I0113 21:39:24.302447 1749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:39:24.303499 kubelet[1749]: I0113 21:39:24.303473 1749 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:39:24.303572 kubelet[1749]: I0113 21:39:24.303562 1749 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:39:24.303598 kubelet[1749]: I0113 21:39:24.303583 1749 kubelet.go:2337] "Starting kubelet main sync loop" Jan 13 21:39:24.303639 kubelet[1749]: E0113 21:39:24.303623 1749 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:39:24.305954 kubelet[1749]: I0113 21:39:24.305893 1749 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:39:24.306209 kubelet[1749]: I0113 21:39:24.306100 1749 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 13 21:39:24.306209 kubelet[1749]: I0113 21:39:24.306192 1749 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:39:24.307158 kubelet[1749]: E0113 21:39:24.307131 1749 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.155\" not found" Jan 13 21:39:24.353082 kubelet[1749]: I0113 21:39:24.353050 1749 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.155" Jan 13 21:39:24.358335 kubelet[1749]: I0113 21:39:24.358300 1749 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.155" Jan 13 21:39:24.369613 kubelet[1749]: E0113 21:39:24.369569 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.0.0.155\" not found" Jan 13 21:39:24.470646 kubelet[1749]: E0113 21:39:24.470545 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:24.572939 kubelet[1749]: E0113 21:39:24.571071 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:24.671586 kubelet[1749]: E0113 21:39:24.671525 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:24.730741 sudo[1616]: pam_unix(sudo:session): session closed for user root Jan 13 21:39:24.732421 sshd[1613]: pam_unix(sshd:session): session closed for user core Jan 13 21:39:24.735637 systemd[1]: sshd@6-10.0.0.155:22-10.0.0.1:53236.service: Deactivated successfully. Jan 13 21:39:24.738279 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:39:24.741016 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Jan 13 21:39:24.742090 systemd-logind[1429]: Removed session 7. 
Jan 13 21:39:24.772301 kubelet[1749]: E0113 21:39:24.772260 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:24.872659 kubelet[1749]: E0113 21:39:24.872621 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:24.973130 kubelet[1749]: E0113 21:39:24.973100 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:25.073591 kubelet[1749]: E0113 21:39:25.073502 1749 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.155\" not found" Jan 13 21:39:25.175016 kubelet[1749]: I0113 21:39:25.174984 1749 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 13 21:39:25.175397 containerd[1446]: time="2025-01-13T21:39:25.175297410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 13 21:39:25.175784 kubelet[1749]: I0113 21:39:25.175484 1749 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 13 21:39:25.190426 kubelet[1749]: I0113 21:39:25.190397 1749 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 13 21:39:25.190570 kubelet[1749]: W0113 21:39:25.190542 1749 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:39:25.190570 kubelet[1749]: W0113 21:39:25.190544 1749 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 13 21:39:25.245774 kubelet[1749]: E0113 21:39:25.245737 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:25.245892 kubelet[1749]: I0113 21:39:25.245795 1749 apiserver.go:52] "Watching apiserver" Jan 13 21:39:25.251923 kubelet[1749]: I0113 21:39:25.251875 1749 topology_manager.go:215] "Topology Admit Handler" podUID="dc210349-8d96-4c3e-b683-eb0d6fc2c401" podNamespace="kube-system" podName="kube-proxy-2zg8r" Jan 13 21:39:25.252210 kubelet[1749]: I0113 21:39:25.251982 1749 topology_manager.go:215] "Topology Admit Handler" podUID="b1704612-6015-4be5-987f-81bb3776c171" podNamespace="kube-system" podName="cilium-qcqr2" Jan 13 21:39:25.267495 systemd[1]: Created slice kubepods-burstable-podb1704612_6015_4be5_987f_81bb3776c171.slice - libcontainer container kubepods-burstable-podb1704612_6015_4be5_987f_81bb3776c171.slice. 
Jan 13 21:39:25.280263 systemd[1]: Created slice kubepods-besteffort-poddc210349_8d96_4c3e_b683_eb0d6fc2c401.slice - libcontainer container kubepods-besteffort-poddc210349_8d96_4c3e_b683_eb0d6fc2c401.slice. Jan 13 21:39:25.352898 kubelet[1749]: I0113 21:39:25.352753 1749 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 13 21:39:25.362055 kubelet[1749]: I0113 21:39:25.361635 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-kernel\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362055 kubelet[1749]: I0113 21:39:25.362016 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-hubble-tls\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362375 kubelet[1749]: I0113 21:39:25.362212 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc210349-8d96-4c3e-b683-eb0d6fc2c401-lib-modules\") pod \"kube-proxy-2zg8r\" (UID: \"dc210349-8d96-4c3e-b683-eb0d6fc2c401\") " pod="kube-system/kube-proxy-2zg8r" Jan 13 21:39:25.362375 kubelet[1749]: I0113 21:39:25.362248 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xdnc\" (UniqueName: \"kubernetes.io/projected/dc210349-8d96-4c3e-b683-eb0d6fc2c401-kube-api-access-4xdnc\") pod \"kube-proxy-2zg8r\" (UID: \"dc210349-8d96-4c3e-b683-eb0d6fc2c401\") " pod="kube-system/kube-proxy-2zg8r" Jan 13 21:39:25.362375 kubelet[1749]: I0113 21:39:25.362268 1749 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cni-path\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362375 kubelet[1749]: I0113 21:39:25.362286 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1704612-6015-4be5-987f-81bb3776c171-clustermesh-secrets\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362375 kubelet[1749]: I0113 21:39:25.362300 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-net\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362316 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc210349-8d96-4c3e-b683-eb0d6fc2c401-xtables-lock\") pod \"kube-proxy-2zg8r\" (UID: \"dc210349-8d96-4c3e-b683-eb0d6fc2c401\") " pod="kube-system/kube-proxy-2zg8r" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362331 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-hostproc\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362371 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxxx\" (UniqueName: 
\"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-kube-api-access-vnxxx\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362408 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dc210349-8d96-4c3e-b683-eb0d6fc2c401-kube-proxy\") pod \"kube-proxy-2zg8r\" (UID: \"dc210349-8d96-4c3e-b683-eb0d6fc2c401\") " pod="kube-system/kube-proxy-2zg8r" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362446 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-run\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362520 kubelet[1749]: I0113 21:39:25.362497 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-bpf-maps\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362642 kubelet[1749]: I0113 21:39:25.362517 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-cgroup\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362642 kubelet[1749]: I0113 21:39:25.362534 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-xtables-lock\") pod \"cilium-qcqr2\" (UID: 
\"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362642 kubelet[1749]: I0113 21:39:25.362548 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-etc-cni-netd\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362642 kubelet[1749]: I0113 21:39:25.362563 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-lib-modules\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.362642 kubelet[1749]: I0113 21:39:25.362578 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1704612-6015-4be5-987f-81bb3776c171-cilium-config-path\") pod \"cilium-qcqr2\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " pod="kube-system/cilium-qcqr2" Jan 13 21:39:25.579602 kubelet[1749]: E0113 21:39:25.578939 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:25.580503 containerd[1446]: time="2025-01-13T21:39:25.580081973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcqr2,Uid:b1704612-6015-4be5-987f-81bb3776c171,Namespace:kube-system,Attempt:0,}" Jan 13 21:39:25.590830 kubelet[1749]: E0113 21:39:25.590583 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:25.591374 containerd[1446]: time="2025-01-13T21:39:25.591111168Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zg8r,Uid:dc210349-8d96-4c3e-b683-eb0d6fc2c401,Namespace:kube-system,Attempt:0,}" Jan 13 21:39:26.112202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1046800063.mount: Deactivated successfully. Jan 13 21:39:26.118791 containerd[1446]: time="2025-01-13T21:39:26.118740848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:39:26.120009 containerd[1446]: time="2025-01-13T21:39:26.119788794Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:39:26.120785 containerd[1446]: time="2025-01-13T21:39:26.120739000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 13 21:39:26.120848 containerd[1446]: time="2025-01-13T21:39:26.120819530Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:39:26.122336 containerd[1446]: time="2025-01-13T21:39:26.121345640Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:39:26.123201 containerd[1446]: time="2025-01-13T21:39:26.123166204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:39:26.125857 containerd[1446]: time="2025-01-13T21:39:26.125385392Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.193052ms" Jan 13 21:39:26.126876 containerd[1446]: time="2025-01-13T21:39:26.126842037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.661218ms" Jan 13 21:39:26.215841 containerd[1446]: time="2025-01-13T21:39:26.215739681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:26.215841 containerd[1446]: time="2025-01-13T21:39:26.215798527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:26.216381 containerd[1446]: time="2025-01-13T21:39:26.216258495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:26.216927 containerd[1446]: time="2025-01-13T21:39:26.216860137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:26.227401 containerd[1446]: time="2025-01-13T21:39:26.227314731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:26.228043 containerd[1446]: time="2025-01-13T21:39:26.227511624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:26.228043 containerd[1446]: time="2025-01-13T21:39:26.227573654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:26.228043 containerd[1446]: time="2025-01-13T21:39:26.227658457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:26.246775 kubelet[1749]: E0113 21:39:26.246736 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:26.315199 systemd[1]: Started cri-containerd-45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a.scope - libcontainer container 45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a. Jan 13 21:39:26.317676 systemd[1]: Started cri-containerd-a0b75d4095b4e861141e9bfdc3187224e30411a8add15e31599cf809a43dc423.scope - libcontainer container a0b75d4095b4e861141e9bfdc3187224e30411a8add15e31599cf809a43dc423. 
Jan 13 21:39:26.338609 containerd[1446]: time="2025-01-13T21:39:26.336890451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qcqr2,Uid:b1704612-6015-4be5-987f-81bb3776c171,Namespace:kube-system,Attempt:0,} returns sandbox id \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\"" Jan 13 21:39:26.338707 kubelet[1749]: E0113 21:39:26.337844 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:26.339841 containerd[1446]: time="2025-01-13T21:39:26.339809505Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 21:39:26.340401 containerd[1446]: time="2025-01-13T21:39:26.340374590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zg8r,Uid:dc210349-8d96-4c3e-b683-eb0d6fc2c401,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0b75d4095b4e861141e9bfdc3187224e30411a8add15e31599cf809a43dc423\"" Jan 13 21:39:26.340949 kubelet[1749]: E0113 21:39:26.340930 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:27.247240 kubelet[1749]: E0113 21:39:27.247188 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:28.248039 kubelet[1749]: E0113 21:39:28.247975 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:29.248659 kubelet[1749]: E0113 21:39:29.248615 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:30.248916 kubelet[1749]: E0113 21:39:30.248861 1749 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:31.249503 kubelet[1749]: E0113 21:39:31.249469 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:32.250641 kubelet[1749]: E0113 21:39:32.250582 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:33.251668 kubelet[1749]: E0113 21:39:33.251619 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:34.252528 kubelet[1749]: E0113 21:39:34.252474 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:35.252702 kubelet[1749]: E0113 21:39:35.252651 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:36.253523 kubelet[1749]: E0113 21:39:36.253481 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:37.254533 kubelet[1749]: E0113 21:39:37.254503 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:38.255076 kubelet[1749]: E0113 21:39:38.255040 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:38.483841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514260420.mount: Deactivated successfully. 
Jan 13 21:39:39.255642 kubelet[1749]: E0113 21:39:39.255614 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:39.662340 containerd[1446]: time="2025-01-13T21:39:39.662236038Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:39.663405 containerd[1446]: time="2025-01-13T21:39:39.663243793Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651518" Jan 13 21:39:39.664196 containerd[1446]: time="2025-01-13T21:39:39.664159665Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:39.666287 containerd[1446]: time="2025-01-13T21:39:39.666246671Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.32639684s" Jan 13 21:39:39.666287 containerd[1446]: time="2025-01-13T21:39:39.666285764Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 21:39:39.667649 containerd[1446]: time="2025-01-13T21:39:39.667616073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 13 21:39:39.668546 containerd[1446]: time="2025-01-13T21:39:39.668499181Z" level=info msg="CreateContainer 
within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 21:39:39.680864 containerd[1446]: time="2025-01-13T21:39:39.680826638Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\"" Jan 13 21:39:39.681516 containerd[1446]: time="2025-01-13T21:39:39.681491933Z" level=info msg="StartContainer for \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\"" Jan 13 21:39:39.702183 systemd[1]: Started cri-containerd-d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797.scope - libcontainer container d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797. Jan 13 21:39:39.724340 containerd[1446]: time="2025-01-13T21:39:39.724295254Z" level=info msg="StartContainer for \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\" returns successfully" Jan 13 21:39:39.758217 systemd[1]: cri-containerd-d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797.scope: Deactivated successfully. 
Jan 13 21:39:39.876559 containerd[1446]: time="2025-01-13T21:39:39.876484594Z" level=info msg="shim disconnected" id=d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797 namespace=k8s.io Jan 13 21:39:39.876559 containerd[1446]: time="2025-01-13T21:39:39.876550323Z" level=warning msg="cleaning up after shim disconnected" id=d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797 namespace=k8s.io Jan 13 21:39:39.876559 containerd[1446]: time="2025-01-13T21:39:39.876559214Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:40.256946 kubelet[1749]: E0113 21:39:40.256903 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:40.329945 kubelet[1749]: E0113 21:39:40.329914 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:40.331598 containerd[1446]: time="2025-01-13T21:39:40.331561027Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 21:39:40.345393 containerd[1446]: time="2025-01-13T21:39:40.345321862Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\"" Jan 13 21:39:40.345982 containerd[1446]: time="2025-01-13T21:39:40.345949721Z" level=info msg="StartContainer for \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\"" Jan 13 21:39:40.366186 systemd[1]: Started cri-containerd-ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0.scope - libcontainer container ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0. 
Jan 13 21:39:40.389166 containerd[1446]: time="2025-01-13T21:39:40.389047242Z" level=info msg="StartContainer for \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\" returns successfully" Jan 13 21:39:40.399327 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 21:39:40.399547 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:39:40.399611 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:39:40.406326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:39:40.406490 systemd[1]: cri-containerd-ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0.scope: Deactivated successfully. Jan 13 21:39:40.419070 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:39:40.440676 containerd[1446]: time="2025-01-13T21:39:40.440616572Z" level=info msg="shim disconnected" id=ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0 namespace=k8s.io Jan 13 21:39:40.440869 containerd[1446]: time="2025-01-13T21:39:40.440690940Z" level=warning msg="cleaning up after shim disconnected" id=ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0 namespace=k8s.io Jan 13 21:39:40.440869 containerd[1446]: time="2025-01-13T21:39:40.440699950Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:40.676282 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797-rootfs.mount: Deactivated successfully. Jan 13 21:39:40.828641 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778102591.mount: Deactivated successfully. 
Jan 13 21:39:41.018621 containerd[1446]: time="2025-01-13T21:39:41.018499620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:41.019234 containerd[1446]: time="2025-01-13T21:39:41.019190972Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662013" Jan 13 21:39:41.019871 containerd[1446]: time="2025-01-13T21:39:41.019835396Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:41.022349 containerd[1446]: time="2025-01-13T21:39:41.022317232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:41.023105 containerd[1446]: time="2025-01-13T21:39:41.023073611Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 1.355431346s" Jan 13 21:39:41.023146 containerd[1446]: time="2025-01-13T21:39:41.023108687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Jan 13 21:39:41.025349 containerd[1446]: time="2025-01-13T21:39:41.025317602Z" level=info msg="CreateContainer within sandbox \"a0b75d4095b4e861141e9bfdc3187224e30411a8add15e31599cf809a43dc423\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:39:41.038954 containerd[1446]: time="2025-01-13T21:39:41.038914406Z" level=info msg="CreateContainer within 
sandbox \"a0b75d4095b4e861141e9bfdc3187224e30411a8add15e31599cf809a43dc423\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f84974024f581f28812d27610d2d849c6ad236aff7db6f9a921a67d4673a5a7a\"" Jan 13 21:39:41.039588 containerd[1446]: time="2025-01-13T21:39:41.039562513Z" level=info msg="StartContainer for \"f84974024f581f28812d27610d2d849c6ad236aff7db6f9a921a67d4673a5a7a\"" Jan 13 21:39:41.064170 systemd[1]: Started cri-containerd-f84974024f581f28812d27610d2d849c6ad236aff7db6f9a921a67d4673a5a7a.scope - libcontainer container f84974024f581f28812d27610d2d849c6ad236aff7db6f9a921a67d4673a5a7a. Jan 13 21:39:41.086730 containerd[1446]: time="2025-01-13T21:39:41.086569647Z" level=info msg="StartContainer for \"f84974024f581f28812d27610d2d849c6ad236aff7db6f9a921a67d4673a5a7a\" returns successfully" Jan 13 21:39:41.257063 kubelet[1749]: E0113 21:39:41.257003 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:41.332713 kubelet[1749]: E0113 21:39:41.332626 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:41.336056 kubelet[1749]: E0113 21:39:41.336030 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:41.337889 containerd[1446]: time="2025-01-13T21:39:41.337714746Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 21:39:41.350519 containerd[1446]: time="2025-01-13T21:39:41.350472045Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns 
container id \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\"" Jan 13 21:39:41.351189 containerd[1446]: time="2025-01-13T21:39:41.351145618Z" level=info msg="StartContainer for \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\"" Jan 13 21:39:41.354740 kubelet[1749]: I0113 21:39:41.354688 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2zg8r" podStartSLOduration=2.671918895 podStartE2EDuration="17.354670289s" podCreationTimestamp="2025-01-13 21:39:24 +0000 UTC" firstStartedPulling="2025-01-13 21:39:26.341327012 +0000 UTC m=+3.123578976" lastFinishedPulling="2025-01-13 21:39:41.024078406 +0000 UTC m=+17.806330370" observedRunningTime="2025-01-13 21:39:41.340965294 +0000 UTC m=+18.123217298" watchObservedRunningTime="2025-01-13 21:39:41.354670289 +0000 UTC m=+18.136922293" Jan 13 21:39:41.376172 systemd[1]: Started cri-containerd-64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50.scope - libcontainer container 64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50. Jan 13 21:39:41.396956 containerd[1446]: time="2025-01-13T21:39:41.396915398Z" level=info msg="StartContainer for \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\" returns successfully" Jan 13 21:39:41.411139 systemd[1]: cri-containerd-64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50.scope: Deactivated successfully. 
Jan 13 21:39:41.540327 containerd[1446]: time="2025-01-13T21:39:41.540268560Z" level=info msg="shim disconnected" id=64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50 namespace=k8s.io Jan 13 21:39:41.540327 containerd[1446]: time="2025-01-13T21:39:41.540321894Z" level=warning msg="cleaning up after shim disconnected" id=64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50 namespace=k8s.io Jan 13 21:39:41.540327 containerd[1446]: time="2025-01-13T21:39:41.540330023Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:42.257337 kubelet[1749]: E0113 21:39:42.257298 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:42.339384 kubelet[1749]: E0113 21:39:42.339191 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:42.339384 kubelet[1749]: E0113 21:39:42.339314 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:42.341086 containerd[1446]: time="2025-01-13T21:39:42.341054408Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 21:39:42.356323 containerd[1446]: time="2025-01-13T21:39:42.356271924Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\"" Jan 13 21:39:42.356898 containerd[1446]: time="2025-01-13T21:39:42.356868141Z" level=info msg="StartContainer for 
\"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\"" Jan 13 21:39:42.384166 systemd[1]: Started cri-containerd-9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033.scope - libcontainer container 9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033. Jan 13 21:39:42.401793 systemd[1]: cri-containerd-9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033.scope: Deactivated successfully. Jan 13 21:39:42.404340 containerd[1446]: time="2025-01-13T21:39:42.404298890Z" level=info msg="StartContainer for \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\" returns successfully" Jan 13 21:39:42.421177 containerd[1446]: time="2025-01-13T21:39:42.421125416Z" level=info msg="shim disconnected" id=9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033 namespace=k8s.io Jan 13 21:39:42.421332 containerd[1446]: time="2025-01-13T21:39:42.421315387Z" level=warning msg="cleaning up after shim disconnected" id=9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033 namespace=k8s.io Jan 13 21:39:42.421391 containerd[1446]: time="2025-01-13T21:39:42.421378604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:39:42.675584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033-rootfs.mount: Deactivated successfully. 
Jan 13 21:39:43.258180 kubelet[1749]: E0113 21:39:43.258142 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:43.342581 kubelet[1749]: E0113 21:39:43.342541 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:43.344702 containerd[1446]: time="2025-01-13T21:39:43.344661658Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 21:39:43.358940 containerd[1446]: time="2025-01-13T21:39:43.358894524Z" level=info msg="CreateContainer within sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\"" Jan 13 21:39:43.359759 containerd[1446]: time="2025-01-13T21:39:43.359728102Z" level=info msg="StartContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\"" Jan 13 21:39:43.382463 systemd[1]: Started cri-containerd-25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140.scope - libcontainer container 25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140. 
Jan 13 21:39:43.407193 containerd[1446]: time="2025-01-13T21:39:43.407150825Z" level=info msg="StartContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" returns successfully" Jan 13 21:39:43.523778 kubelet[1749]: I0113 21:39:43.523657 1749 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:39:43.913079 kernel: Initializing XFRM netlink socket Jan 13 21:39:44.245662 kubelet[1749]: E0113 21:39:44.245614 1749 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:44.258921 kubelet[1749]: E0113 21:39:44.258882 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:44.347216 kubelet[1749]: E0113 21:39:44.347175 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:44.360089 kubelet[1749]: I0113 21:39:44.359812 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qcqr2" podStartSLOduration=7.031659904 podStartE2EDuration="20.35979597s" podCreationTimestamp="2025-01-13 21:39:24 +0000 UTC" firstStartedPulling="2025-01-13 21:39:26.338819599 +0000 UTC m=+3.121071603" lastFinishedPulling="2025-01-13 21:39:39.666955705 +0000 UTC m=+16.449207669" observedRunningTime="2025-01-13 21:39:44.359593831 +0000 UTC m=+21.141845835" watchObservedRunningTime="2025-01-13 21:39:44.35979597 +0000 UTC m=+21.142047974" Jan 13 21:39:45.259158 kubelet[1749]: E0113 21:39:45.259113 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:45.348858 kubelet[1749]: E0113 21:39:45.348827 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:45.529185 systemd-networkd[1386]: cilium_host: Link UP Jan 13 21:39:45.529349 systemd-networkd[1386]: cilium_net: Link UP Jan 13 21:39:45.529472 systemd-networkd[1386]: cilium_net: Gained carrier Jan 13 21:39:45.529587 systemd-networkd[1386]: cilium_host: Gained carrier Jan 13 21:39:45.603362 systemd-networkd[1386]: cilium_vxlan: Link UP Jan 13 21:39:45.603369 systemd-networkd[1386]: cilium_vxlan: Gained carrier Jan 13 21:39:45.861165 systemd-networkd[1386]: cilium_host: Gained IPv6LL Jan 13 21:39:45.908051 kernel: NET: Registered PF_ALG protocol family Jan 13 21:39:45.995366 systemd-networkd[1386]: cilium_net: Gained IPv6LL Jan 13 21:39:46.015594 kubelet[1749]: I0113 21:39:46.015543 1749 topology_manager.go:215] "Topology Admit Handler" podUID="aa5c5a53-1574-4df9-b971-aa4b4f303331" podNamespace="default" podName="nginx-deployment-85f456d6dd-thzsc" Jan 13 21:39:46.020604 systemd[1]: Created slice kubepods-besteffort-podaa5c5a53_1574_4df9_b971_aa4b4f303331.slice - libcontainer container kubepods-besteffort-podaa5c5a53_1574_4df9_b971_aa4b4f303331.slice. 
Jan 13 21:39:46.095338 kubelet[1749]: I0113 21:39:46.095284 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8794\" (UniqueName: \"kubernetes.io/projected/aa5c5a53-1574-4df9-b971-aa4b4f303331-kube-api-access-v8794\") pod \"nginx-deployment-85f456d6dd-thzsc\" (UID: \"aa5c5a53-1574-4df9-b971-aa4b4f303331\") " pod="default/nginx-deployment-85f456d6dd-thzsc" Jan 13 21:39:46.259500 kubelet[1749]: E0113 21:39:46.259385 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:46.323715 containerd[1446]: time="2025-01-13T21:39:46.323669177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-thzsc,Uid:aa5c5a53-1574-4df9-b971-aa4b4f303331,Namespace:default,Attempt:0,}" Jan 13 21:39:46.351965 kubelet[1749]: E0113 21:39:46.351933 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:46.477765 systemd-networkd[1386]: lxc_health: Link UP Jan 13 21:39:46.487628 systemd-networkd[1386]: lxc_health: Gained carrier Jan 13 21:39:46.888115 systemd-networkd[1386]: lxc01695ebabf47: Link UP Jan 13 21:39:46.894340 kernel: eth0: renamed from tmpe4d76 Jan 13 21:39:46.900822 systemd-networkd[1386]: lxc01695ebabf47: Gained carrier Jan 13 21:39:47.148133 systemd-networkd[1386]: cilium_vxlan: Gained IPv6LL Jan 13 21:39:47.260121 kubelet[1749]: E0113 21:39:47.260069 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:47.851257 systemd-networkd[1386]: lxc_health: Gained IPv6LL Jan 13 21:39:47.944048 kubelet[1749]: E0113 21:39:47.943922 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 13 21:39:48.261244 kubelet[1749]: E0113 21:39:48.261134 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:48.354507 kubelet[1749]: E0113 21:39:48.354443 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:39:48.812139 systemd-networkd[1386]: lxc01695ebabf47: Gained IPv6LL Jan 13 21:39:49.261686 kubelet[1749]: E0113 21:39:49.261528 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:50.261830 kubelet[1749]: E0113 21:39:50.261779 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:50.368131 containerd[1446]: time="2025-01-13T21:39:50.368052139Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:50.368131 containerd[1446]: time="2025-01-13T21:39:50.368100204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:50.368131 containerd[1446]: time="2025-01-13T21:39:50.368110850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:50.368530 containerd[1446]: time="2025-01-13T21:39:50.368184408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:50.391162 systemd[1]: Started cri-containerd-e4d7627d2402c2a6ea09e20a48dda04806a003214191f59ee24a0df92b2497b8.scope - libcontainer container e4d7627d2402c2a6ea09e20a48dda04806a003214191f59ee24a0df92b2497b8. 
Jan 13 21:39:50.402724 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:39:50.416684 containerd[1446]: time="2025-01-13T21:39:50.416643623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-thzsc,Uid:aa5c5a53-1574-4df9-b971-aa4b4f303331,Namespace:default,Attempt:0,} returns sandbox id \"e4d7627d2402c2a6ea09e20a48dda04806a003214191f59ee24a0df92b2497b8\"" Jan 13 21:39:50.418165 containerd[1446]: time="2025-01-13T21:39:50.418141489Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:39:51.261987 kubelet[1749]: E0113 21:39:51.261943 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:51.974597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1675035621.mount: Deactivated successfully. Jan 13 21:39:52.262581 kubelet[1749]: E0113 21:39:52.262142 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:52.702043 containerd[1446]: time="2025-01-13T21:39:52.700816127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:52.702043 containerd[1446]: time="2025-01-13T21:39:52.701730637Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045" Jan 13 21:39:52.702414 containerd[1446]: time="2025-01-13T21:39:52.702308549Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:52.704617 containerd[1446]: time="2025-01-13T21:39:52.704571452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:39:52.705960 containerd[1446]: time="2025-01-13T21:39:52.705616783Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 2.287442917s" Jan 13 21:39:52.705960 containerd[1446]: time="2025-01-13T21:39:52.705648998Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 21:39:52.707944 containerd[1446]: time="2025-01-13T21:39:52.707917824Z" level=info msg="CreateContainer within sandbox \"e4d7627d2402c2a6ea09e20a48dda04806a003214191f59ee24a0df92b2497b8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 13 21:39:52.717612 containerd[1446]: time="2025-01-13T21:39:52.717533181Z" level=info msg="CreateContainer within sandbox \"e4d7627d2402c2a6ea09e20a48dda04806a003214191f59ee24a0df92b2497b8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"e33a09fbffc8370f8e0dedb13404e33bc035a56cc34c8f7231b4bc6884afced1\"" Jan 13 21:39:52.718750 containerd[1446]: time="2025-01-13T21:39:52.718039259Z" level=info msg="StartContainer for \"e33a09fbffc8370f8e0dedb13404e33bc035a56cc34c8f7231b4bc6884afced1\"" Jan 13 21:39:52.745176 systemd[1]: Started cri-containerd-e33a09fbffc8370f8e0dedb13404e33bc035a56cc34c8f7231b4bc6884afced1.scope - libcontainer container e33a09fbffc8370f8e0dedb13404e33bc035a56cc34c8f7231b4bc6884afced1. 
Jan 13 21:39:52.768553 containerd[1446]: time="2025-01-13T21:39:52.768511732Z" level=info msg="StartContainer for \"e33a09fbffc8370f8e0dedb13404e33bc035a56cc34c8f7231b4bc6884afced1\" returns successfully" Jan 13 21:39:53.263281 kubelet[1749]: E0113 21:39:53.263248 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:53.373186 kubelet[1749]: I0113 21:39:53.373130 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-thzsc" podStartSLOduration=5.084183898 podStartE2EDuration="7.373116375s" podCreationTimestamp="2025-01-13 21:39:46 +0000 UTC" firstStartedPulling="2025-01-13 21:39:50.417816158 +0000 UTC m=+27.200068122" lastFinishedPulling="2025-01-13 21:39:52.706748595 +0000 UTC m=+29.489000599" observedRunningTime="2025-01-13 21:39:53.372594503 +0000 UTC m=+30.154846507" watchObservedRunningTime="2025-01-13 21:39:53.373116375 +0000 UTC m=+30.155368380" Jan 13 21:39:54.263927 kubelet[1749]: E0113 21:39:54.263885 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:55.264356 kubelet[1749]: E0113 21:39:55.264279 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:56.265230 kubelet[1749]: E0113 21:39:56.265182 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:57.266132 kubelet[1749]: E0113 21:39:57.266096 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:58.243995 kubelet[1749]: I0113 21:39:58.243892 1749 topology_manager.go:215] "Topology Admit Handler" podUID="880263ab-a0f4-4f57-be35-e4638cda9983" podNamespace="default" podName="nfs-server-provisioner-0" Jan 13 21:39:58.249904 systemd[1]: Created slice 
kubepods-besteffort-pod880263ab_a0f4_4f57_be35_e4638cda9983.slice - libcontainer container kubepods-besteffort-pod880263ab_a0f4_4f57_be35_e4638cda9983.slice. Jan 13 21:39:58.266429 kubelet[1749]: E0113 21:39:58.266393 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:58.354288 kubelet[1749]: I0113 21:39:58.354239 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd95l\" (UniqueName: \"kubernetes.io/projected/880263ab-a0f4-4f57-be35-e4638cda9983-kube-api-access-gd95l\") pod \"nfs-server-provisioner-0\" (UID: \"880263ab-a0f4-4f57-be35-e4638cda9983\") " pod="default/nfs-server-provisioner-0" Jan 13 21:39:58.354288 kubelet[1749]: I0113 21:39:58.354280 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/880263ab-a0f4-4f57-be35-e4638cda9983-data\") pod \"nfs-server-provisioner-0\" (UID: \"880263ab-a0f4-4f57-be35-e4638cda9983\") " pod="default/nfs-server-provisioner-0" Jan 13 21:39:58.553490 containerd[1446]: time="2025-01-13T21:39:58.552996099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:880263ab-a0f4-4f57-be35-e4638cda9983,Namespace:default,Attempt:0,}" Jan 13 21:39:58.576529 systemd-networkd[1386]: lxce104e5a09e82: Link UP Jan 13 21:39:58.584054 kernel: eth0: renamed from tmpf5264 Jan 13 21:39:58.594340 systemd-networkd[1386]: lxce104e5a09e82: Gained carrier Jan 13 21:39:58.771375 containerd[1446]: time="2025-01-13T21:39:58.771194627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:39:58.771375 containerd[1446]: time="2025-01-13T21:39:58.771242964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:39:58.771375 containerd[1446]: time="2025-01-13T21:39:58.771254448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:58.771375 containerd[1446]: time="2025-01-13T21:39:58.771329873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:39:58.796159 systemd[1]: Started cri-containerd-f52645c2bc3625f8dc092fa278d9a15a7ff094492f3e5131e724c9cc94000278.scope - libcontainer container f52645c2bc3625f8dc092fa278d9a15a7ff094492f3e5131e724c9cc94000278. Jan 13 21:39:58.805181 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:39:58.820832 containerd[1446]: time="2025-01-13T21:39:58.820591664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:880263ab-a0f4-4f57-be35-e4638cda9983,Namespace:default,Attempt:0,} returns sandbox id \"f52645c2bc3625f8dc092fa278d9a15a7ff094492f3e5131e724c9cc94000278\"" Jan 13 21:39:58.822042 containerd[1446]: time="2025-01-13T21:39:58.821887148Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 13 21:39:59.117252 update_engine[1435]: I20250113 21:39:59.117101 1435 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:39:59.135079 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2974) Jan 13 21:39:59.151783 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2974) Jan 13 21:39:59.178550 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2974) Jan 13 21:39:59.267184 kubelet[1749]: E0113 21:39:59.267137 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:39:59.627310 systemd-networkd[1386]: lxce104e5a09e82: Gained IPv6LL Jan 13 21:40:00.268642 kubelet[1749]: E0113 21:40:00.267429 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:00.813770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939495302.mount: Deactivated successfully. Jan 13 21:40:01.268429 kubelet[1749]: E0113 21:40:01.268393 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:02.119855 containerd[1446]: time="2025-01-13T21:40:02.119794873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:40:02.120347 containerd[1446]: time="2025-01-13T21:40:02.120308418Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 13 21:40:02.121187 containerd[1446]: time="2025-01-13T21:40:02.121155696Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:40:02.123893 containerd[1446]: time="2025-01-13T21:40:02.123860817Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:40:02.128740 containerd[1446]: time="2025-01-13T21:40:02.128604991Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.306643258s" Jan 13 21:40:02.128740 containerd[1446]: time="2025-01-13T21:40:02.128646042Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 13 21:40:02.130841 containerd[1446]: time="2025-01-13T21:40:02.130809451Z" level=info msg="CreateContainer within sandbox \"f52645c2bc3625f8dc092fa278d9a15a7ff094492f3e5131e724c9cc94000278\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 13 21:40:02.141687 containerd[1446]: time="2025-01-13T21:40:02.141652020Z" level=info msg="CreateContainer within sandbox \"f52645c2bc3625f8dc092fa278d9a15a7ff094492f3e5131e724c9cc94000278\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b6cf5516c02d531b319036e68ee6dc2055156704ea0320c0d9534b3c03aa2b54\"" Jan 13 21:40:02.142089 containerd[1446]: time="2025-01-13T21:40:02.142040849Z" level=info msg="StartContainer for \"b6cf5516c02d531b319036e68ee6dc2055156704ea0320c0d9534b3c03aa2b54\"" Jan 13 21:40:02.217252 systemd[1]: Started cri-containerd-b6cf5516c02d531b319036e68ee6dc2055156704ea0320c0d9534b3c03aa2b54.scope - libcontainer container b6cf5516c02d531b319036e68ee6dc2055156704ea0320c0d9534b3c03aa2b54. 
Jan 13 21:40:02.268656 kubelet[1749]: E0113 21:40:02.268613 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:02.290719 containerd[1446]: time="2025-01-13T21:40:02.290648837Z" level=info msg="StartContainer for \"b6cf5516c02d531b319036e68ee6dc2055156704ea0320c0d9534b3c03aa2b54\" returns successfully" Jan 13 21:40:03.272410 kubelet[1749]: E0113 21:40:03.268725 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:04.245200 kubelet[1749]: E0113 21:40:04.245157 1749 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:04.269501 kubelet[1749]: E0113 21:40:04.269461 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:05.269951 kubelet[1749]: E0113 21:40:05.269901 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:06.270687 kubelet[1749]: E0113 21:40:06.270644 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:07.271403 kubelet[1749]: E0113 21:40:07.271349 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:08.271859 kubelet[1749]: E0113 21:40:08.271818 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:09.272711 kubelet[1749]: E0113 21:40:09.272672 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:10.273179 kubelet[1749]: E0113 21:40:10.273135 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 21:40:11.274136 kubelet[1749]: E0113 21:40:11.274086 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:12.079127 kubelet[1749]: I0113 21:40:12.079077 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.771240116 podStartE2EDuration="14.079058562s" podCreationTimestamp="2025-01-13 21:39:58 +0000 UTC" firstStartedPulling="2025-01-13 21:39:58.821669753 +0000 UTC m=+35.603921717" lastFinishedPulling="2025-01-13 21:40:02.129488159 +0000 UTC m=+38.911740163" observedRunningTime="2025-01-13 21:40:02.389731538 +0000 UTC m=+39.171983542" watchObservedRunningTime="2025-01-13 21:40:12.079058562 +0000 UTC m=+48.861310566" Jan 13 21:40:12.081090 kubelet[1749]: I0113 21:40:12.079432 1749 topology_manager.go:215] "Topology Admit Handler" podUID="fac51733-d7d6-4466-bd59-12b1dadc80e1" podNamespace="default" podName="test-pod-1" Jan 13 21:40:12.084716 systemd[1]: Created slice kubepods-besteffort-podfac51733_d7d6_4466_bd59_12b1dadc80e1.slice - libcontainer container kubepods-besteffort-podfac51733_d7d6_4466_bd59_12b1dadc80e1.slice. 
Jan 13 21:40:12.214915 kubelet[1749]: I0113 21:40:12.214876 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z75zn\" (UniqueName: \"kubernetes.io/projected/fac51733-d7d6-4466-bd59-12b1dadc80e1-kube-api-access-z75zn\") pod \"test-pod-1\" (UID: \"fac51733-d7d6-4466-bd59-12b1dadc80e1\") " pod="default/test-pod-1" Jan 13 21:40:12.214915 kubelet[1749]: I0113 21:40:12.214915 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-314e7acf-8759-41b4-97e8-3de77151584d\" (UniqueName: \"kubernetes.io/nfs/fac51733-d7d6-4466-bd59-12b1dadc80e1-pvc-314e7acf-8759-41b4-97e8-3de77151584d\") pod \"test-pod-1\" (UID: \"fac51733-d7d6-4466-bd59-12b1dadc80e1\") " pod="default/test-pod-1" Jan 13 21:40:12.274415 kubelet[1749]: E0113 21:40:12.274382 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:12.334108 kernel: FS-Cache: Loaded Jan 13 21:40:12.358559 kernel: RPC: Registered named UNIX socket transport module. Jan 13 21:40:12.358647 kernel: RPC: Registered udp transport module. Jan 13 21:40:12.358679 kernel: RPC: Registered tcp transport module. Jan 13 21:40:12.359179 kernel: RPC: Registered tcp-with-tls transport module. Jan 13 21:40:12.360538 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 13 21:40:12.537158 kernel: NFS: Registering the id_resolver key type Jan 13 21:40:12.537332 kernel: Key type id_resolver registered Jan 13 21:40:12.537359 kernel: Key type id_legacy registered Jan 13 21:40:12.563763 nfsidmap[3151]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:40:12.567151 nfsidmap[3154]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 13 21:40:12.688140 containerd[1446]: time="2025-01-13T21:40:12.688045142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fac51733-d7d6-4466-bd59-12b1dadc80e1,Namespace:default,Attempt:0,}" Jan 13 21:40:12.712861 systemd-networkd[1386]: lxcfbe41f879e09: Link UP Jan 13 21:40:12.719075 kernel: eth0: renamed from tmp6191c Jan 13 21:40:12.727942 systemd-networkd[1386]: lxcfbe41f879e09: Gained carrier Jan 13 21:40:12.869387 containerd[1446]: time="2025-01-13T21:40:12.869308975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:40:12.869387 containerd[1446]: time="2025-01-13T21:40:12.869357544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:40:12.869387 containerd[1446]: time="2025-01-13T21:40:12.869378268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:40:12.869568 containerd[1446]: time="2025-01-13T21:40:12.869448521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:40:12.894178 systemd[1]: Started cri-containerd-6191caf1f9cceac3965ba9f17c5e65bd7931e93c1357144d2da8095553193a4d.scope - libcontainer container 6191caf1f9cceac3965ba9f17c5e65bd7931e93c1357144d2da8095553193a4d. 
Jan 13 21:40:12.903104 systemd-resolved[1314]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:40:12.941374 containerd[1446]: time="2025-01-13T21:40:12.941226802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:fac51733-d7d6-4466-bd59-12b1dadc80e1,Namespace:default,Attempt:0,} returns sandbox id \"6191caf1f9cceac3965ba9f17c5e65bd7931e93c1357144d2da8095553193a4d\"" Jan 13 21:40:12.942634 containerd[1446]: time="2025-01-13T21:40:12.942609894Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 21:40:13.245833 containerd[1446]: time="2025-01-13T21:40:13.245543325Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:40:13.246423 containerd[1446]: time="2025-01-13T21:40:13.246395555Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 13 21:40:13.249280 containerd[1446]: time="2025-01-13T21:40:13.249235093Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 306.498496ms" Jan 13 21:40:13.249328 containerd[1446]: time="2025-01-13T21:40:13.249272699Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\"" Jan 13 21:40:13.251064 containerd[1446]: time="2025-01-13T21:40:13.251007844Z" level=info msg="CreateContainer within sandbox \"6191caf1f9cceac3965ba9f17c5e65bd7931e93c1357144d2da8095553193a4d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 13 21:40:13.260880 containerd[1446]: time="2025-01-13T21:40:13.260840569Z" level=info 
msg="CreateContainer within sandbox \"6191caf1f9cceac3965ba9f17c5e65bd7931e93c1357144d2da8095553193a4d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6462c45c3f20c038c0171c9ff5a68a56b9ba4dd28ff6ac2d2878f9dac0d6675d\"" Jan 13 21:40:13.261379 containerd[1446]: time="2025-01-13T21:40:13.261332135Z" level=info msg="StartContainer for \"6462c45c3f20c038c0171c9ff5a68a56b9ba4dd28ff6ac2d2878f9dac0d6675d\"" Jan 13 21:40:13.275277 kubelet[1749]: E0113 21:40:13.275179 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:13.285167 systemd[1]: Started cri-containerd-6462c45c3f20c038c0171c9ff5a68a56b9ba4dd28ff6ac2d2878f9dac0d6675d.scope - libcontainer container 6462c45c3f20c038c0171c9ff5a68a56b9ba4dd28ff6ac2d2878f9dac0d6675d. Jan 13 21:40:13.306173 containerd[1446]: time="2025-01-13T21:40:13.306130193Z" level=info msg="StartContainer for \"6462c45c3f20c038c0171c9ff5a68a56b9ba4dd28ff6ac2d2878f9dac0d6675d\" returns successfully" Jan 13 21:40:13.835689 systemd-networkd[1386]: lxcfbe41f879e09: Gained IPv6LL Jan 13 21:40:14.275688 kubelet[1749]: E0113 21:40:14.275653 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:15.276150 kubelet[1749]: E0113 21:40:15.276110 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:16.277215 kubelet[1749]: E0113 21:40:16.277176 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:16.358736 kubelet[1749]: I0113 21:40:16.358656 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.051114707 podStartE2EDuration="18.358640066s" podCreationTimestamp="2025-01-13 21:39:58 +0000 UTC" firstStartedPulling="2025-01-13 21:40:12.942322241 +0000 UTC 
m=+49.724574245" lastFinishedPulling="2025-01-13 21:40:13.2498476 +0000 UTC m=+50.032099604" observedRunningTime="2025-01-13 21:40:13.432506882 +0000 UTC m=+50.214758886" watchObservedRunningTime="2025-01-13 21:40:16.358640066 +0000 UTC m=+53.140892070" Jan 13 21:40:16.391898 containerd[1446]: time="2025-01-13T21:40:16.391604975Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 13 21:40:16.397211 containerd[1446]: time="2025-01-13T21:40:16.397052712Z" level=info msg="StopContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" with timeout 2 (s)" Jan 13 21:40:16.397342 containerd[1446]: time="2025-01-13T21:40:16.397303472Z" level=info msg="Stop container \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" with signal terminated" Jan 13 21:40:16.402430 systemd-networkd[1386]: lxc_health: Link DOWN Jan 13 21:40:16.402436 systemd-networkd[1386]: lxc_health: Lost carrier Jan 13 21:40:16.429544 systemd[1]: cri-containerd-25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140.scope: Deactivated successfully. Jan 13 21:40:16.429926 systemd[1]: cri-containerd-25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140.scope: Consumed 6.364s CPU time. Jan 13 21:40:16.443876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140-rootfs.mount: Deactivated successfully. 
Jan 13 21:40:16.453587 containerd[1446]: time="2025-01-13T21:40:16.453532763Z" level=info msg="shim disconnected" id=25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140 namespace=k8s.io Jan 13 21:40:16.453912 containerd[1446]: time="2025-01-13T21:40:16.453754558Z" level=warning msg="cleaning up after shim disconnected" id=25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140 namespace=k8s.io Jan 13 21:40:16.453912 containerd[1446]: time="2025-01-13T21:40:16.453775201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:40:16.466784 containerd[1446]: time="2025-01-13T21:40:16.466729600Z" level=info msg="StopContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" returns successfully" Jan 13 21:40:16.467365 containerd[1446]: time="2025-01-13T21:40:16.467317972Z" level=info msg="StopPodSandbox for \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\"" Jan 13 21:40:16.467365 containerd[1446]: time="2025-01-13T21:40:16.467360019Z" level=info msg="Container to stop \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:40:16.467468 containerd[1446]: time="2025-01-13T21:40:16.467372421Z" level=info msg="Container to stop \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:40:16.467468 containerd[1446]: time="2025-01-13T21:40:16.467382223Z" level=info msg="Container to stop \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:40:16.467468 containerd[1446]: time="2025-01-13T21:40:16.467391064Z" level=info msg="Container to stop \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:40:16.467468 
containerd[1446]: time="2025-01-13T21:40:16.467399745Z" level=info msg="Container to stop \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 13 21:40:16.469036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a-shm.mount: Deactivated successfully. Jan 13 21:40:16.472449 systemd[1]: cri-containerd-45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a.scope: Deactivated successfully. Jan 13 21:40:16.493846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a-rootfs.mount: Deactivated successfully. Jan 13 21:40:16.498601 containerd[1446]: time="2025-01-13T21:40:16.498547488Z" level=info msg="shim disconnected" id=45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a namespace=k8s.io Jan 13 21:40:16.498601 containerd[1446]: time="2025-01-13T21:40:16.498599936Z" level=warning msg="cleaning up after shim disconnected" id=45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a namespace=k8s.io Jan 13 21:40:16.498764 containerd[1446]: time="2025-01-13T21:40:16.498607978Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:40:16.508799 containerd[1446]: time="2025-01-13T21:40:16.508746053Z" level=info msg="TearDown network for sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" successfully" Jan 13 21:40:16.508799 containerd[1446]: time="2025-01-13T21:40:16.508779459Z" level=info msg="StopPodSandbox for \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" returns successfully" Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.638970 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-hubble-tls\") pod 
\"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.639043 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-etc-cni-netd\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.639061 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-lib-modules\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.639076 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-xtables-lock\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.639091 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-kernel\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.639115 kubelet[1749]: I0113 21:40:16.639110 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxxx\" (UniqueName: \"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-kube-api-access-vnxxx\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640322 kubelet[1749]: I0113 21:40:16.639107 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.640322 kubelet[1749]: I0113 21:40:16.639146 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.640322 kubelet[1749]: I0113 21:40:16.639124 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-cgroup\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640322 kubelet[1749]: I0113 21:40:16.639163 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.640322 kubelet[1749]: I0113 21:40:16.639175 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cni-path\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639182 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639193 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1704612-6015-4be5-987f-81bb3776c171-clustermesh-secrets\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639209 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-hostproc\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639224 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-run\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639237 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started 
for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-bpf-maps\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640465 kubelet[1749]: I0113 21:40:16.639255 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1704612-6015-4be5-987f-81bb3776c171-cilium-config-path\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639270 1749 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-net\") pod \"b1704612-6015-4be5-987f-81bb3776c171\" (UID: \"b1704612-6015-4be5-987f-81bb3776c171\") " Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639293 1749 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-cgroup\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639302 1749 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-etc-cni-netd\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639310 1749 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-lib-modules\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639317 1749 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-xtables-lock\") on node \"10.0.0.155\" DevicePath \"\"" Jan 
13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639334 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.640906 kubelet[1749]: I0113 21:40:16.639352 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cni-path" (OuterVolumeSpecName: "cni-path") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.641075 kubelet[1749]: I0113 21:40:16.639922 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.641075 kubelet[1749]: I0113 21:40:16.640011 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-hostproc" (OuterVolumeSpecName: "hostproc") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.641075 kubelet[1749]: I0113 21:40:16.640048 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.641075 kubelet[1749]: I0113 21:40:16.639881 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 13 21:40:16.641826 kubelet[1749]: I0113 21:40:16.641755 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1704612-6015-4be5-987f-81bb3776c171-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 21:40:16.645365 kubelet[1749]: I0113 21:40:16.645316 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1704612-6015-4be5-987f-81bb3776c171-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 13 21:40:16.645567 kubelet[1749]: I0113 21:40:16.645533 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:40:16.646517 systemd[1]: var-lib-kubelet-pods-b1704612\x2d6015\x2d4be5\x2d987f\x2d81bb3776c171-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 13 21:40:16.646619 systemd[1]: var-lib-kubelet-pods-b1704612\x2d6015\x2d4be5\x2d987f\x2d81bb3776c171-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 13 21:40:16.648579 kubelet[1749]: I0113 21:40:16.648522 1749 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-kube-api-access-vnxxx" (OuterVolumeSpecName: "kube-api-access-vnxxx") pod "b1704612-6015-4be5-987f-81bb3776c171" (UID: "b1704612-6015-4be5-987f-81bb3776c171"). InnerVolumeSpecName "kube-api-access-vnxxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 13 21:40:16.740004 kubelet[1749]: I0113 21:40:16.739967 1749 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cni-path\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740004 kubelet[1749]: I0113 21:40:16.739996 1749 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vnxxx\" (UniqueName: \"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-kube-api-access-vnxxx\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740004 kubelet[1749]: I0113 21:40:16.740005 1749 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-hostproc\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740013 1749 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-cilium-run\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740036 1749 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1704612-6015-4be5-987f-81bb3776c171-clustermesh-secrets\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740044 1749 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1704612-6015-4be5-987f-81bb3776c171-cilium-config-path\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740052 1749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-net\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 
kubelet[1749]: I0113 21:40:16.740059 1749 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-bpf-maps\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740066 1749 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1704612-6015-4be5-987f-81bb3776c171-host-proc-sys-kernel\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:16.740200 kubelet[1749]: I0113 21:40:16.740073 1749 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1704612-6015-4be5-987f-81bb3776c171-hubble-tls\") on node \"10.0.0.155\" DevicePath \"\"" Jan 13 21:40:17.277369 kubelet[1749]: E0113 21:40:17.277316 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:17.380167 systemd[1]: var-lib-kubelet-pods-b1704612\x2d6015\x2d4be5\x2d987f\x2d81bb3776c171-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvnxxx.mount: Deactivated successfully. 
Jan 13 21:40:17.432533 kubelet[1749]: I0113 21:40:17.432449 1749 scope.go:117] "RemoveContainer" containerID="25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140" Jan 13 21:40:17.434362 containerd[1446]: time="2025-01-13T21:40:17.434315116Z" level=info msg="RemoveContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\"" Jan 13 21:40:17.437067 containerd[1446]: time="2025-01-13T21:40:17.437038491Z" level=info msg="RemoveContainer for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" returns successfully" Jan 13 21:40:17.437239 kubelet[1749]: I0113 21:40:17.437218 1749 scope.go:117] "RemoveContainer" containerID="9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033" Jan 13 21:40:17.437530 systemd[1]: Removed slice kubepods-burstable-podb1704612_6015_4be5_987f_81bb3776c171.slice - libcontainer container kubepods-burstable-podb1704612_6015_4be5_987f_81bb3776c171.slice. Jan 13 21:40:17.437621 systemd[1]: kubepods-burstable-podb1704612_6015_4be5_987f_81bb3776c171.slice: Consumed 6.474s CPU time. 
Jan 13 21:40:17.439799 containerd[1446]: time="2025-01-13T21:40:17.439216862Z" level=info msg="RemoveContainer for \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\"" Jan 13 21:40:17.443054 containerd[1446]: time="2025-01-13T21:40:17.442999117Z" level=info msg="RemoveContainer for \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\" returns successfully" Jan 13 21:40:17.443381 kubelet[1749]: I0113 21:40:17.443267 1749 scope.go:117] "RemoveContainer" containerID="64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50" Jan 13 21:40:17.444196 containerd[1446]: time="2025-01-13T21:40:17.444170416Z" level=info msg="RemoveContainer for \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\"" Jan 13 21:40:17.446868 containerd[1446]: time="2025-01-13T21:40:17.446832901Z" level=info msg="RemoveContainer for \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\" returns successfully" Jan 13 21:40:17.447229 kubelet[1749]: I0113 21:40:17.447063 1749 scope.go:117] "RemoveContainer" containerID="ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0" Jan 13 21:40:17.448378 containerd[1446]: time="2025-01-13T21:40:17.448311046Z" level=info msg="RemoveContainer for \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\"" Jan 13 21:40:17.450942 containerd[1446]: time="2025-01-13T21:40:17.450906640Z" level=info msg="RemoveContainer for \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\" returns successfully" Jan 13 21:40:17.451526 kubelet[1749]: I0113 21:40:17.451086 1749 scope.go:117] "RemoveContainer" containerID="d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797" Jan 13 21:40:17.451986 containerd[1446]: time="2025-01-13T21:40:17.451877748Z" level=info msg="RemoveContainer for \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\"" Jan 13 21:40:17.453937 containerd[1446]: time="2025-01-13T21:40:17.453901056Z" level=info msg="RemoveContainer 
for \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\" returns successfully" Jan 13 21:40:17.454152 kubelet[1749]: I0113 21:40:17.454052 1749 scope.go:117] "RemoveContainer" containerID="25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140" Jan 13 21:40:17.454317 containerd[1446]: time="2025-01-13T21:40:17.454217304Z" level=error msg="ContainerStatus for \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\": not found" Jan 13 21:40:17.454642 kubelet[1749]: E0113 21:40:17.454432 1749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\": not found" containerID="25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140" Jan 13 21:40:17.454642 kubelet[1749]: I0113 21:40:17.454501 1749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140"} err="failed to get container status \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\": rpc error: code = NotFound desc = an error occurred when try to find container \"25bfaf9269ad6e156d54a56a0c894442c61c529b7c65a0a84b5cf1d8d9890140\": not found" Jan 13 21:40:17.454642 kubelet[1749]: I0113 21:40:17.454582 1749 scope.go:117] "RemoveContainer" containerID="9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033" Jan 13 21:40:17.454759 containerd[1446]: time="2025-01-13T21:40:17.454726342Z" level=error msg="ContainerStatus for \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\": not found" Jan 13 21:40:17.454960 kubelet[1749]: E0113 21:40:17.454861 1749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\": not found" containerID="9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033" Jan 13 21:40:17.454960 kubelet[1749]: I0113 21:40:17.454882 1749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033"} err="failed to get container status \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b93f089c4c70372c04c329a412aebdbd99e247b60f6a73a0e7a726aa2877033\": not found" Jan 13 21:40:17.454960 kubelet[1749]: I0113 21:40:17.454896 1749 scope.go:117] "RemoveContainer" containerID="64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50" Jan 13 21:40:17.455164 containerd[1446]: time="2025-01-13T21:40:17.455041709Z" level=error msg="ContainerStatus for \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\": not found" Jan 13 21:40:17.455202 kubelet[1749]: E0113 21:40:17.455152 1749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\": not found" containerID="64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50" Jan 13 21:40:17.455202 kubelet[1749]: I0113 21:40:17.455175 1749 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50"} err="failed to get container status \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\": rpc error: code = NotFound desc = an error occurred when try to find container \"64512d08b423f961f4cfcb86dfe371fb4579a1a25f64266b2d2ee513ef2bea50\": not found" Jan 13 21:40:17.455202 kubelet[1749]: I0113 21:40:17.455193 1749 scope.go:117] "RemoveContainer" containerID="ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0" Jan 13 21:40:17.455361 containerd[1446]: time="2025-01-13T21:40:17.455329473Z" level=error msg="ContainerStatus for \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\": not found" Jan 13 21:40:17.455577 kubelet[1749]: E0113 21:40:17.455435 1749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\": not found" containerID="ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0" Jan 13 21:40:17.455577 kubelet[1749]: I0113 21:40:17.455457 1749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0"} err="failed to get container status \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ecb831363749445c1919e53f547204a5f5d9a83753c382ae5db927afa55ebbd0\": not found" Jan 13 21:40:17.455577 kubelet[1749]: I0113 21:40:17.455470 1749 scope.go:117] "RemoveContainer" containerID="d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797" Jan 13 21:40:17.455659 
containerd[1446]: time="2025-01-13T21:40:17.455597154Z" level=error msg="ContainerStatus for \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\": not found" Jan 13 21:40:17.455740 kubelet[1749]: E0113 21:40:17.455682 1749 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\": not found" containerID="d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797" Jan 13 21:40:17.455740 kubelet[1749]: I0113 21:40:17.455705 1749 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797"} err="failed to get container status \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\": rpc error: code = NotFound desc = an error occurred when try to find container \"d4176c68e26447a08156efdcd9f127c5900a9bff4d71497317e9bfb70efed797\": not found" Jan 13 21:40:18.277630 kubelet[1749]: E0113 21:40:18.277593 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 21:40:18.306309 kubelet[1749]: I0113 21:40:18.306265 1749 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1704612-6015-4be5-987f-81bb3776c171" path="/var/lib/kubelet/pods/b1704612-6015-4be5-987f-81bb3776c171/volumes" Jan 13 21:40:19.081731 kubelet[1749]: I0113 21:40:19.081690 1749 topology_manager.go:215] "Topology Admit Handler" podUID="29348a06-0d34-4f78-9b12-12b2a66ce538" podNamespace="kube-system" podName="cilium-operator-599987898-s57jq" Jan 13 21:40:19.081843 kubelet[1749]: E0113 21:40:19.081744 1749 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="mount-bpf-fs" Jan 13 21:40:19.081843 kubelet[1749]: E0113 21:40:19.081755 1749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="clean-cilium-state" Jan 13 21:40:19.081843 kubelet[1749]: E0113 21:40:19.081761 1749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="cilium-agent" Jan 13 21:40:19.081843 kubelet[1749]: E0113 21:40:19.081767 1749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="apply-sysctl-overwrites" Jan 13 21:40:19.081843 kubelet[1749]: E0113 21:40:19.081783 1749 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="mount-cgroup" Jan 13 21:40:19.081843 kubelet[1749]: I0113 21:40:19.081801 1749 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1704612-6015-4be5-987f-81bb3776c171" containerName="cilium-agent" Jan 13 21:40:19.088160 systemd[1]: Created slice kubepods-besteffort-pod29348a06_0d34_4f78_9b12_12b2a66ce538.slice - libcontainer container kubepods-besteffort-pod29348a06_0d34_4f78_9b12_12b2a66ce538.slice. Jan 13 21:40:19.105151 kubelet[1749]: I0113 21:40:19.104424 1749 topology_manager.go:215] "Topology Admit Handler" podUID="070781cd-fe67-497b-ba1a-ae5e9129f30b" podNamespace="kube-system" podName="cilium-7m485" Jan 13 21:40:19.109332 systemd[1]: Created slice kubepods-burstable-pod070781cd_fe67_497b_ba1a_ae5e9129f30b.slice - libcontainer container kubepods-burstable-pod070781cd_fe67_497b_ba1a_ae5e9129f30b.slice. 
Jan 13 21:40:19.254548 kubelet[1749]: I0113 21:40:19.254513 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-cilium-run\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254548 kubelet[1749]: I0113 21:40:19.254549 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/070781cd-fe67-497b-ba1a-ae5e9129f30b-clustermesh-secrets\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254711 kubelet[1749]: I0113 21:40:19.254570 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-host-proc-sys-kernel\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254711 kubelet[1749]: I0113 21:40:19.254586 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-bpf-maps\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254711 kubelet[1749]: I0113 21:40:19.254600 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-hostproc\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254711 kubelet[1749]: I0113 21:40:19.254617 1749 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/070781cd-fe67-497b-ba1a-ae5e9129f30b-cilium-config-path\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254711 kubelet[1749]: I0113 21:40:19.254633 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/070781cd-fe67-497b-ba1a-ae5e9129f30b-cilium-ipsec-secrets\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254834 kubelet[1749]: I0113 21:40:19.254693 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-host-proc-sys-net\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254834 kubelet[1749]: I0113 21:40:19.254779 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29348a06-0d34-4f78-9b12-12b2a66ce538-cilium-config-path\") pod \"cilium-operator-599987898-s57jq\" (UID: \"29348a06-0d34-4f78-9b12-12b2a66ce538\") " pod="kube-system/cilium-operator-599987898-s57jq" Jan 13 21:40:19.254834 kubelet[1749]: I0113 21:40:19.254812 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-lib-modules\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254906 kubelet[1749]: I0113 21:40:19.254833 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-mx6z7\" (UniqueName: \"kubernetes.io/projected/29348a06-0d34-4f78-9b12-12b2a66ce538-kube-api-access-mx6z7\") pod \"cilium-operator-599987898-s57jq\" (UID: \"29348a06-0d34-4f78-9b12-12b2a66ce538\") " pod="kube-system/cilium-operator-599987898-s57jq" Jan 13 21:40:19.254906 kubelet[1749]: I0113 21:40:19.254870 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-cilium-cgroup\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254906 kubelet[1749]: I0113 21:40:19.254891 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-cni-path\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.254966 kubelet[1749]: I0113 21:40:19.254947 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-etc-cni-netd\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.255005 kubelet[1749]: I0113 21:40:19.254986 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/070781cd-fe67-497b-ba1a-ae5e9129f30b-xtables-lock\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485" Jan 13 21:40:19.255127 kubelet[1749]: I0113 21:40:19.255009 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/070781cd-fe67-497b-ba1a-ae5e9129f30b-hubble-tls\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485"
Jan 13 21:40:19.255127 kubelet[1749]: I0113 21:40:19.255039 1749 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdpt8\" (UniqueName: \"kubernetes.io/projected/070781cd-fe67-497b-ba1a-ae5e9129f30b-kube-api-access-gdpt8\") pod \"cilium-7m485\" (UID: \"070781cd-fe67-497b-ba1a-ae5e9129f30b\") " pod="kube-system/cilium-7m485"
Jan 13 21:40:19.277747 kubelet[1749]: E0113 21:40:19.277714 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:19.316583 kubelet[1749]: E0113 21:40:19.316552 1749 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 21:40:19.391268 kubelet[1749]: E0113 21:40:19.390633 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:19.392514 containerd[1446]: time="2025-01-13T21:40:19.391899905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s57jq,Uid:29348a06-0d34-4f78-9b12-12b2a66ce538,Namespace:kube-system,Attempt:0,}"
Jan 13 21:40:19.409080 containerd[1446]: time="2025-01-13T21:40:19.408988981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:40:19.409080 containerd[1446]: time="2025-01-13T21:40:19.409073993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:40:19.409188 containerd[1446]: time="2025-01-13T21:40:19.409088715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:40:19.409239 containerd[1446]: time="2025-01-13T21:40:19.409166486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:40:19.421345 kubelet[1749]: E0113 21:40:19.421128 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:19.421491 containerd[1446]: time="2025-01-13T21:40:19.421449197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7m485,Uid:070781cd-fe67-497b-ba1a-ae5e9129f30b,Namespace:kube-system,Attempt:0,}"
Jan 13 21:40:19.424175 systemd[1]: Started cri-containerd-cf1b9c40674196cea4c0602afa6c206f1d920fb5b7dd2d30b1b892f745d8ab24.scope - libcontainer container cf1b9c40674196cea4c0602afa6c206f1d920fb5b7dd2d30b1b892f745d8ab24.
Jan 13 21:40:19.441918 containerd[1446]: time="2025-01-13T21:40:19.441810700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:40:19.441918 containerd[1446]: time="2025-01-13T21:40:19.441867228Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:40:19.441918 containerd[1446]: time="2025-01-13T21:40:19.441878950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:40:19.442088 containerd[1446]: time="2025-01-13T21:40:19.441943719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:40:19.457198 systemd[1]: Started cri-containerd-1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623.scope - libcontainer container 1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623.
Jan 13 21:40:19.457854 containerd[1446]: time="2025-01-13T21:40:19.457636676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-s57jq,Uid:29348a06-0d34-4f78-9b12-12b2a66ce538,Namespace:kube-system,Attempt:0,} returns sandbox id \"cf1b9c40674196cea4c0602afa6c206f1d920fb5b7dd2d30b1b892f745d8ab24\""
Jan 13 21:40:19.459115 kubelet[1749]: E0113 21:40:19.458722 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:19.459471 containerd[1446]: time="2025-01-13T21:40:19.459439933Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jan 13 21:40:19.478107 containerd[1446]: time="2025-01-13T21:40:19.478068789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7m485,Uid:070781cd-fe67-497b-ba1a-ae5e9129f30b,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\""
Jan 13 21:40:19.479170 kubelet[1749]: E0113 21:40:19.478797 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:19.480968 containerd[1446]: time="2025-01-13T21:40:19.480935878Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jan 13 21:40:19.504864 containerd[1446]: time="2025-01-13T21:40:19.504799480Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237\""
Jan 13 21:40:19.505453 containerd[1446]: time="2025-01-13T21:40:19.505375322Z" level=info msg="StartContainer for \"7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237\""
Jan 13 21:40:19.531182 systemd[1]: Started cri-containerd-7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237.scope - libcontainer container 7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237.
Jan 13 21:40:19.549527 containerd[1446]: time="2025-01-13T21:40:19.549488131Z" level=info msg="StartContainer for \"7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237\" returns successfully"
Jan 13 21:40:19.600891 systemd[1]: cri-containerd-7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237.scope: Deactivated successfully.
Jan 13 21:40:19.627970 containerd[1446]: time="2025-01-13T21:40:19.627907790Z" level=info msg="shim disconnected" id=7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237 namespace=k8s.io
Jan 13 21:40:19.627970 containerd[1446]: time="2025-01-13T21:40:19.627963838Z" level=warning msg="cleaning up after shim disconnected" id=7ffe68b86e63de4f9da86577ca7df4b47fde56a69675a7aca838b9dfbe505237 namespace=k8s.io
Jan 13 21:40:19.627970 containerd[1446]: time="2025-01-13T21:40:19.627972680Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:40:20.278155 kubelet[1749]: E0113 21:40:20.278091 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:20.439741 kubelet[1749]: E0113 21:40:20.439714 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:20.441540 containerd[1446]: time="2025-01-13T21:40:20.441429815Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 21:40:20.451913 containerd[1446]: time="2025-01-13T21:40:20.451867098Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d\""
Jan 13 21:40:20.452382 containerd[1446]: time="2025-01-13T21:40:20.452350644Z" level=info msg="StartContainer for \"3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d\""
Jan 13 21:40:20.474290 systemd[1]: Started cri-containerd-3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d.scope - libcontainer container 3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d.
Jan 13 21:40:20.493497 containerd[1446]: time="2025-01-13T21:40:20.493395718Z" level=info msg="StartContainer for \"3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d\" returns successfully"
Jan 13 21:40:20.509257 systemd[1]: cri-containerd-3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d.scope: Deactivated successfully.
Jan 13 21:40:20.527116 containerd[1446]: time="2025-01-13T21:40:20.527038728Z" level=info msg="shim disconnected" id=3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d namespace=k8s.io
Jan 13 21:40:20.527116 containerd[1446]: time="2025-01-13T21:40:20.527114578Z" level=warning msg="cleaning up after shim disconnected" id=3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d namespace=k8s.io
Jan 13 21:40:20.527116 containerd[1446]: time="2025-01-13T21:40:20.527124140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:40:21.279196 kubelet[1749]: E0113 21:40:21.279150 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:21.318326 containerd[1446]: time="2025-01-13T21:40:21.318278001Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:40:21.318906 containerd[1446]: time="2025-01-13T21:40:21.318710059Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17137750"
Jan 13 21:40:21.319677 containerd[1446]: time="2025-01-13T21:40:21.319640303Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:40:21.321248 containerd[1446]: time="2025-01-13T21:40:21.321216915Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.861740736s"
Jan 13 21:40:21.321351 containerd[1446]: time="2025-01-13T21:40:21.321334291Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 21:40:21.323510 containerd[1446]: time="2025-01-13T21:40:21.323481739Z" level=info msg="CreateContainer within sandbox \"cf1b9c40674196cea4c0602afa6c206f1d920fb5b7dd2d30b1b892f745d8ab24\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 21:40:21.340245 containerd[1446]: time="2025-01-13T21:40:21.340207582Z" level=info msg="CreateContainer within sandbox \"cf1b9c40674196cea4c0602afa6c206f1d920fb5b7dd2d30b1b892f745d8ab24\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"674b53ac445806e149f87faa5a080a14298db5ceeda7dbca0539107a4e5e88c0\""
Jan 13 21:40:21.340763 containerd[1446]: time="2025-01-13T21:40:21.340730333Z" level=info msg="StartContainer for \"674b53ac445806e149f87faa5a080a14298db5ceeda7dbca0539107a4e5e88c0\""
Jan 13 21:40:21.365177 systemd[1]: Started cri-containerd-674b53ac445806e149f87faa5a080a14298db5ceeda7dbca0539107a4e5e88c0.scope - libcontainer container 674b53ac445806e149f87faa5a080a14298db5ceeda7dbca0539107a4e5e88c0.
Jan 13 21:40:21.367425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c7f56549b847f20fa035c36db263689550bef0d5d98f3d4bfabb65535d9239d-rootfs.mount: Deactivated successfully.
Jan 13 21:40:21.390595 containerd[1446]: time="2025-01-13T21:40:21.390489808Z" level=info msg="StartContainer for \"674b53ac445806e149f87faa5a080a14298db5ceeda7dbca0539107a4e5e88c0\" returns successfully"
Jan 13 21:40:21.445683 kubelet[1749]: E0113 21:40:21.445643 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:21.447501 kubelet[1749]: E0113 21:40:21.447435 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:21.448685 containerd[1446]: time="2025-01-13T21:40:21.448247556Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 21:40:21.461157 containerd[1446]: time="2025-01-13T21:40:21.461112362Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62\""
Jan 13 21:40:21.461710 containerd[1446]: time="2025-01-13T21:40:21.461669716Z" level=info msg="StartContainer for \"4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62\""
Jan 13 21:40:21.470112 kubelet[1749]: I0113 21:40:21.470009 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-s57jq" podStartSLOduration=0.606972283 podStartE2EDuration="2.469993873s" podCreationTimestamp="2025-01-13 21:40:19 +0000 UTC" firstStartedPulling="2025-01-13 21:40:19.459224543 +0000 UTC m=+56.241476547" lastFinishedPulling="2025-01-13 21:40:21.322246133 +0000 UTC m=+58.104498137" observedRunningTime="2025-01-13 21:40:21.469736759 +0000 UTC m=+58.251988763" watchObservedRunningTime="2025-01-13 21:40:21.469993873 +0000 UTC m=+58.252245877"
Jan 13 21:40:21.495229 systemd[1]: Started cri-containerd-4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62.scope - libcontainer container 4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62.
Jan 13 21:40:21.529554 systemd[1]: cri-containerd-4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62.scope: Deactivated successfully.
Jan 13 21:40:21.530898 containerd[1446]: time="2025-01-13T21:40:21.530786108Z" level=info msg="StartContainer for \"4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62\" returns successfully"
Jan 13 21:40:21.620483 containerd[1446]: time="2025-01-13T21:40:21.620426573Z" level=info msg="shim disconnected" id=4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62 namespace=k8s.io
Jan 13 21:40:21.620483 containerd[1446]: time="2025-01-13T21:40:21.620477420Z" level=warning msg="cleaning up after shim disconnected" id=4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62 namespace=k8s.io
Jan 13 21:40:21.620483 containerd[1446]: time="2025-01-13T21:40:21.620487141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:40:22.279600 kubelet[1749]: E0113 21:40:22.279564 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:22.368696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c9e466adc0441e94d891ea0f8aae8dc1d8ebeb0a157f0f7dcd1837c6b030d62-rootfs.mount: Deactivated successfully.
Jan 13 21:40:22.451101 kubelet[1749]: E0113 21:40:22.450714 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:22.451101 kubelet[1749]: E0113 21:40:22.450903 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:22.452850 containerd[1446]: time="2025-01-13T21:40:22.452727181Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 21:40:22.464819 containerd[1446]: time="2025-01-13T21:40:22.464773271Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792\""
Jan 13 21:40:22.465273 containerd[1446]: time="2025-01-13T21:40:22.465243412Z" level=info msg="StartContainer for \"76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792\""
Jan 13 21:40:22.485565 systemd[1]: run-containerd-runc-k8s.io-76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792-runc.aYvssE.mount: Deactivated successfully.
Jan 13 21:40:22.500262 systemd[1]: Started cri-containerd-76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792.scope - libcontainer container 76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792.
Jan 13 21:40:22.517189 systemd[1]: cri-containerd-76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792.scope: Deactivated successfully.
Jan 13 21:40:22.519983 containerd[1446]: time="2025-01-13T21:40:22.519937141Z" level=info msg="StartContainer for \"76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792\" returns successfully"
Jan 13 21:40:22.531083 containerd[1446]: time="2025-01-13T21:40:22.529949566Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070781cd_fe67_497b_ba1a_ae5e9129f30b.slice/cri-containerd-76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792.scope/memory.events\": no such file or directory"
Jan 13 21:40:22.540054 containerd[1446]: time="2025-01-13T21:40:22.539965351Z" level=info msg="shim disconnected" id=76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792 namespace=k8s.io
Jan 13 21:40:22.540054 containerd[1446]: time="2025-01-13T21:40:22.540032520Z" level=warning msg="cleaning up after shim disconnected" id=76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792 namespace=k8s.io
Jan 13 21:40:22.540054 containerd[1446]: time="2025-01-13T21:40:22.540054083Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 21:40:23.279887 kubelet[1749]: E0113 21:40:23.279831 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:23.367017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76bcd56aacefb968d382678dd6e255588c62dd1f4197457a56495480c8afb792-rootfs.mount: Deactivated successfully.
Jan 13 21:40:23.457042 kubelet[1749]: E0113 21:40:23.455004 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:23.459892 containerd[1446]: time="2025-01-13T21:40:23.459856169Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 21:40:23.472824 containerd[1446]: time="2025-01-13T21:40:23.472779607Z" level=info msg="CreateContainer within sandbox \"1fbdbd25194cbf90745dcd42b1b8cf0b7afb48caa46f8994a0d04119d5685623\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291\""
Jan 13 21:40:23.473305 containerd[1446]: time="2025-01-13T21:40:23.473276630Z" level=info msg="StartContainer for \"652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291\""
Jan 13 21:40:23.504176 systemd[1]: Started cri-containerd-652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291.scope - libcontainer container 652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291.
Jan 13 21:40:23.524760 containerd[1446]: time="2025-01-13T21:40:23.524711670Z" level=info msg="StartContainer for \"652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291\" returns successfully"
Jan 13 21:40:23.779046 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 21:40:24.245294 kubelet[1749]: E0113 21:40:24.245156 1749 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:24.257257 containerd[1446]: time="2025-01-13T21:40:24.257223047Z" level=info msg="StopPodSandbox for \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\""
Jan 13 21:40:24.257364 containerd[1446]: time="2025-01-13T21:40:24.257305097Z" level=info msg="TearDown network for sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" successfully"
Jan 13 21:40:24.257364 containerd[1446]: time="2025-01-13T21:40:24.257315739Z" level=info msg="StopPodSandbox for \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" returns successfully"
Jan 13 21:40:24.257704 containerd[1446]: time="2025-01-13T21:40:24.257676023Z" level=info msg="RemovePodSandbox for \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\""
Jan 13 21:40:24.267678 containerd[1446]: time="2025-01-13T21:40:24.267636812Z" level=info msg="Forcibly stopping sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\""
Jan 13 21:40:24.267737 containerd[1446]: time="2025-01-13T21:40:24.267717702Z" level=info msg="TearDown network for sandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" successfully"
Jan 13 21:40:24.280752 kubelet[1749]: E0113 21:40:24.280723 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:24.282683 containerd[1446]: time="2025-01-13T21:40:24.282638543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 21:40:24.282736 containerd[1446]: time="2025-01-13T21:40:24.282702111Z" level=info msg="RemovePodSandbox \"45eee1d59a9417da3fecee1a90142d046b04a75c784256549de1f7bbe6b0066a\" returns successfully"
Jan 13 21:40:24.459586 kubelet[1749]: E0113 21:40:24.459349 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:24.473542 kubelet[1749]: I0113 21:40:24.473453 1749 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7m485" podStartSLOduration=5.473429327 podStartE2EDuration="5.473429327s" podCreationTimestamp="2025-01-13 21:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:40:24.473149332 +0000 UTC m=+61.255401336" watchObservedRunningTime="2025-01-13 21:40:24.473429327 +0000 UTC m=+61.255681331"
Jan 13 21:40:25.281787 kubelet[1749]: E0113 21:40:25.281726 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:25.461205 kubelet[1749]: E0113 21:40:25.461164 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:26.282583 kubelet[1749]: E0113 21:40:26.282542 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:26.557694 systemd-networkd[1386]: lxc_health: Link UP
Jan 13 21:40:26.560841 systemd-networkd[1386]: lxc_health: Gained carrier
Jan 13 21:40:27.282882 kubelet[1749]: E0113 21:40:27.282816 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:27.424085 kubelet[1749]: E0113 21:40:27.423541 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:27.464088 kubelet[1749]: E0113 21:40:27.464053 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:27.709251 systemd[1]: run-containerd-runc-k8s.io-652b8dbafd0467fecad5bd2f2b1024814d042575a527f5e53fb71e28e277b291-runc.RESRrm.mount: Deactivated successfully.
Jan 13 21:40:28.283252 kubelet[1749]: E0113 21:40:28.283193 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:28.363191 systemd-networkd[1386]: lxc_health: Gained IPv6LL
Jan 13 21:40:28.466566 kubelet[1749]: E0113 21:40:28.466540 1749 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:40:29.284238 kubelet[1749]: E0113 21:40:29.284193 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:30.284398 kubelet[1749]: E0113 21:40:30.284349 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:31.284976 kubelet[1749]: E0113 21:40:31.284943 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:32.286041 kubelet[1749]: E0113 21:40:32.285976 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 21:40:33.286834 kubelet[1749]: E0113 21:40:33.286781 1749 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"