Jan 13 20:22:55.934725 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:22:55.934747 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:22:55.934757 kernel: KASLR enabled
Jan 13 20:22:55.934762 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:22:55.934768 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
Jan 13 20:22:55.934774 kernel: random: crng init done
Jan 13 20:22:55.934781 kernel: secureboot: Secure boot disabled
Jan 13 20:22:55.934787 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:22:55.934793 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:22:55.934802 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 13 20:22:55.934808 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934814 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934820 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934826 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934834 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934842 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934848 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934855 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934861 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 20:22:55.934867 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:22:55.934874 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:22:55.934880 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:22:55.934887 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 20:22:55.934893 kernel: Zone ranges:
Jan 13 20:22:55.934899 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:22:55.934907 kernel: DMA32 empty
Jan 13 20:22:55.934913 kernel: Normal empty
Jan 13 20:22:55.934919 kernel: Movable zone start for each node
Jan 13 20:22:55.934926 kernel: Early memory node ranges
Jan 13 20:22:55.934932 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:22:55.934939 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:22:55.934945 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:22:55.934952 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:22:55.934958 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:22:55.934964 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:22:55.934971 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:22:55.934977 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:22:55.934985 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:22:55.934991 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:22:55.934998 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:22:55.935007 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:22:55.935023 kernel: psci: Trusted OS migration not required
Jan 13 20:22:55.935030 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:22:55.935040 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:22:55.935047 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:22:55.935054 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:22:55.935061 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 13 20:22:55.935067 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:22:55.935074 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:22:55.935081 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:22:55.935087 kernel: CPU features: detected: Spectre-v4
Jan 13 20:22:55.935094 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:22:55.935101 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:22:55.935109 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:22:55.935116 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:22:55.935123 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:22:55.935129 kernel: alternatives: applying boot alternatives
Jan 13 20:22:55.935137 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:22:55.935144 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:22:55.935151 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:22:55.935158 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:22:55.935164 kernel: Fallback order for Node 0: 0
Jan 13 20:22:55.935171 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 13 20:22:55.935178 kernel: Policy zone: DMA
Jan 13 20:22:55.935186 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:22:55.935192 kernel: software IO TLB: area num 4.
Jan 13 20:22:55.935199 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:22:55.935207 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 13 20:22:55.935214 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:22:55.935221 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:22:55.935238 kernel: rcu: RCU event tracing is enabled.
Jan 13 20:22:55.935245 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:22:55.935252 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 20:22:55.935259 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 20:22:55.935266 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:22:55.935273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:22:55.935282 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:22:55.935288 kernel: GICv3: 256 SPIs implemented
Jan 13 20:22:55.935295 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:22:55.935302 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:22:55.935308 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:22:55.935315 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:22:55.935322 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:22:55.935328 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:22:55.935335 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:22:55.935342 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:22:55.935349 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:22:55.935357 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:22:55.935364 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:22:55.935370 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:22:55.935377 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:22:55.935384 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:22:55.935391 kernel: arm-pv: using stolen time PV
Jan 13 20:22:55.935398 kernel: Console: colour dummy device 80x25
Jan 13 20:22:55.935404 kernel: ACPI: Core revision 20230628
Jan 13 20:22:55.935412 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:22:55.935419 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:22:55.935427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:22:55.935434 kernel: landlock: Up and running.
Jan 13 20:22:55.935441 kernel: SELinux: Initializing.
Jan 13 20:22:55.935448 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:22:55.935455 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:22:55.935462 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:22:55.935469 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:22:55.935476 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:22:55.935483 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 20:22:55.935491 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:22:55.935498 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:22:55.935505 kernel: Remapping and enabling EFI services.
Jan 13 20:22:55.935512 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:22:55.935519 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:22:55.935525 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:22:55.935532 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:22:55.935540 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:22:55.935547 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:22:55.935554 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:22:55.935562 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:22:55.935569 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:22:55.935581 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:22:55.935590 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:22:55.935597 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:22:55.935604 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:22:55.935611 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:22:55.935619 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:22:55.935626 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:22:55.935635 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:22:55.935642 kernel: SMP: Total of 4 processors activated.
Jan 13 20:22:55.935649 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:22:55.935656 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:22:55.935664 kernel: CPU features: detected: Common not Private translations
Jan 13 20:22:55.935671 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:22:55.935678 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:22:55.935685 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:22:55.935693 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:22:55.935700 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:22:55.935707 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:22:55.935715 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:22:55.935722 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:22:55.935729 kernel: alternatives: applying system-wide alternatives
Jan 13 20:22:55.935736 kernel: devtmpfs: initialized
Jan 13 20:22:55.935744 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:22:55.935751 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:22:55.935759 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:22:55.935766 kernel: SMBIOS 3.0.0 present.
Jan 13 20:22:55.935773 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:22:55.935781 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:22:55.935788 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:22:55.935795 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:22:55.935803 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:22:55.935810 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:22:55.935817 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 13 20:22:55.935826 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:22:55.935833 kernel: cpuidle: using governor menu
Jan 13 20:22:55.935841 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:22:55.935848 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:22:55.935855 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:22:55.935862 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:22:55.935869 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:22:55.935877 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:22:55.935884 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:22:55.935893 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:22:55.935900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:22:55.935908 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:22:55.935915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:22:55.935922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:22:55.935930 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:22:55.935937 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:22:55.935944 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:22:55.935951 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:22:55.935960 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:22:55.935967 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:22:55.935975 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:22:55.935982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:22:55.935989 kernel: ACPI: Interpreter enabled
Jan 13 20:22:55.935996 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:22:55.936003 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:22:55.936015 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:22:55.936022 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:22:55.936030 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:22:55.936174 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:22:55.936271 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:22:55.936341 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:22:55.936406 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:22:55.936471 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:22:55.936481 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 13 20:22:55.936491 kernel: PCI host bridge to bus 0000:00
Jan 13 20:22:55.936565 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:22:55.936624 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 13 20:22:55.936681 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:22:55.936738 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:22:55.936818 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:22:55.936893 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:22:55.936964 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 13 20:22:55.937040 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:22:55.937108 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:22:55.937176 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:22:55.937252 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:22:55.937320 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 13 20:22:55.937379 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:22:55.937439 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 13 20:22:55.937496 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:22:55.937506 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:22:55.937513 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:22:55.937521 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:22:55.937528 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:22:55.937535 kernel: iommu: Default domain type: Translated
Jan 13 20:22:55.937543 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:22:55.937552 kernel: efivars: Registered efivars operations
Jan 13 20:22:55.937559 kernel: vgaarb: loaded
Jan 13 20:22:55.937566 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:22:55.937573 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:22:55.937581 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:22:55.937588 kernel: pnp: PnP ACPI init
Jan 13 20:22:55.937659 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:22:55.937670 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:22:55.937679 kernel: NET: Registered PF_INET protocol family
Jan 13 20:22:55.937686 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:22:55.937694 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:22:55.937701 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:22:55.937709 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:22:55.937716 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:22:55.937723 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:22:55.937730 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:22:55.937738 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:22:55.937746 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:22:55.937754 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:22:55.937761 kernel: kvm [1]: HYP mode not available
Jan 13 20:22:55.937768 kernel: Initialise system trusted keyrings
Jan 13 20:22:55.937775 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:22:55.937783 kernel: Key type asymmetric registered
Jan 13 20:22:55.937790 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:22:55.937797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:22:55.937805 kernel: io scheduler mq-deadline registered
Jan 13 20:22:55.937814 kernel: io scheduler kyber registered
Jan 13 20:22:55.937821 kernel: io scheduler bfq registered
Jan 13 20:22:55.937828 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:22:55.937835 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:22:55.937843 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:22:55.937925 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:22:55.937935 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:22:55.937942 kernel: thunder_xcv, ver 1.0
Jan 13 20:22:55.937950 kernel: thunder_bgx, ver 1.0
Jan 13 20:22:55.937959 kernel: nicpf, ver 1.0
Jan 13 20:22:55.937966 kernel: nicvf, ver 1.0
Jan 13 20:22:55.938052 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:22:55.938118 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:22:55 UTC (1736799775)
Jan 13 20:22:55.938128 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:22:55.938136 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:22:55.938144 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:22:55.938151 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:22:55.938161 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:22:55.938174 kernel: Segment Routing with IPv6
Jan 13 20:22:55.938182 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:22:55.938189 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:22:55.938196 kernel: Key type dns_resolver registered
Jan 13 20:22:55.938203 kernel: registered taskstats version 1
Jan 13 20:22:55.938211 kernel: Loading compiled-in X.509 certificates
Jan 13 20:22:55.938219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:22:55.938234 kernel: Key type .fscrypt registered
Jan 13 20:22:55.938241 kernel: Key type fscrypt-provisioning registered
Jan 13 20:22:55.938251 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:22:55.938260 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:22:55.938270 kernel: ima: No architecture policies found
Jan 13 20:22:55.938279 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:22:55.938287 kernel: clk: Disabling unused clocks
Jan 13 20:22:55.938294 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:22:55.938302 kernel: Run /init as init process
Jan 13 20:22:55.938309 kernel: with arguments:
Jan 13 20:22:55.938318 kernel: /init
Jan 13 20:22:55.938325 kernel: with environment:
Jan 13 20:22:55.938335 kernel: HOME=/
Jan 13 20:22:55.938342 kernel: TERM=linux
Jan 13 20:22:55.938349 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:22:55.938358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:22:55.938367 systemd[1]: Detected virtualization kvm.
Jan 13 20:22:55.938375 systemd[1]: Detected architecture arm64.
Jan 13 20:22:55.938384 systemd[1]: Running in initrd.
Jan 13 20:22:55.938391 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:22:55.938399 systemd[1]: Hostname set to .
Jan 13 20:22:55.938407 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:22:55.938414 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:22:55.938422 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:22:55.938430 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:22:55.938438 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:22:55.938447 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:22:55.938455 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:22:55.938463 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:22:55.938472 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:22:55.938480 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:22:55.938488 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:22:55.938496 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:22:55.938505 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:22:55.938513 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:22:55.938521 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:22:55.938529 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:22:55.938537 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:22:55.938545 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:22:55.938553 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:22:55.938562 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:22:55.938571 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:22:55.938579 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:22:55.938587 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:22:55.938594 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:22:55.938602 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:22:55.938610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:22:55.938617 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:22:55.938625 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:22:55.938633 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:22:55.938642 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:22:55.938650 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:22:55.938657 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:22:55.938665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:22:55.938673 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:22:55.938700 systemd-journald[239]: Collecting audit messages is disabled.
Jan 13 20:22:55.938722 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:22:55.938730 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:22:55.938740 systemd-journald[239]: Journal started
Jan 13 20:22:55.938762 systemd-journald[239]: Runtime Journal (/run/log/journal/6af3fb9ecd144b64b88df7ee616bbc38) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:22:55.922005 systemd-modules-load[240]: Inserted module 'overlay'
Jan 13 20:22:55.944251 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:22:55.944284 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:22:55.946355 kernel: Bridge firewalling registered
Jan 13 20:22:55.946915 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 13 20:22:55.947967 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:22:55.949560 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:22:55.968432 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:22:55.970381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:22:55.972655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:22:55.977617 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:22:55.985566 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:22:55.987168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:22:55.990020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:22:55.993313 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:22:56.005450 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:22:56.007737 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:22:56.015364 dracut-cmdline[276]: dracut-dracut-053
Jan 13 20:22:56.017875 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:22:56.052784 systemd-resolved[279]: Positive Trust Anchors:
Jan 13 20:22:56.052865 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:22:56.052897 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:22:56.059782 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jan 13 20:22:56.060843 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:22:56.061974 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:22:56.098255 kernel: SCSI subsystem initialized
Jan 13 20:22:56.103245 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:22:56.112293 kernel: iscsi: registered transport (tcp)
Jan 13 20:22:56.125274 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:22:56.125317 kernel: QLogic iSCSI HBA Driver
Jan 13 20:22:56.166602 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:22:56.182380 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:22:56.199265 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:22:56.199304 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:22:56.199324 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:22:56.251252 kernel: raid6: neonx8 gen() 14986 MB/s
Jan 13 20:22:56.268281 kernel: raid6: neonx4 gen() 15470 MB/s
Jan 13 20:22:56.285267 kernel: raid6: neonx2 gen() 13089 MB/s
Jan 13 20:22:56.302261 kernel: raid6: neonx1 gen() 10384 MB/s
Jan 13 20:22:56.319262 kernel: raid6: int64x8 gen() 6918 MB/s
Jan 13 20:22:56.336269 kernel: raid6: int64x4 gen() 7352 MB/s
Jan 13 20:22:56.353272 kernel: raid6: int64x2 gen() 5844 MB/s
Jan 13 20:22:56.370418 kernel: raid6: int64x1 gen() 5008 MB/s
Jan 13 20:22:56.370462 kernel: raid6: using algorithm neonx4 gen() 15470 MB/s
Jan 13 20:22:56.388353 kernel: raid6: .... xor() 12235 MB/s, rmw enabled
Jan 13 20:22:56.388386 kernel: raid6: using neon recovery algorithm
Jan 13 20:22:56.393720 kernel: xor: measuring software checksum speed
Jan 13 20:22:56.393742 kernel: 8regs : 19788 MB/sec
Jan 13 20:22:56.394387 kernel: 32regs : 19641 MB/sec
Jan 13 20:22:56.395654 kernel: arm64_neon : 26963 MB/sec
Jan 13 20:22:56.395694 kernel: xor: using function: arm64_neon (26963 MB/sec)
Jan 13 20:22:56.446275 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:22:56.458161 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:22:56.472401 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:22:56.484997 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 13 20:22:56.489161 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:22:56.500404 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:22:56.513126 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jan 13 20:22:56.541133 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:22:56.553405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:22:56.596144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:22:56.607451 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:22:56.618982 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:22:56.621752 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:22:56.623545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:22:56.626014 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:22:56.632640 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:22:56.647658 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:22:56.651256 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:22:56.657774 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:22:56.657880 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:22:56.657890 kernel: GPT:9289727 != 19775487
Jan 13 20:22:56.657900 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:22:56.657916 kernel: GPT:9289727 != 19775487
Jan 13 20:22:56.657925 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:22:56.657934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:22:56.659084 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:22:56.659209 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:22:56.664150 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:22:56.665851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:22:56.675060 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (506)
Jan 13 20:22:56.666014 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:22:56.671868 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:22:56.680259 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (510)
Jan 13 20:22:56.682564 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:22:56.690510 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:22:56.694279 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:22:56.702684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:22:56.710543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:22:56.717682 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:22:56.718943 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:22:56.734427 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:22:56.739440 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:22:56.743665 disk-uuid[550]: Primary Header is updated.
Jan 13 20:22:56.743665 disk-uuid[550]: Secondary Entries is updated.
Jan 13 20:22:56.743665 disk-uuid[550]: Secondary Header is updated.
Jan 13 20:22:56.747266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:22:56.764294 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:22:57.754555 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:22:57.754984 disk-uuid[551]: The operation has completed successfully.
Jan 13 20:22:57.777563 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:22:57.777662 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:22:57.806282 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:22:57.809160 sh[570]: Success
Jan 13 20:22:57.826263 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:22:57.853024 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:22:57.865705 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:22:57.867577 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:22:57.879260 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:22:57.879300 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:22:57.879311 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:22:57.881770 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:22:57.881785 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:22:57.885295 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:22:57.886691 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:22:57.904434 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:22:57.906746 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:22:57.913628 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:22:57.913671 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:22:57.913681 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:22:57.916256 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:22:57.926697 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:22:57.930061 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:22:57.937260 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:22:57.945381 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:22:58.004135 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:22:58.015393 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:22:58.040782 systemd-networkd[757]: lo: Link UP
Jan 13 20:22:58.040797 systemd-networkd[757]: lo: Gained carrier
Jan 13 20:22:58.041699 systemd-networkd[757]: Enumeration completed
Jan 13 20:22:58.041793 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:22:58.042151 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:58.042154 systemd-networkd[757]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:22:58.043085 systemd-networkd[757]: eth0: Link UP
Jan 13 20:22:58.043088 systemd-networkd[757]: eth0: Gained carrier
Jan 13 20:22:58.043095 systemd-networkd[757]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:22:58.044331 systemd[1]: Reached target network.target - Network.
Jan 13 20:22:58.057638 ignition[677]: Ignition 2.20.0
Jan 13 20:22:58.057649 ignition[677]: Stage: fetch-offline
Jan 13 20:22:58.057683 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:58.057692 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:58.057910 ignition[677]: parsed url from cmdline: ""
Jan 13 20:22:58.057913 ignition[677]: no config URL provided
Jan 13 20:22:58.057919 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:22:58.057926 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:22:58.057952 ignition[677]: op(1): [started] loading QEMU firmware config module
Jan 13 20:22:58.057956 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:22:58.064881 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:22:58.066291 systemd-networkd[757]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:22:58.072528 ignition[677]: parsing config with SHA512: 647edaecbe08912068f0ee58dc486fe397b4a1ccdc8c6d96ba320684bdef9fa89bd85b1c533d9c5989433ebd8b304ba0d71cace3b9ee635f7fc0bea17afec862
Jan 13 20:22:58.076409 unknown[677]: fetched base config from "system"
Jan 13 20:22:58.076420 unknown[677]: fetched user config from "qemu"
Jan 13 20:22:58.076705 ignition[677]: fetch-offline: fetch-offline passed
Jan 13 20:22:58.078617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:22:58.076791 ignition[677]: Ignition finished successfully
Jan 13 20:22:58.080222 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:22:58.090411 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:22:58.100774 ignition[768]: Ignition 2.20.0
Jan 13 20:22:58.100792 ignition[768]: Stage: kargs
Jan 13 20:22:58.100951 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:58.100960 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:58.101636 ignition[768]: kargs: kargs passed
Jan 13 20:22:58.105565 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:22:58.101677 ignition[768]: Ignition finished successfully
Jan 13 20:22:58.117389 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:22:58.126711 ignition[776]: Ignition 2.20.0
Jan 13 20:22:58.126723 ignition[776]: Stage: disks
Jan 13 20:22:58.126879 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:58.126888 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:58.127566 ignition[776]: disks: disks passed
Jan 13 20:22:58.130280 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:22:58.127610 ignition[776]: Ignition finished successfully
Jan 13 20:22:58.131858 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:22:58.133545 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:22:58.135342 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:22:58.137236 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:22:58.139287 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:22:58.149544 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:22:58.160409 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:22:58.164612 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:22:58.167065 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:22:58.210251 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:22:58.210523 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:22:58.211916 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:22:58.221337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:22:58.223636 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:22:58.224716 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:22:58.224759 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:22:58.224795 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:22:58.229105 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:22:58.238067 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (795)
Jan 13 20:22:58.238111 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:22:58.238125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:22:58.238137 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:22:58.232093 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:22:58.241252 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:22:58.242956 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:22:58.306396 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:22:58.310435 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:22:58.314274 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:22:58.318264 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:22:58.389349 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:22:58.404340 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:22:58.406612 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:22:58.411273 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:22:58.427662 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:22:58.430170 ignition[908]: INFO : Ignition 2.20.0
Jan 13 20:22:58.430170 ignition[908]: INFO : Stage: mount
Jan 13 20:22:58.430170 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:58.430170 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:58.430170 ignition[908]: INFO : mount: mount passed
Jan 13 20:22:58.430170 ignition[908]: INFO : Ignition finished successfully
Jan 13 20:22:58.430915 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:22:58.441311 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:22:58.877770 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:22:58.893431 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:22:58.899909 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (922)
Jan 13 20:22:58.899942 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:22:58.899953 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:22:58.901508 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:22:58.904252 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:22:58.904800 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:22:58.920584 ignition[939]: INFO : Ignition 2.20.0
Jan 13 20:22:58.920584 ignition[939]: INFO : Stage: files
Jan 13 20:22:58.922207 ignition[939]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:58.922207 ignition[939]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:58.922207 ignition[939]: DEBUG : files: compiled without relabeling support, skipping
Jan 13 20:22:58.922207 ignition[939]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 13 20:22:58.922207 ignition[939]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:22:58.928502 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:22:58.924320 unknown[939]: wrote ssh authorized keys file for user: core
Jan 13 20:22:59.262274 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 13 20:22:59.306468 systemd-networkd[757]: eth0: Gained IPv6LL
Jan 13 20:22:59.482294 ignition[939]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:22:59.482294 ignition[939]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 13 20:22:59.485918 ignition[939]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:22:59.485918 ignition[939]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:22:59.485918 ignition[939]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 13 20:22:59.485918 ignition[939]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:22:59.509105 ignition[939]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:22:59.513023 ignition[939]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:22:59.515683 ignition[939]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:22:59.515683 ignition[939]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:22:59.515683 ignition[939]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:22:59.515683 ignition[939]: INFO : files: files passed
Jan 13 20:22:59.515683 ignition[939]: INFO : Ignition finished successfully
Jan 13 20:22:59.516128 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:22:59.526369 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:22:59.528777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:22:59.530282 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:22:59.532279 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:22:59.536304 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:22:59.538814 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:22:59.538814 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:22:59.542190 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:22:59.541778 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:22:59.543602 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:22:59.553412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:22:59.573048 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:22:59.573159 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:22:59.575465 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:22:59.577198 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:22:59.579120 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:22:59.602446 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:22:59.614753 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:22:59.617554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:22:59.630318 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:22:59.631545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:22:59.633518 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:22:59.635213 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:22:59.635356 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:22:59.638056 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:22:59.640298 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:22:59.642100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:22:59.643927 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:22:59.645935 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:22:59.648138 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:22:59.650108 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:22:59.652207 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:22:59.654353 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:22:59.656223 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:22:59.657954 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:22:59.658094 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:22:59.660799 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:22:59.662887 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:22:59.664996 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:22:59.668285 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:22:59.669516 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:22:59.669638 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:22:59.672552 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:22:59.672667 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:22:59.674630 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:22:59.676257 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:22:59.677342 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:22:59.678692 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:22:59.680403 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:22:59.682334 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:22:59.682429 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:22:59.684674 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:22:59.684758 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:22:59.686397 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:22:59.686511 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:22:59.688417 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:22:59.688517 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:22:59.700410 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:22:59.702094 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:22:59.703121 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:22:59.703270 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:22:59.705301 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:22:59.705408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:22:59.712500 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:22:59.712602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:22:59.717238 ignition[995]: INFO : Ignition 2.20.0
Jan 13 20:22:59.717238 ignition[995]: INFO : Stage: umount
Jan 13 20:22:59.717238 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:22:59.717238 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:22:59.717238 ignition[995]: INFO : umount: umount passed
Jan 13 20:22:59.717238 ignition[995]: INFO : Ignition finished successfully
Jan 13 20:22:59.717522 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:22:59.718082 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:22:59.719262 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:22:59.721205 systemd[1]: Stopped target network.target - Network.
Jan 13 20:22:59.723134 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:22:59.723206 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:22:59.724982 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:22:59.725043 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:22:59.726893 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:22:59.726937 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:22:59.728793 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:22:59.728837 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:22:59.730744 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:22:59.732581 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:22:59.742420 systemd-networkd[757]: eth0: DHCPv6 lease lost
Jan 13 20:22:59.744527 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:22:59.744645 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:22:59.747195 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:22:59.747361 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:22:59.749952 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:22:59.749997 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:22:59.764370 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:22:59.765315 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:22:59.765393 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:22:59.767599 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:22:59.767658 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:22:59.769460 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:22:59.769509 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:22:59.771696 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:22:59.771746 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:22:59.773888 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:22:59.776576 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:22:59.776696 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:22:59.779310 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:22:59.779363 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:22:59.786423 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:22:59.787306 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:22:59.788675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:22:59.788805 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:22:59.791324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:22:59.791385 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:22:59.793262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:22:59.793299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:22:59.795321 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:22:59.795376 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:22:59.798238 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:22:59.798288 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:22:59.800960 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:22:59.801022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:22:59.811379 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:22:59.812408 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:22:59.812477 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:22:59.814513 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 20:22:59.814561 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:22:59.816500 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:22:59.816546 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:22:59.818628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:22:59.818677 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:22:59.820876 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 13 20:22:59.820960 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 13 20:22:59.823264 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 13 20:22:59.825321 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 13 20:22:59.835537 systemd[1]: Switching root. Jan 13 20:22:59.869429 systemd-journald[239]: Journal stopped Jan 13 20:23:00.539460 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 13 20:23:00.539517 kernel: SELinux: policy capability network_peer_controls=1 Jan 13 20:23:00.539530 kernel: SELinux: policy capability open_perms=1 Jan 13 20:23:00.539542 kernel: SELinux: policy capability extended_socket_class=1 Jan 13 20:23:00.539552 kernel: SELinux: policy capability always_check_network=0 Jan 13 20:23:00.539561 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 13 20:23:00.539571 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 13 20:23:00.539580 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 13 20:23:00.539589 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 13 20:23:00.539600 kernel: audit: type=1403 audit(1736799779.998:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 13 20:23:00.539616 systemd[1]: Successfully loaded SELinux policy in 33.694ms. Jan 13 20:23:00.539637 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.988ms. 
Jan 13 20:23:00.539649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 13 20:23:00.539660 systemd[1]: Detected virtualization kvm. Jan 13 20:23:00.539670 systemd[1]: Detected architecture arm64. Jan 13 20:23:00.539680 systemd[1]: Detected first boot. Jan 13 20:23:00.539691 systemd[1]: Initializing machine ID from VM UUID. Jan 13 20:23:00.539703 zram_generator::config[1042]: No configuration found. Jan 13 20:23:00.539714 systemd[1]: Populated /etc with preset unit settings. Jan 13 20:23:00.539725 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 13 20:23:00.539735 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 13 20:23:00.539746 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 13 20:23:00.539757 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 13 20:23:00.539768 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 13 20:23:00.539779 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 13 20:23:00.539791 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 13 20:23:00.539801 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 13 20:23:00.539812 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 13 20:23:00.539823 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 13 20:23:00.539834 systemd[1]: Created slice user.slice - User and Session Slice. 
Jan 13 20:23:00.539845 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 13 20:23:00.539857 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 13 20:23:00.539868 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 13 20:23:00.539879 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 13 20:23:00.539890 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 13 20:23:00.539902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 13 20:23:00.539912 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 13 20:23:00.539923 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 13 20:23:00.539933 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 13 20:23:00.539943 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 13 20:23:00.539954 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 13 20:23:00.539967 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 13 20:23:00.539978 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 20:23:00.539989 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 20:23:00.540005 systemd[1]: Reached target slices.target - Slice Units. Jan 13 20:23:00.540017 systemd[1]: Reached target swap.target - Swaps. Jan 13 20:23:00.540027 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 13 20:23:00.540039 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 13 20:23:00.540049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jan 13 20:23:00.540060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 13 20:23:00.540071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 13 20:23:00.540084 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 13 20:23:00.540095 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 13 20:23:00.540106 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 13 20:23:00.540117 systemd[1]: Mounting media.mount - External Media Directory... Jan 13 20:23:00.540128 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 13 20:23:00.540139 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 13 20:23:00.540149 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 13 20:23:00.540161 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 13 20:23:00.540173 systemd[1]: Reached target machines.target - Containers. Jan 13 20:23:00.540184 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 13 20:23:00.540194 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:23:00.540205 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 20:23:00.540216 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 13 20:23:00.540253 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:23:00.540266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 13 20:23:00.540277 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 13 20:23:00.540287 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 13 20:23:00.540300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:23:00.540311 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 13 20:23:00.540322 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 13 20:23:00.540337 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 13 20:23:00.540347 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 13 20:23:00.540358 kernel: fuse: init (API version 7.39) Jan 13 20:23:00.540368 systemd[1]: Stopped systemd-fsck-usr.service. Jan 13 20:23:00.540379 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 20:23:00.540389 kernel: ACPI: bus type drm_connector registered Jan 13 20:23:00.540401 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 20:23:00.540412 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 13 20:23:00.540422 kernel: loop: module loaded Jan 13 20:23:00.540433 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 13 20:23:00.540443 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 20:23:00.540454 systemd[1]: verity-setup.service: Deactivated successfully. Jan 13 20:23:00.540466 systemd[1]: Stopped verity-setup.service. Jan 13 20:23:00.540477 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 13 20:23:00.540488 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 13 20:23:00.540500 systemd[1]: Mounted media.mount - External Media Directory. Jan 13 20:23:00.540511 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 13 20:23:00.540522 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 13 20:23:00.540532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 13 20:23:00.540545 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 13 20:23:00.540574 systemd-journald[1120]: Collecting audit messages is disabled. Jan 13 20:23:00.540595 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 20:23:00.540606 systemd-journald[1120]: Journal started Jan 13 20:23:00.540628 systemd-journald[1120]: Runtime Journal (/run/log/journal/6af3fb9ecd144b64b88df7ee616bbc38) is 5.9M, max 47.3M, 41.4M free. Jan 13 20:23:00.336772 systemd[1]: Queued start job for default target multi-user.target. Jan 13 20:23:00.350146 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 13 20:23:00.350470 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 13 20:23:00.543794 systemd[1]: Started systemd-journald.service - Journal Service. Jan 13 20:23:00.544530 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 13 20:23:00.544715 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 13 20:23:00.546133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:23:00.547291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:23:00.548637 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:23:00.548775 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:23:00.550122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:23:00.550289 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:23:00.551712 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 13 20:23:00.551850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Jan 13 20:23:00.553256 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:23:00.553389 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:23:00.554744 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 20:23:00.556153 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 13 20:23:00.557639 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 13 20:23:00.569371 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 13 20:23:00.578360 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 13 20:23:00.580520 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 13 20:23:00.581629 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 13 20:23:00.581681 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 20:23:00.583822 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 13 20:23:00.586091 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 13 20:23:00.588220 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 13 20:23:00.589322 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:23:00.590845 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 13 20:23:00.593155 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 13 20:23:00.594511 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 13 20:23:00.596416 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 13 20:23:00.600314 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:23:00.603079 systemd-journald[1120]: Time spent on flushing to /var/log/journal/6af3fb9ecd144b64b88df7ee616bbc38 is 22.256ms for 838 entries. Jan 13 20:23:00.603079 systemd-journald[1120]: System Journal (/var/log/journal/6af3fb9ecd144b64b88df7ee616bbc38) is 8.0M, max 195.6M, 187.6M free. Jan 13 20:23:00.635881 systemd-journald[1120]: Received client request to flush runtime journal. Jan 13 20:23:00.606047 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:00.609625 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 13 20:23:00.612596 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 20:23:00.619829 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 20:23:00.621440 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 13 20:23:00.623551 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 13 20:23:00.625748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 13 20:23:00.627672 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 13 20:23:00.636922 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:00.637765 kernel: loop0: detected capacity change from 0 to 113536 Jan 13 20:23:00.639199 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 13 20:23:00.644889 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 13 20:23:00.651279 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 13 20:23:00.654454 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 13 20:23:00.656700 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 13 20:23:00.667123 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 13 20:23:00.667141 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Jan 13 20:23:00.672288 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 20:23:00.674415 udevadm[1170]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 13 20:23:00.678265 kernel: loop1: detected capacity change from 0 to 189592 Jan 13 20:23:00.681899 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 13 20:23:00.686051 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 13 20:23:00.686764 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 13 20:23:00.702520 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 13 20:23:00.707327 kernel: loop2: detected capacity change from 0 to 116808 Jan 13 20:23:00.711074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 20:23:00.720695 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 13 20:23:00.720713 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jan 13 20:23:00.724016 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 13 20:23:00.758254 kernel: loop3: detected capacity change from 0 to 113536 Jan 13 20:23:00.763259 kernel: loop4: detected capacity change from 0 to 189592 Jan 13 20:23:00.768242 kernel: loop5: detected capacity change from 0 to 116808 Jan 13 20:23:00.770742 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 13 20:23:00.771134 (sd-merge)[1182]: Merged extensions into '/usr'. Jan 13 20:23:00.777143 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Jan 13 20:23:00.777333 systemd[1]: Reloading... Jan 13 20:23:00.832260 zram_generator::config[1211]: No configuration found. Jan 13 20:23:00.889486 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 13 20:23:00.928478 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:23:00.964384 systemd[1]: Reloading finished in 186 ms. Jan 13 20:23:00.989697 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 13 20:23:00.991371 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 13 20:23:01.009401 systemd[1]: Starting ensure-sysext.service... Jan 13 20:23:01.011628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 20:23:01.018266 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Jan 13 20:23:01.018280 systemd[1]: Reloading... Jan 13 20:23:01.036915 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 13 20:23:01.037201 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 13 20:23:01.037911 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 13 20:23:01.038147 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 13 20:23:01.038199 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Jan 13 20:23:01.040318 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:23:01.040331 systemd-tmpfiles[1243]: Skipping /boot Jan 13 20:23:01.048879 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Jan 13 20:23:01.048893 systemd-tmpfiles[1243]: Skipping /boot Jan 13 20:23:01.064409 zram_generator::config[1267]: No configuration found. Jan 13 20:23:01.149049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 20:23:01.184220 systemd[1]: Reloading finished in 165 ms. Jan 13 20:23:01.197264 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 13 20:23:01.198719 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 20:23:01.217202 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 13 20:23:01.219586 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 13 20:23:01.221855 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 13 20:23:01.226531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 20:23:01.229723 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 20:23:01.233505 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 13 20:23:01.236576 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:23:01.240487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:23:01.246529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:23:01.251499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:23:01.252839 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:23:01.253621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:23:01.253773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:23:01.256740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:23:01.258360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:23:01.260132 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 13 20:23:01.263298 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:23:01.263428 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:23:01.267681 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Jan 13 20:23:01.271516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:23:01.278897 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:23:01.287137 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:23:01.292415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:23:01.293487 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 13 20:23:01.297093 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 13 20:23:01.299667 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 13 20:23:01.301437 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 20:23:01.303436 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 13 20:23:01.305293 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 13 20:23:01.306964 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:23:01.308266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:23:01.309959 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:23:01.310101 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:23:01.311791 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:23:01.311939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:23:01.313661 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 13 20:23:01.325531 systemd[1]: Finished ensure-sysext.service. Jan 13 20:23:01.330603 augenrules[1363]: No rules Jan 13 20:23:01.334083 systemd[1]: audit-rules.service: Deactivated successfully. Jan 13 20:23:01.336554 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 13 20:23:01.340671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 13 20:23:01.349275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1349) Jan 13 20:23:01.368735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 13 20:23:01.373945 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Jan 13 20:23:01.377855 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 13 20:23:01.381135 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 13 20:23:01.382317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 13 20:23:01.384899 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 13 20:23:01.388329 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 13 20:23:01.389455 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 13 20:23:01.390157 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 13 20:23:01.393826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 13 20:23:01.393973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 13 20:23:01.395482 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 13 20:23:01.395614 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 13 20:23:01.397042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 13 20:23:01.397171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 13 20:23:01.398678 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 13 20:23:01.398824 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 13 20:23:01.403042 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 13 20:23:01.411469 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 13 20:23:01.414481 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 13 20:23:01.418392 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 13 20:23:01.418466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 13 20:23:01.425902 systemd-resolved[1309]: Positive Trust Anchors: Jan 13 20:23:01.425978 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 20:23:01.426020 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 20:23:01.434548 systemd-resolved[1309]: Defaulting to hostname 'linux'. Jan 13 20:23:01.441756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 20:23:01.443050 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 20:23:01.452723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 13 20:23:01.468253 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 13 20:23:01.470273 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 13 20:23:01.477785 systemd-networkd[1390]: lo: Link UP Jan 13 20:23:01.477799 systemd-networkd[1390]: lo: Gained carrier Jan 13 20:23:01.478624 systemd-networkd[1390]: Enumeration completed Jan 13 20:23:01.480483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 20:23:01.481799 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 20:23:01.482860 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:23:01.482869 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 20:23:01.483331 systemd[1]: Reached target network.target - Network. Jan 13 20:23:01.485401 systemd-networkd[1390]: eth0: Link UP Jan 13 20:23:01.485410 systemd-networkd[1390]: eth0: Gained carrier Jan 13 20:23:01.485423 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 20:23:01.490080 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 13 20:23:01.500549 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 13 20:23:01.503615 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 13 20:23:01.505047 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.112/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 20:23:01.505946 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Jan 13 20:23:01.506904 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 13 20:23:01.507018 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2025-01-13 20:23:01.423330 UTC. Jan 13 20:23:01.522664 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 13 20:23:01.528966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 20:23:01.550710 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 13 20:23:01.552168 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 13 20:23:01.553309 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 20:23:01.554525 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 13 20:23:01.555806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 13 20:23:01.557255 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 13 20:23:01.558395 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 13 20:23:01.559655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 13 20:23:01.560909 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 13 20:23:01.560954 systemd[1]: Reached target paths.target - Path Units. Jan 13 20:23:01.562071 systemd[1]: Reached target timers.target - Timer Units. Jan 13 20:23:01.564060 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 13 20:23:01.566529 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 13 20:23:01.577242 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 13 20:23:01.579646 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 13 20:23:01.581371 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 13 20:23:01.582568 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 20:23:01.583525 systemd[1]: Reached target basic.target - Basic System. 
Jan 13 20:23:01.584515 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:23:01.584547 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:23:01.585533 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:23:01.587524 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:23:01.588337 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:23:01.591439 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:23:01.596509 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:23:01.598721 jq[1418]: false
Jan 13 20:23:01.599095 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:23:01.600239 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:23:01.603414 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:23:01.608535 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:23:01.612109 extend-filesystems[1419]: Found loop3
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found loop4
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found loop5
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda1
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda2
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda3
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found usr
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda4
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda6
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda7
Jan 13 20:23:01.612979 extend-filesystems[1419]: Found vda9
Jan 13 20:23:01.612979 extend-filesystems[1419]: Checking size of /dev/vda9
Jan 13 20:23:01.612730 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:23:01.623636 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:23:01.624150 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:23:01.626762 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:23:01.630438 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:23:01.634264 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:23:01.636364 extend-filesystems[1419]: Resized partition /dev/vda9
Jan 13 20:23:01.637580 dbus-daemon[1417]: [system] SELinux support is enabled
Jan 13 20:23:01.646164 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:23:01.649396 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:23:01.651278 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1349)
Jan 13 20:23:01.652469 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:23:01.652641 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:23:01.652889 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:23:01.653036 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:23:01.654427 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:23:01.656338 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:23:01.666657 jq[1434]: true
Jan 13 20:23:01.672438 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:23:01.672571 (ntainerd)[1441]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:23:01.686723 jq[1449]: true
Jan 13 20:23:01.687153 update_engine[1431]: I20250113 20:23:01.687005  1431 main.cc:92] Flatcar Update Engine starting
Jan 13 20:23:01.689509 update_engine[1431]: I20250113 20:23:01.689467  1431 update_check_scheduler.cc:74] Next update check in 8m52s
Jan 13 20:23:01.695608 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:23:01.696908 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:23:01.696944 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:23:01.700139 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:23:01.700171 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:23:01.700189 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:23:01.701140 systemd-logind[1424]: New seat seat0.
Jan 13 20:23:01.713396 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:23:01.713425 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:23:01.736022 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:23:01.736022 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:23:01.736022 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:23:01.714778 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:23:01.740533 extend-filesystems[1419]: Resized filesystem in /dev/vda9
Jan 13 20:23:01.738215 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:23:01.738405 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 13 20:23:01.754258 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:23:01.755224 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:23:01.758180 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:23:01.771742 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 20:23:01.873800 containerd[1441]: time="2025-01-13T20:23:01.873714360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:23:01.900792 containerd[1441]: time="2025-01-13T20:23:01.900680080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902094 containerd[1441]: time="2025-01-13T20:23:01.902054400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902094 containerd[1441]: time="2025-01-13T20:23:01.902087280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:23:01.902173 containerd[1441]: time="2025-01-13T20:23:01.902103800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:23:01.902305 containerd[1441]: time="2025-01-13T20:23:01.902279360Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:23:01.902333 containerd[1441]: time="2025-01-13T20:23:01.902304040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902374 containerd[1441]: time="2025-01-13T20:23:01.902360680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902403 containerd[1441]: time="2025-01-13T20:23:01.902375000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902556 containerd[1441]: time="2025-01-13T20:23:01.902530600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902556 containerd[1441]: time="2025-01-13T20:23:01.902550880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902593 containerd[1441]: time="2025-01-13T20:23:01.902563720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902593 containerd[1441]: time="2025-01-13T20:23:01.902572760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902652 containerd[1441]: time="2025-01-13T20:23:01.902639440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902836 containerd[1441]: time="2025-01-13T20:23:01.902820440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902930 containerd[1441]: time="2025-01-13T20:23:01.902915720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:23:01.902950 containerd[1441]: time="2025-01-13T20:23:01.902931840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:23:01.903035 containerd[1441]: time="2025-01-13T20:23:01.903020720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:23:01.903079 containerd[1441]: time="2025-01-13T20:23:01.903068960Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:23:01.907536 containerd[1441]: time="2025-01-13T20:23:01.907502280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:23:01.907586 containerd[1441]: time="2025-01-13T20:23:01.907555240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:23:01.907586 containerd[1441]: time="2025-01-13T20:23:01.907570400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:23:01.907648 containerd[1441]: time="2025-01-13T20:23:01.907586960Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:23:01.907648 containerd[1441]: time="2025-01-13T20:23:01.907601520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:23:01.907771 containerd[1441]: time="2025-01-13T20:23:01.907743560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908005240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908154800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908174800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908190480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908204960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908217600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908246240Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908261280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908275520Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908287520Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908299320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908310520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908329640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910064 containerd[1441]: time="2025-01-13T20:23:01.908343520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908356040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908368120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908381520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908394360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908407000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908424560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908438120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908455040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908466160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908477440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908490240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908504520Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908524920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908538200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910357 containerd[1441]: time="2025-01-13T20:23:01.908549120Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908723160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908743560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908753440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908765520Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908788480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908803600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908813240Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:23:01.910617 containerd[1441]: time="2025-01-13T20:23:01.908822600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:23:01.910748 containerd[1441]: time="2025-01-13T20:23:01.909090040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 20:23:01.910748 containerd[1441]: time="2025-01-13T20:23:01.909133360Z" level=info msg="Connect containerd service"
Jan 13 20:23:01.910748 containerd[1441]: time="2025-01-13T20:23:01.909167480Z" level=info msg="using legacy CRI server"
Jan 13 20:23:01.910748 containerd[1441]: time="2025-01-13T20:23:01.909176320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:23:01.910748 containerd[1441]: time="2025-01-13T20:23:01.909410280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:23:01.911155 containerd[1441]: time="2025-01-13T20:23:01.911126040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:23:01.911470 containerd[1441]: time="2025-01-13T20:23:01.911417720Z" level=info msg="Start subscribing containerd event"
Jan 13 20:23:01.911508 containerd[1441]: time="2025-01-13T20:23:01.911473320Z" level=info msg="Start recovering state"
Jan 13 20:23:01.911617 containerd[1441]: time="2025-01-13T20:23:01.911537720Z" level=info msg="Start event monitor"
Jan 13 20:23:01.911617 containerd[1441]: time="2025-01-13T20:23:01.911553200Z" level=info msg="Start snapshots syncer"
Jan 13 20:23:01.911617 containerd[1441]: time="2025-01-13T20:23:01.911564720Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:23:01.911617 containerd[1441]: time="2025-01-13T20:23:01.911575480Z" level=info msg="Start streaming server"
Jan 13 20:23:01.912026 containerd[1441]: time="2025-01-13T20:23:01.912002040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:23:01.912169 containerd[1441]: time="2025-01-13T20:23:01.912154200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:23:01.912372 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:23:01.913534 containerd[1441]: time="2025-01-13T20:23:01.913510320Z" level=info msg="containerd successfully booted in 0.042081s"
Jan 13 20:23:02.212958 sshd_keygen[1442]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:23:02.231074 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:23:02.245837 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:23:02.251350 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:23:02.252339 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:23:02.255417 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:23:02.269288 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:23:02.271983 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:23:02.274039 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:23:02.275347 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:23:03.274331 systemd-networkd[1390]: eth0: Gained IPv6LL
Jan 13 20:23:03.277500 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:23:03.279589 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:23:03.292492 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:23:03.295214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:03.297473 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:23:03.314385 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:23:03.315919 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:23:03.317776 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:23:03.321776 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:23:03.800034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:03.801569 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:23:03.804260 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:23:03.808412 systemd[1]: Startup finished in 557ms (kernel) + 4.289s (initrd) + 3.846s (userspace) = 8.693s.
Jan 13 20:23:04.235453 kubelet[1523]: E0113 20:23:04.235391    1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:23:04.237795 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:23:04.237940 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:23:08.197132 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:23:08.198316 systemd[1]: Started sshd@0-10.0.0.112:22-10.0.0.1:55164.service - OpenSSH per-connection server daemon (10.0.0.1:55164).
Jan 13 20:23:08.260409 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 55164 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:08.268057 sshd-session[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:08.288042 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:23:08.297478 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:23:08.298933 systemd-logind[1424]: New session 1 of user core.
Jan 13 20:23:08.306553 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:23:08.308709 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:23:08.315086 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:23:08.386216 systemd[1540]: Queued start job for default target default.target.
Jan 13 20:23:08.394259 systemd[1540]: Created slice app.slice - User Application Slice.
Jan 13 20:23:08.394297 systemd[1540]: Reached target paths.target - Paths.
Jan 13 20:23:08.394310 systemd[1540]: Reached target timers.target - Timers.
Jan 13 20:23:08.395638 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:23:08.406632 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:23:08.406743 systemd[1540]: Reached target sockets.target - Sockets.
Jan 13 20:23:08.406756 systemd[1540]: Reached target basic.target - Basic System.
Jan 13 20:23:08.406793 systemd[1540]: Reached target default.target - Main User Target.
Jan 13 20:23:08.406821 systemd[1540]: Startup finished in 86ms.
Jan 13 20:23:08.407064 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:23:08.408453 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:23:08.472889 systemd[1]: Started sshd@1-10.0.0.112:22-10.0.0.1:55176.service - OpenSSH per-connection server daemon (10.0.0.1:55176).
Jan 13 20:23:08.519823 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 55176 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:08.521184 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:08.525287 systemd-logind[1424]: New session 2 of user core.
Jan 13 20:23:08.534406 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:23:08.591060 sshd[1553]: Connection closed by 10.0.0.1 port 55176
Jan 13 20:23:08.591620 sshd-session[1551]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:08.602218 systemd[1]: sshd@1-10.0.0.112:22-10.0.0.1:55176.service: Deactivated successfully.
Jan 13 20:23:08.603753 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:23:08.606412 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:23:08.607741 systemd[1]: Started sshd@2-10.0.0.112:22-10.0.0.1:55188.service - OpenSSH per-connection server daemon (10.0.0.1:55188).
Jan 13 20:23:08.608460 systemd-logind[1424]: Removed session 2.
Jan 13 20:23:08.653397 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 55188 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:08.654631 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:08.658437 systemd-logind[1424]: New session 3 of user core.
Jan 13 20:23:08.675436 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:23:08.722948 sshd[1560]: Connection closed by 10.0.0.1 port 55188
Jan 13 20:23:08.723297 sshd-session[1558]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:08.742492 systemd[1]: sshd@2-10.0.0.112:22-10.0.0.1:55188.service: Deactivated successfully.
Jan 13 20:23:08.743872 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:23:08.746283 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:23:08.747484 systemd[1]: Started sshd@3-10.0.0.112:22-10.0.0.1:55202.service - OpenSSH per-connection server daemon (10.0.0.1:55202).
Jan 13 20:23:08.748130 systemd-logind[1424]: Removed session 3.
Jan 13 20:23:08.792206 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 55202 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:08.793481 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:08.796948 systemd-logind[1424]: New session 4 of user core.
Jan 13 20:23:08.803358 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:23:08.853971 sshd[1567]: Connection closed by 10.0.0.1 port 55202
Jan 13 20:23:08.854279 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:08.869503 systemd[1]: sshd@3-10.0.0.112:22-10.0.0.1:55202.service: Deactivated successfully.
Jan 13 20:23:08.870771 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:23:08.873262 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:23:08.874262 systemd[1]: Started sshd@4-10.0.0.112:22-10.0.0.1:55204.service - OpenSSH per-connection server daemon (10.0.0.1:55204).
Jan 13 20:23:08.875600 systemd-logind[1424]: Removed session 4.
Jan 13 20:23:08.919158 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 55204 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:08.920361 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:08.924183 systemd-logind[1424]: New session 5 of user core.
Jan 13 20:23:08.935377 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:23:08.996682 sudo[1575]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:23:08.996948 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:09.012138 sudo[1575]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:09.013712 sshd[1574]: Connection closed by 10.0.0.1 port 55204
Jan 13 20:23:09.014215 sshd-session[1572]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:09.023535 systemd[1]: sshd@4-10.0.0.112:22-10.0.0.1:55204.service: Deactivated successfully.
Jan 13 20:23:09.024850 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:23:09.026309 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:23:09.027512 systemd[1]: Started sshd@5-10.0.0.112:22-10.0.0.1:55208.service - OpenSSH per-connection server daemon (10.0.0.1:55208).
Jan 13 20:23:09.028183 systemd-logind[1424]: Removed session 5.
Jan 13 20:23:09.073086 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 55208 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:09.074405 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:09.078302 systemd-logind[1424]: New session 6 of user core.
Jan 13 20:23:09.084388 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:23:09.134523 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:23:09.134791 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:09.137751 sudo[1584]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:09.142036 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:23:09.142313 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:09.162517 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:23:09.184654 augenrules[1606]: No rules
Jan 13 20:23:09.185880 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:23:09.186052 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:23:09.187004 sudo[1583]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:09.188195 sshd[1582]: Connection closed by 10.0.0.1 port 55208
Jan 13 20:23:09.188667 sshd-session[1580]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:09.195561 systemd[1]: sshd@5-10.0.0.112:22-10.0.0.1:55208.service: Deactivated successfully.
Jan 13 20:23:09.196946 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:23:09.198115 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:23:09.199253 systemd[1]: Started sshd@6-10.0.0.112:22-10.0.0.1:55210.service - OpenSSH per-connection server daemon (10.0.0.1:55210).
Jan 13 20:23:09.200022 systemd-logind[1424]: Removed session 6.
Jan 13 20:23:09.244579 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 55210 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:23:09.245768 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:23:09.249744 systemd-logind[1424]: New session 7 of user core.
Jan 13 20:23:09.258395 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:23:09.308662 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:23:09.308927 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:23:09.331657 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:23:09.346599 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:23:09.346799 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:23:09.766426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:09.782529 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:09.800671 systemd[1]: Reloading requested from client PID 1658 ('systemctl') (unit session-7.scope)...
Jan 13 20:23:09.800686 systemd[1]: Reloading...
Jan 13 20:23:09.864253 zram_generator::config[1697]: No configuration found.
Jan 13 20:23:10.036032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:23:10.086876 systemd[1]: Reloading finished in 285 ms.
Jan 13 20:23:10.134539 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:10.136913 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:23:10.137171 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:10.138595 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:23:10.233863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:23:10.238988 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:23:10.275564 kubelet[1743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:23:10.275564 kubelet[1743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:23:10.275564 kubelet[1743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:23:10.275910 kubelet[1743]: I0113 20:23:10.275734 1743 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:23:11.764561 kubelet[1743]: I0113 20:23:11.764515 1743 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:23:11.764561 kubelet[1743]: I0113 20:23:11.764549 1743 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:23:11.764954 kubelet[1743]: I0113 20:23:11.764791 1743 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:23:11.802488 kubelet[1743]: I0113 20:23:11.802439 1743 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:23:11.811093 kubelet[1743]: E0113 20:23:11.811065 1743 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:23:11.811404 kubelet[1743]: I0113 20:23:11.811264 1743 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:23:11.814604 kubelet[1743]: I0113 20:23:11.814578 1743 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 20:23:11.815362 kubelet[1743]: I0113 20:23:11.815333 1743 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:23:11.815510 kubelet[1743]: I0113 20:23:11.815479 1743 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:23:11.815687 kubelet[1743]: I0113 20:23:11.815504 1743 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.112","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 13 20:23:11.815776 kubelet[1743]: I0113 20:23:11.815754 1743 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:23:11.815776 kubelet[1743]: I0113 20:23:11.815764 1743 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:23:11.815952 kubelet[1743]: I0113 20:23:11.815927 1743 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:23:11.818631 kubelet[1743]: I0113 20:23:11.816764 1743 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:23:11.818631 kubelet[1743]: I0113 20:23:11.816795 1743 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:23:11.818631 kubelet[1743]: I0113 20:23:11.816926 1743 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:23:11.818631 kubelet[1743]: I0113 20:23:11.816936 1743 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:23:11.818631 kubelet[1743]: E0113 20:23:11.817097 1743 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:11.818631 kubelet[1743]: E0113 20:23:11.817101 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:11.821367 kubelet[1743]: I0113 20:23:11.821336 1743 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:23:11.823105 kubelet[1743]: I0113 20:23:11.823088 1743 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:23:11.823361 kubelet[1743]: W0113 20:23:11.823332 1743 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:23:11.824406 kubelet[1743]: W0113 20:23:11.824304 1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.112" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 13 20:23:11.824406 kubelet[1743]: E0113 20:23:11.824358 1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.112\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 20:23:11.824498 kubelet[1743]: W0113 20:23:11.824447 1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 13 20:23:11.824498 kubelet[1743]: E0113 20:23:11.824472 1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 13 20:23:11.827210 kubelet[1743]: I0113 20:23:11.826575 1743 server.go:1269] "Started kubelet"
Jan 13 20:23:11.827210 kubelet[1743]: I0113 20:23:11.826784 1743 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:23:11.828900 kubelet[1743]: I0113 20:23:11.828336 1743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:23:11.829561 kubelet[1743]: I0113 20:23:11.829435 1743 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:23:11.833929 kubelet[1743]: I0113 20:23:11.833888 1743 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:23:11.834093 kubelet[1743]: I0113 20:23:11.834060 1743 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:23:11.834352 kubelet[1743]: E0113 20:23:11.834325 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:11.834817 kubelet[1743]: I0113 20:23:11.834790 1743 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:23:11.834909 kubelet[1743]: I0113 20:23:11.834899 1743 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:23:11.836927 kubelet[1743]: I0113 20:23:11.836868 1743 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:23:11.837134 kubelet[1743]: I0113 20:23:11.836957 1743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:23:11.837701 kubelet[1743]: E0113 20:23:11.837532 1743 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:23:11.837857 kubelet[1743]: I0113 20:23:11.837803 1743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:23:11.838662 kubelet[1743]: I0113 20:23:11.838175 1743 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:23:11.838662 kubelet[1743]: E0113 20:23:11.838435 1743 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.112\" not found" node="10.0.0.112"
Jan 13 20:23:11.839378 kubelet[1743]: I0113 20:23:11.839289 1743 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:23:11.847727 kubelet[1743]: I0113 20:23:11.847649 1743 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:23:11.847727 kubelet[1743]: I0113 20:23:11.847663 1743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:23:11.847727 kubelet[1743]: I0113 20:23:11.847680 1743 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:23:11.907372 kubelet[1743]: I0113 20:23:11.905526 1743 policy_none.go:49] "None policy: Start"
Jan 13 20:23:11.908074 kubelet[1743]: I0113 20:23:11.908051 1743 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:23:11.908127 kubelet[1743]: I0113 20:23:11.908079 1743 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:23:11.915033 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:23:11.926932 kubelet[1743]: I0113 20:23:11.926800 1743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:23:11.928597 kubelet[1743]: I0113 20:23:11.928294 1743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:23:11.928597 kubelet[1743]: I0113 20:23:11.928323 1743 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:23:11.928597 kubelet[1743]: I0113 20:23:11.928341 1743 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:23:11.928597 kubelet[1743]: E0113 20:23:11.928444 1743 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:23:11.932748 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:23:11.934794 kubelet[1743]: E0113 20:23:11.934766 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:11.935292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:23:11.943079 kubelet[1743]: I0113 20:23:11.943051 1743 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:23:11.943279 kubelet[1743]: I0113 20:23:11.943261 1743 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:23:11.943317 kubelet[1743]: I0113 20:23:11.943274 1743 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:23:11.943665 kubelet[1743]: I0113 20:23:11.943635 1743 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:23:11.945483 kubelet[1743]: E0113 20:23:11.945451 1743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.112\" not found"
Jan 13 20:23:12.044936 kubelet[1743]: I0113 20:23:12.044848 1743 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.112"
Jan 13 20:23:12.048497 kubelet[1743]: I0113 20:23:12.048458 1743 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.112"
Jan 13 20:23:12.048497 kubelet[1743]: E0113 20:23:12.048494 1743 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.112\": node \"10.0.0.112\" not found"
Jan 13 20:23:12.057740 kubelet[1743]: E0113 20:23:12.057704 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.158708 kubelet[1743]: E0113 20:23:12.158659 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.258833 kubelet[1743]: E0113 20:23:12.258796 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.267207 sudo[1617]: pam_unix(sudo:session): session closed for user root
Jan 13 20:23:12.268809 sshd[1616]: Connection closed by 10.0.0.1 port 55210
Jan 13 20:23:12.268686 sshd-session[1614]: pam_unix(sshd:session): session closed for user core
Jan 13 20:23:12.272289 systemd[1]: sshd@6-10.0.0.112:22-10.0.0.1:55210.service: Deactivated successfully.
Jan 13 20:23:12.273972 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:23:12.274667 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:23:12.275574 systemd-logind[1424]: Removed session 7.
Jan 13 20:23:12.359711 kubelet[1743]: E0113 20:23:12.359601 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.459972 kubelet[1743]: E0113 20:23:12.459932 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.560426 kubelet[1743]: E0113 20:23:12.560388 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.661097 kubelet[1743]: E0113 20:23:12.661012 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.761583 kubelet[1743]: E0113 20:23:12.761551 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.767756 kubelet[1743]: I0113 20:23:12.767692 1743 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 13 20:23:12.768196 kubelet[1743]: W0113 20:23:12.767846 1743 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:23:12.768196 kubelet[1743]: W0113 20:23:12.767887 1743 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 13 20:23:12.818033 kubelet[1743]: E0113 20:23:12.817980 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:12.861763 kubelet[1743]: E0113 20:23:12.861732 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:12.962546 kubelet[1743]: E0113 20:23:12.962507 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:13.062988 kubelet[1743]: E0113 20:23:13.062949 1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.112\" not found"
Jan 13 20:23:13.164157 kubelet[1743]: I0113 20:23:13.164127 1743 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 13 20:23:13.164450 containerd[1441]: time="2025-01-13T20:23:13.164403659Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 13 20:23:13.164725 kubelet[1743]: I0113 20:23:13.164560 1743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 13 20:23:13.818443 kubelet[1743]: E0113 20:23:13.818400 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:13.818443 kubelet[1743]: I0113 20:23:13.818405 1743 apiserver.go:52] "Watching apiserver"
Jan 13 20:23:13.827342 systemd[1]: Created slice kubepods-besteffort-pod77b06692_394b_4393_ac0b_c89350320161.slice - libcontainer container kubepods-besteffort-pod77b06692_394b_4393_ac0b_c89350320161.slice.
Jan 13 20:23:13.836711 kubelet[1743]: I0113 20:23:13.836671 1743 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:23:13.843293 kubelet[1743]: I0113 20:23:13.843242 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-config-path\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843293 kubelet[1743]: I0113 20:23:13.843280 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hubble-tls\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843293 kubelet[1743]: I0113 20:23:13.843300 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77b06692-394b-4393-ac0b-c89350320161-lib-modules\") pod \"kube-proxy-8nf85\" (UID: \"77b06692-394b-4393-ac0b-c89350320161\") " pod="kube-system/kube-proxy-8nf85"
Jan 13 20:23:13.843293 kubelet[1743]: I0113 20:23:13.843351 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-cgroup\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843293 kubelet[1743]: I0113 20:23:13.843367 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-xtables-lock\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843383 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-net\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843424 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-kernel\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843439 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77b06692-394b-4393-ac0b-c89350320161-xtables-lock\") pod \"kube-proxy-8nf85\" (UID: \"77b06692-394b-4393-ac0b-c89350320161\") " pod="kube-system/kube-proxy-8nf85"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843453 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-run\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843497 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hostproc\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843542 kubelet[1743]: I0113 20:23:13.843513 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-clustermesh-secrets\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843660 kubelet[1743]: I0113 20:23:13.843528 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9lq\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-kube-api-access-tf9lq\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843660 kubelet[1743]: I0113 20:23:13.843541 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/77b06692-394b-4393-ac0b-c89350320161-kube-proxy\") pod \"kube-proxy-8nf85\" (UID: \"77b06692-394b-4393-ac0b-c89350320161\") " pod="kube-system/kube-proxy-8nf85"
Jan 13 20:23:13.843660 kubelet[1743]: I0113 20:23:13.843645 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2mp9\" (UniqueName: \"kubernetes.io/projected/77b06692-394b-4393-ac0b-c89350320161-kube-api-access-j2mp9\") pod \"kube-proxy-8nf85\" (UID: \"77b06692-394b-4393-ac0b-c89350320161\") " pod="kube-system/kube-proxy-8nf85"
Jan 13 20:23:13.843720 kubelet[1743]: I0113 20:23:13.843662 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-etc-cni-netd\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.843720 kubelet[1743]: I0113 20:23:13.843676 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-lib-modules\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.844184 kubelet[1743]: I0113 20:23:13.844160 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-bpf-maps\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.844248 kubelet[1743]: I0113 20:23:13.844193 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cni-path\") pod \"cilium-9qksz\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") " pod="kube-system/cilium-9qksz"
Jan 13 20:23:13.849747 systemd[1]: Created slice kubepods-burstable-pod8f5872d5_ebbb_4ca5_a879_beb7d15629d4.slice - libcontainer container kubepods-burstable-pod8f5872d5_ebbb_4ca5_a879_beb7d15629d4.slice.
Jan 13 20:23:14.148829 kubelet[1743]: E0113 20:23:14.148718 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:23:14.150024 containerd[1441]: time="2025-01-13T20:23:14.149982653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nf85,Uid:77b06692-394b-4393-ac0b-c89350320161,Namespace:kube-system,Attempt:0,}"
Jan 13 20:23:14.162305 kubelet[1743]: E0113 20:23:14.162218 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:23:14.162706 containerd[1441]: time="2025-01-13T20:23:14.162675609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qksz,Uid:8f5872d5-ebbb-4ca5-a879-beb7d15629d4,Namespace:kube-system,Attempt:0,}"
Jan 13 20:23:14.672998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719208490.mount: Deactivated successfully.
Jan 13 20:23:14.682577 containerd[1441]: time="2025-01-13T20:23:14.682388736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:23:14.683910 containerd[1441]: time="2025-01-13T20:23:14.683804482Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:23:14.686270 containerd[1441]: time="2025-01-13T20:23:14.686204576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 13 20:23:14.687054 containerd[1441]: time="2025-01-13T20:23:14.686989842Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:23:14.691296 containerd[1441]: time="2025-01-13T20:23:14.690279032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:23:14.692160 containerd[1441]: time="2025-01-13T20:23:14.692090783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 13 20:23:14.693601 containerd[1441]: time="2025-01-13T20:23:14.693462476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.383456ms"
Jan 13 20:23:14.694310 containerd[1441]: time="2025-01-13T20:23:14.694278110Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 531.531632ms"
Jan 13 20:23:14.818750 containerd[1441]: time="2025-01-13T20:23:14.818580366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:23:14.818750 containerd[1441]: time="2025-01-13T20:23:14.818715640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:23:14.819022 containerd[1441]: time="2025-01-13T20:23:14.818938982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:23:14.819022 containerd[1441]: time="2025-01-13T20:23:14.818979085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:23:14.819022 containerd[1441]: time="2025-01-13T20:23:14.818995326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:14.819238 containerd[1441]: time="2025-01-13T20:23:14.819098357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:14.819238 containerd[1441]: time="2025-01-13T20:23:14.819130320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:14.819238 containerd[1441]: time="2025-01-13T20:23:14.819204701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:14.819340 kubelet[1743]: E0113 20:23:14.819105 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:14.907415 systemd[1]: Started cri-containerd-e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9.scope - libcontainer container e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9.
Jan 13 20:23:14.908997 systemd[1]: Started cri-containerd-f69b799aa0f36e92699a78276491ed14063675ab6cf7805ff0f6955b59c28fbc.scope - libcontainer container f69b799aa0f36e92699a78276491ed14063675ab6cf7805ff0f6955b59c28fbc.
Jan 13 20:23:14.930919 containerd[1441]: time="2025-01-13T20:23:14.930670587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9qksz,Uid:8f5872d5-ebbb-4ca5-a879-beb7d15629d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\""
Jan 13 20:23:14.930919 containerd[1441]: time="2025-01-13T20:23:14.930768711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8nf85,Uid:77b06692-394b-4393-ac0b-c89350320161,Namespace:kube-system,Attempt:0,} returns sandbox id \"f69b799aa0f36e92699a78276491ed14063675ab6cf7805ff0f6955b59c28fbc\""
Jan 13 20:23:14.932028 kubelet[1743]: E0113 20:23:14.932005 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:23:14.932116 kubelet[1743]: E0113 20:23:14.932059 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:23:14.933393 containerd[1441]: time="2025-01-13T20:23:14.933365769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:23:15.819658 kubelet[1743]: E0113 20:23:15.819614 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:15.961337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3221723915.mount: Deactivated successfully.
Jan 13 20:23:16.536607 containerd[1441]: time="2025-01-13T20:23:16.536559813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:16.537358 containerd[1441]: time="2025-01-13T20:23:16.537319683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428"
Jan 13 20:23:16.538246 containerd[1441]: time="2025-01-13T20:23:16.538144096Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:16.540178 containerd[1441]: time="2025-01-13T20:23:16.540123982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:16.541094 containerd[1441]: time="2025-01-13T20:23:16.541049182Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.607649812s"
Jan 13 20:23:16.541094 containerd[1441]: time="2025-01-13T20:23:16.541085305Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Jan 13 20:23:16.542436 containerd[1441]: time="2025-01-13T20:23:16.542316856Z" level=info msg="PullImage
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 13 20:23:16.543970 containerd[1441]: time="2025-01-13T20:23:16.543917865Z" level=info msg="CreateContainer within sandbox \"f69b799aa0f36e92699a78276491ed14063675ab6cf7805ff0f6955b59c28fbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 20:23:16.608872 containerd[1441]: time="2025-01-13T20:23:16.608821007Z" level=info msg="CreateContainer within sandbox \"f69b799aa0f36e92699a78276491ed14063675ab6cf7805ff0f6955b59c28fbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e817c8f7bd241c1071c680136e5cc28be3673107edc5c94f1d2e9b3de7890d2e\"" Jan 13 20:23:16.610309 containerd[1441]: time="2025-01-13T20:23:16.610248743Z" level=info msg="StartContainer for \"e817c8f7bd241c1071c680136e5cc28be3673107edc5c94f1d2e9b3de7890d2e\"" Jan 13 20:23:16.639635 systemd[1]: Started cri-containerd-e817c8f7bd241c1071c680136e5cc28be3673107edc5c94f1d2e9b3de7890d2e.scope - libcontainer container e817c8f7bd241c1071c680136e5cc28be3673107edc5c94f1d2e9b3de7890d2e. 
Jan 13 20:23:16.667137 containerd[1441]: time="2025-01-13T20:23:16.666990974Z" level=info msg="StartContainer for \"e817c8f7bd241c1071c680136e5cc28be3673107edc5c94f1d2e9b3de7890d2e\" returns successfully" Jan 13 20:23:16.820751 kubelet[1743]: E0113 20:23:16.820330 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:16.941726 kubelet[1743]: E0113 20:23:16.941666 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:16.951350 kubelet[1743]: I0113 20:23:16.951286 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8nf85" podStartSLOduration=3.341934657 podStartE2EDuration="4.951273681s" podCreationTimestamp="2025-01-13 20:23:12 +0000 UTC" firstStartedPulling="2025-01-13 20:23:14.932813101 +0000 UTC m=+4.690885465" lastFinishedPulling="2025-01-13 20:23:16.542152125 +0000 UTC m=+6.300224489" observedRunningTime="2025-01-13 20:23:16.950750111 +0000 UTC m=+6.708822514" watchObservedRunningTime="2025-01-13 20:23:16.951273681 +0000 UTC m=+6.709346045" Jan 13 20:23:17.820663 kubelet[1743]: E0113 20:23:17.820611 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:17.943093 kubelet[1743]: E0113 20:23:17.943060 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:18.821296 kubelet[1743]: E0113 20:23:18.821211 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:19.821771 kubelet[1743]: E0113 20:23:19.821729 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 13 20:23:20.822625 kubelet[1743]: E0113 20:23:20.822574 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:21.824239 kubelet[1743]: E0113 20:23:21.824185 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:22.825366 kubelet[1743]: E0113 20:23:22.825312 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:23.825582 kubelet[1743]: E0113 20:23:23.825450 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:24.576392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767858115.mount: Deactivated successfully. Jan 13 20:23:24.826091 kubelet[1743]: E0113 20:23:24.826057 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:25.742244 containerd[1441]: time="2025-01-13T20:23:25.742182430Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:25.743092 containerd[1441]: time="2025-01-13T20:23:25.742697980Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157650946" Jan 13 20:23:25.743430 containerd[1441]: time="2025-01-13T20:23:25.743407460Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 20:23:25.745073 containerd[1441]: time="2025-01-13T20:23:25.744953589Z" level=info msg="Pulled image 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.20259992s" Jan 13 20:23:25.745073 containerd[1441]: time="2025-01-13T20:23:25.744991624Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 13 20:23:25.747274 containerd[1441]: time="2025-01-13T20:23:25.747243518Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:23:25.756510 containerd[1441]: time="2025-01-13T20:23:25.756468275Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\"" Jan 13 20:23:25.757018 containerd[1441]: time="2025-01-13T20:23:25.756947748Z" level=info msg="StartContainer for \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\"" Jan 13 20:23:25.792449 systemd[1]: Started cri-containerd-86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18.scope - libcontainer container 86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18. 
Jan 13 20:23:25.813915 containerd[1441]: time="2025-01-13T20:23:25.813867952Z" level=info msg="StartContainer for \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\" returns successfully" Jan 13 20:23:25.826856 kubelet[1743]: E0113 20:23:25.826820 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:25.869817 systemd[1]: cri-containerd-86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18.scope: Deactivated successfully. Jan 13 20:23:25.955952 kubelet[1743]: E0113 20:23:25.955915 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:26.051618 containerd[1441]: time="2025-01-13T20:23:26.051306600Z" level=info msg="shim disconnected" id=86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18 namespace=k8s.io Jan 13 20:23:26.051618 containerd[1441]: time="2025-01-13T20:23:26.051387550Z" level=warning msg="cleaning up after shim disconnected" id=86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18 namespace=k8s.io Jan 13 20:23:26.051618 containerd[1441]: time="2025-01-13T20:23:26.051398498Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:26.753418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:26.829309 kubelet[1743]: E0113 20:23:26.826902 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:26.958957 kubelet[1743]: E0113 20:23:26.958800 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:26.960878 containerd[1441]: time="2025-01-13T20:23:26.960694344Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 13 20:23:26.975474 containerd[1441]: time="2025-01-13T20:23:26.975013451Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\"" Jan 13 20:23:26.975589 containerd[1441]: time="2025-01-13T20:23:26.975552972Z" level=info msg="StartContainer for \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\"" Jan 13 20:23:27.004390 systemd[1]: Started cri-containerd-af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729.scope - libcontainer container af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729. Jan 13 20:23:27.023873 containerd[1441]: time="2025-01-13T20:23:27.023825720Z" level=info msg="StartContainer for \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\" returns successfully" Jan 13 20:23:27.039111 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 13 20:23:27.039939 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:27.040015 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Jan 13 20:23:27.046499 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 20:23:27.046668 systemd[1]: cri-containerd-af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729.scope: Deactivated successfully. Jan 13 20:23:27.064455 containerd[1441]: time="2025-01-13T20:23:27.064391112Z" level=info msg="shim disconnected" id=af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729 namespace=k8s.io Jan 13 20:23:27.064857 containerd[1441]: time="2025-01-13T20:23:27.064703107Z" level=warning msg="cleaning up after shim disconnected" id=af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729 namespace=k8s.io Jan 13 20:23:27.064857 containerd[1441]: time="2025-01-13T20:23:27.064725164Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:27.066898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 20:23:27.753629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:27.829260 kubelet[1743]: E0113 20:23:27.827119 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:27.961515 kubelet[1743]: E0113 20:23:27.961487 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:27.963285 containerd[1441]: time="2025-01-13T20:23:27.963251082Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 13 20:23:27.980488 containerd[1441]: time="2025-01-13T20:23:27.980371947Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\"" Jan 13 20:23:27.982575 containerd[1441]: time="2025-01-13T20:23:27.981155612Z" level=info msg="StartContainer for \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\"" Jan 13 20:23:28.009421 systemd[1]: Started cri-containerd-cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d.scope - libcontainer container cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d. Jan 13 20:23:28.033495 containerd[1441]: time="2025-01-13T20:23:28.033456090Z" level=info msg="StartContainer for \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\" returns successfully" Jan 13 20:23:28.053030 systemd[1]: cri-containerd-cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d.scope: Deactivated successfully. 
Jan 13 20:23:28.073643 containerd[1441]: time="2025-01-13T20:23:28.073579633Z" level=info msg="shim disconnected" id=cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d namespace=k8s.io Jan 13 20:23:28.073643 containerd[1441]: time="2025-01-13T20:23:28.073637896Z" level=warning msg="cleaning up after shim disconnected" id=cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d namespace=k8s.io Jan 13 20:23:28.073885 containerd[1441]: time="2025-01-13T20:23:28.073649365Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:28.753591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d-rootfs.mount: Deactivated successfully. Jan 13 20:23:28.827473 kubelet[1743]: E0113 20:23:28.827414 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:28.965153 kubelet[1743]: E0113 20:23:28.964993 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:28.966627 containerd[1441]: time="2025-01-13T20:23:28.966596413Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 13 20:23:28.982027 containerd[1441]: time="2025-01-13T20:23:28.981972455Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\"" Jan 13 20:23:28.982800 containerd[1441]: time="2025-01-13T20:23:28.982764443Z" level=info msg="StartContainer for \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\"" Jan 13 20:23:29.006386 systemd[1]: 
Started cri-containerd-814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1.scope - libcontainer container 814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1. Jan 13 20:23:29.025399 systemd[1]: cri-containerd-814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1.scope: Deactivated successfully. Jan 13 20:23:29.026871 containerd[1441]: time="2025-01-13T20:23:29.026769238Z" level=info msg="StartContainer for \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\" returns successfully" Jan 13 20:23:29.044584 containerd[1441]: time="2025-01-13T20:23:29.044532076Z" level=info msg="shim disconnected" id=814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1 namespace=k8s.io Jan 13 20:23:29.044799 containerd[1441]: time="2025-01-13T20:23:29.044779649Z" level=warning msg="cleaning up after shim disconnected" id=814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1 namespace=k8s.io Jan 13 20:23:29.044866 containerd[1441]: time="2025-01-13T20:23:29.044854261Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 20:23:29.753667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1-rootfs.mount: Deactivated successfully. 
Jan 13 20:23:29.828468 kubelet[1743]: E0113 20:23:29.828392 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:29.968107 kubelet[1743]: E0113 20:23:29.968078 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:29.970273 containerd[1441]: time="2025-01-13T20:23:29.970237337Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 13 20:23:29.986488 containerd[1441]: time="2025-01-13T20:23:29.986384173Z" level=info msg="CreateContainer within sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\"" Jan 13 20:23:29.986975 containerd[1441]: time="2025-01-13T20:23:29.986856980Z" level=info msg="StartContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\"" Jan 13 20:23:30.009536 systemd[1]: Started cri-containerd-f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7.scope - libcontainer container f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7. 
Jan 13 20:23:30.036555 containerd[1441]: time="2025-01-13T20:23:30.036322437Z" level=info msg="StartContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" returns successfully" Jan 13 20:23:30.167103 kubelet[1743]: I0113 20:23:30.166370 1743 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 13 20:23:30.554258 kernel: Initializing XFRM netlink socket Jan 13 20:23:30.829398 kubelet[1743]: E0113 20:23:30.829278 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:30.972452 kubelet[1743]: E0113 20:23:30.972377 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:31.817817 kubelet[1743]: E0113 20:23:31.817763 1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:31.830294 kubelet[1743]: E0113 20:23:31.830245 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:31.974050 kubelet[1743]: E0113 20:23:31.974012 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:32.184113 systemd-networkd[1390]: cilium_host: Link UP Jan 13 20:23:32.184313 systemd-networkd[1390]: cilium_net: Link UP Jan 13 20:23:32.184448 systemd-networkd[1390]: cilium_net: Gained carrier Jan 13 20:23:32.184563 systemd-networkd[1390]: cilium_host: Gained carrier Jan 13 20:23:32.277911 systemd-networkd[1390]: cilium_vxlan: Link UP Jan 13 20:23:32.277928 systemd-networkd[1390]: cilium_vxlan: Gained carrier Jan 13 20:23:32.443367 systemd-networkd[1390]: cilium_host: Gained IPv6LL Jan 13 20:23:32.618322 kernel: NET: Registered PF_ALG protocol family 
Jan 13 20:23:32.830748 kubelet[1743]: E0113 20:23:32.830611 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:32.975710 kubelet[1743]: E0113 20:23:32.975671 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:33.098355 systemd-networkd[1390]: cilium_net: Gained IPv6LL Jan 13 20:23:33.176314 systemd-networkd[1390]: lxc_health: Link UP Jan 13 20:23:33.185219 systemd-networkd[1390]: lxc_health: Gained carrier Jan 13 20:23:33.542166 kubelet[1743]: I0113 20:23:33.541265 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9qksz" podStartSLOduration=10.728350059 podStartE2EDuration="21.541217716s" podCreationTimestamp="2025-01-13 20:23:12 +0000 UTC" firstStartedPulling="2025-01-13 20:23:14.932870603 +0000 UTC m=+4.690942927" lastFinishedPulling="2025-01-13 20:23:25.74573822 +0000 UTC m=+15.503810584" observedRunningTime="2025-01-13 20:23:30.989258305 +0000 UTC m=+20.747330669" watchObservedRunningTime="2025-01-13 20:23:33.541217716 +0000 UTC m=+23.299290080" Jan 13 20:23:33.546369 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Jan 13 20:23:33.548392 systemd[1]: Created slice kubepods-besteffort-pod68a2ab4d_5991_4ff5_ad05_28e5c0f3d074.slice - libcontainer container kubepods-besteffort-pod68a2ab4d_5991_4ff5_ad05_28e5c0f3d074.slice. 
Jan 13 20:23:33.570359 kubelet[1743]: I0113 20:23:33.570292 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96288\" (UniqueName: \"kubernetes.io/projected/68a2ab4d-5991-4ff5-ad05-28e5c0f3d074-kube-api-access-96288\") pod \"nginx-deployment-8587fbcb89-x7wgg\" (UID: \"68a2ab4d-5991-4ff5-ad05-28e5c0f3d074\") " pod="default/nginx-deployment-8587fbcb89-x7wgg" Jan 13 20:23:33.831107 kubelet[1743]: E0113 20:23:33.830988 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:33.850905 containerd[1441]: time="2025-01-13T20:23:33.850858927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x7wgg,Uid:68a2ab4d-5991-4ff5-ad05-28e5c0f3d074,Namespace:default,Attempt:0,}" Jan 13 20:23:33.890820 systemd-networkd[1390]: lxcbe2e85752e70: Link UP Jan 13 20:23:33.897253 kernel: eth0: renamed from tmpc87dc Jan 13 20:23:33.902249 systemd-networkd[1390]: lxcbe2e85752e70: Gained carrier Jan 13 20:23:34.166016 kubelet[1743]: E0113 20:23:34.165657 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:34.506724 systemd-networkd[1390]: lxc_health: Gained IPv6LL Jan 13 20:23:34.831387 kubelet[1743]: E0113 20:23:34.831274 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:34.983079 kubelet[1743]: E0113 20:23:34.982751 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:23:35.338635 systemd-networkd[1390]: lxcbe2e85752e70: Gained IPv6LL Jan 13 20:23:35.832270 kubelet[1743]: E0113 20:23:35.832201 1743 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:36.832550 kubelet[1743]: E0113 20:23:36.832498 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:37.458005 containerd[1441]: time="2025-01-13T20:23:37.457924757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:23:37.458005 containerd[1441]: time="2025-01-13T20:23:37.457974929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:23:37.458520 containerd[1441]: time="2025-01-13T20:23:37.458377390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:37.458520 containerd[1441]: time="2025-01-13T20:23:37.458476616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:23:37.480417 systemd[1]: Started cri-containerd-c87dc2d4890b65e5e73b35242e65c276dddffa36125369b282b16afb0b7779c1.scope - libcontainer container c87dc2d4890b65e5e73b35242e65c276dddffa36125369b282b16afb0b7779c1. 
Jan 13 20:23:37.489765 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 20:23:37.505085 containerd[1441]: time="2025-01-13T20:23:37.505050932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-x7wgg,Uid:68a2ab4d-5991-4ff5-ad05-28e5c0f3d074,Namespace:default,Attempt:0,} returns sandbox id \"c87dc2d4890b65e5e73b35242e65c276dddffa36125369b282b16afb0b7779c1\"" Jan 13 20:23:37.506679 containerd[1441]: time="2025-01-13T20:23:37.506651699Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 13 20:23:37.833213 kubelet[1743]: E0113 20:23:37.833089 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:38.833517 kubelet[1743]: E0113 20:23:38.833405 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:23:39.438439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3416792873.mount: Deactivated successfully. 
Jan 13 20:23:39.834543 kubelet[1743]: E0113 20:23:39.834279 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:40.176862 containerd[1441]: time="2025-01-13T20:23:40.176473202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:40.177438 containerd[1441]: time="2025-01-13T20:23:40.176826323Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67697045"
Jan 13 20:23:40.178436 containerd[1441]: time="2025-01-13T20:23:40.178408412Z" level=info msg="ImageCreate event name:\"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:40.181218 containerd[1441]: time="2025-01-13T20:23:40.181149340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:40.182314 containerd[1441]: time="2025-01-13T20:23:40.182286909Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 2.675600748s"
Jan 13 20:23:40.182508 containerd[1441]: time="2025-01-13T20:23:40.182388543Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:23:40.184387 containerd[1441]: time="2025-01-13T20:23:40.184343065Z" level=info msg="CreateContainer within sandbox \"c87dc2d4890b65e5e73b35242e65c276dddffa36125369b282b16afb0b7779c1\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 13 20:23:40.194241 containerd[1441]: time="2025-01-13T20:23:40.194187081Z" level=info msg="CreateContainer within sandbox \"c87dc2d4890b65e5e73b35242e65c276dddffa36125369b282b16afb0b7779c1\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9\""
Jan 13 20:23:40.194635 containerd[1441]: time="2025-01-13T20:23:40.194589141Z" level=info msg="StartContainer for \"399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9\""
Jan 13 20:23:40.212826 systemd[1]: run-containerd-runc-k8s.io-399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9-runc.9fxLI5.mount: Deactivated successfully.
Jan 13 20:23:40.222411 systemd[1]: Started cri-containerd-399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9.scope - libcontainer container 399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9.
Jan 13 20:23:40.245360 containerd[1441]: time="2025-01-13T20:23:40.245323861Z" level=info msg="StartContainer for \"399d57123ed6a69e092d863a1c7f11f28da7fe6db90c4a84cb4cf5124c0874b9\" returns successfully"
Jan 13 20:23:40.835431 kubelet[1743]: E0113 20:23:40.835387 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:41.836032 kubelet[1743]: E0113 20:23:41.835994 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:42.836878 kubelet[1743]: E0113 20:23:42.836836 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:43.837291 kubelet[1743]: E0113 20:23:43.837225 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:44.837667 kubelet[1743]: E0113 20:23:44.837617 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:45.832643 kubelet[1743]: I0113 20:23:45.832573 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-x7wgg" podStartSLOduration=10.155874407 podStartE2EDuration="12.832550125s" podCreationTimestamp="2025-01-13 20:23:33 +0000 UTC" firstStartedPulling="2025-01-13 20:23:37.506402915 +0000 UTC m=+27.264475279" lastFinishedPulling="2025-01-13 20:23:40.183078633 +0000 UTC m=+29.941150997" observedRunningTime="2025-01-13 20:23:40.997524676 +0000 UTC m=+30.755597040" watchObservedRunningTime="2025-01-13 20:23:45.832550125 +0000 UTC m=+35.590622489"
Jan 13 20:23:45.838216 kubelet[1743]: E0113 20:23:45.837984 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:45.838851 systemd[1]: Created slice kubepods-besteffort-pod78cd72ac_12b5_4065_94ba_0198c3214a28.slice - libcontainer container kubepods-besteffort-pod78cd72ac_12b5_4065_94ba_0198c3214a28.slice.
Jan 13 20:23:45.939808 kubelet[1743]: I0113 20:23:45.939741 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/78cd72ac-12b5-4065-94ba-0198c3214a28-data\") pod \"nfs-server-provisioner-0\" (UID: \"78cd72ac-12b5-4065-94ba-0198c3214a28\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:23:45.939808 kubelet[1743]: I0113 20:23:45.939785 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv6v4\" (UniqueName: \"kubernetes.io/projected/78cd72ac-12b5-4065-94ba-0198c3214a28-kube-api-access-vv6v4\") pod \"nfs-server-provisioner-0\" (UID: \"78cd72ac-12b5-4065-94ba-0198c3214a28\") " pod="default/nfs-server-provisioner-0"
Jan 13 20:23:46.142636 containerd[1441]: time="2025-01-13T20:23:46.142270471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:78cd72ac-12b5-4065-94ba-0198c3214a28,Namespace:default,Attempt:0,}"
Jan 13 20:23:46.168354 systemd-networkd[1390]: lxc8f154496af83: Link UP
Jan 13 20:23:46.176270 kernel: eth0: renamed from tmp465d8
Jan 13 20:23:46.183788 systemd-networkd[1390]: lxc8f154496af83: Gained carrier
Jan 13 20:23:46.369061 containerd[1441]: time="2025-01-13T20:23:46.368629458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:23:46.369189 containerd[1441]: time="2025-01-13T20:23:46.369073403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:23:46.369189 containerd[1441]: time="2025-01-13T20:23:46.369098155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:46.369381 containerd[1441]: time="2025-01-13T20:23:46.369211801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:23:46.388418 systemd[1]: Started cri-containerd-465d8b06ac964aabf12ebb84fbcd0923fbdbf026284c28bf5d92c798b627a8a4.scope - libcontainer container 465d8b06ac964aabf12ebb84fbcd0923fbdbf026284c28bf5d92c798b627a8a4.
Jan 13 20:23:46.398996 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:23:46.414903 containerd[1441]: time="2025-01-13T20:23:46.414791017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:78cd72ac-12b5-4065-94ba-0198c3214a28,Namespace:default,Attempt:0,} returns sandbox id \"465d8b06ac964aabf12ebb84fbcd0923fbdbf026284c28bf5d92c798b627a8a4\""
Jan 13 20:23:46.416692 containerd[1441]: time="2025-01-13T20:23:46.416636694Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 13 20:23:46.838982 kubelet[1743]: E0113 20:23:46.838426 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:47.115368 update_engine[1431]: I20250113 20:23:47.115131 1431 update_attempter.cc:509] Updating boot flags...
Jan 13 20:23:47.140367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2980)
Jan 13 20:23:47.170274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2980)
Jan 13 20:23:47.838581 kubelet[1743]: E0113 20:23:47.838539 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:47.882718 systemd-networkd[1390]: lxc8f154496af83: Gained IPv6LL
Jan 13 20:23:48.169104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3635918159.mount: Deactivated successfully.
Jan 13 20:23:48.839156 kubelet[1743]: E0113 20:23:48.839006 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:49.456362 containerd[1441]: time="2025-01-13T20:23:49.456311663Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:49.457551 containerd[1441]: time="2025-01-13T20:23:49.456809138Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Jan 13 20:23:49.458212 containerd[1441]: time="2025-01-13T20:23:49.457969806Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:49.460789 containerd[1441]: time="2025-01-13T20:23:49.460755146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:23:49.462731 containerd[1441]: time="2025-01-13T20:23:49.462693619Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.046018376s"
Jan 13 20:23:49.462731 containerd[1441]: time="2025-01-13T20:23:49.462725491Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 13 20:23:49.464756 containerd[1441]: time="2025-01-13T20:23:49.464726828Z" level=info msg="CreateContainer within sandbox \"465d8b06ac964aabf12ebb84fbcd0923fbdbf026284c28bf5d92c798b627a8a4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 13 20:23:49.473498 containerd[1441]: time="2025-01-13T20:23:49.473452075Z" level=info msg="CreateContainer within sandbox \"465d8b06ac964aabf12ebb84fbcd0923fbdbf026284c28bf5d92c798b627a8a4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4f6cbcf465322755b3fc6661a0411afdb2c8d5f438ee0ecb4b2d2a004bbc22c5\""
Jan 13 20:23:49.474116 containerd[1441]: time="2025-01-13T20:23:49.474090474Z" level=info msg="StartContainer for \"4f6cbcf465322755b3fc6661a0411afdb2c8d5f438ee0ecb4b2d2a004bbc22c5\""
Jan 13 20:23:49.551397 systemd[1]: Started cri-containerd-4f6cbcf465322755b3fc6661a0411afdb2c8d5f438ee0ecb4b2d2a004bbc22c5.scope - libcontainer container 4f6cbcf465322755b3fc6661a0411afdb2c8d5f438ee0ecb4b2d2a004bbc22c5.
Jan 13 20:23:49.591598 containerd[1441]: time="2025-01-13T20:23:49.582141396Z" level=info msg="StartContainer for \"4f6cbcf465322755b3fc6661a0411afdb2c8d5f438ee0ecb4b2d2a004bbc22c5\" returns successfully"
Jan 13 20:23:49.839549 kubelet[1743]: E0113 20:23:49.839425 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:50.840434 kubelet[1743]: E0113 20:23:50.840386 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:51.817538 kubelet[1743]: E0113 20:23:51.817501 1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:51.841096 kubelet[1743]: E0113 20:23:51.841051 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:52.841888 kubelet[1743]: E0113 20:23:52.841848 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:53.842348 kubelet[1743]: E0113 20:23:53.842300 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:54.843377 kubelet[1743]: E0113 20:23:54.843334 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:55.844364 kubelet[1743]: E0113 20:23:55.844313 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:56.844597 kubelet[1743]: E0113 20:23:56.844554 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:57.845003 kubelet[1743]: E0113 20:23:57.844961 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:58.845581 kubelet[1743]: E0113 20:23:58.845537 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:59.583946 kubelet[1743]: I0113 20:23:59.583874 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.536722026 podStartE2EDuration="14.583855817s" podCreationTimestamp="2025-01-13 20:23:45 +0000 UTC" firstStartedPulling="2025-01-13 20:23:46.416192789 +0000 UTC m=+36.174265153" lastFinishedPulling="2025-01-13 20:23:49.46332662 +0000 UTC m=+39.221398944" observedRunningTime="2025-01-13 20:23:50.021519693 +0000 UTC m=+39.779592057" watchObservedRunningTime="2025-01-13 20:23:59.583855817 +0000 UTC m=+49.341928181"
Jan 13 20:23:59.589168 systemd[1]: Created slice kubepods-besteffort-podac980c2e_d587_4523_9c99_616282f336b6.slice - libcontainer container kubepods-besteffort-podac980c2e_d587_4523_9c99_616282f336b6.slice.
Jan 13 20:23:59.723311 kubelet[1743]: I0113 20:23:59.723201 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac097cb4-4cf2-4377-b494-e3b12c58f250\" (UniqueName: \"kubernetes.io/nfs/ac980c2e-d587-4523-9c99-616282f336b6-pvc-ac097cb4-4cf2-4377-b494-e3b12c58f250\") pod \"test-pod-1\" (UID: \"ac980c2e-d587-4523-9c99-616282f336b6\") " pod="default/test-pod-1"
Jan 13 20:23:59.723311 kubelet[1743]: I0113 20:23:59.723267 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgn2c\" (UniqueName: \"kubernetes.io/projected/ac980c2e-d587-4523-9c99-616282f336b6-kube-api-access-lgn2c\") pod \"test-pod-1\" (UID: \"ac980c2e-d587-4523-9c99-616282f336b6\") " pod="default/test-pod-1"
Jan 13 20:23:59.840254 kernel: FS-Cache: Loaded
Jan 13 20:23:59.846606 kubelet[1743]: E0113 20:23:59.846558 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:23:59.864488 kernel: RPC: Registered named UNIX socket transport module.
Jan 13 20:23:59.864543 kernel: RPC: Registered udp transport module.
Jan 13 20:23:59.864567 kernel: RPC: Registered tcp transport module.
Jan 13 20:23:59.865601 kernel: RPC: Registered tcp-with-tls transport module.
Jan 13 20:23:59.865699 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 13 20:24:00.045472 kernel: NFS: Registering the id_resolver key type
Jan 13 20:24:00.045676 kernel: Key type id_resolver registered
Jan 13 20:24:00.045699 kernel: Key type id_legacy registered
Jan 13 20:24:00.071241 nfsidmap[3152]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:24:00.074708 nfsidmap[3155]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 13 20:24:00.192885 containerd[1441]: time="2025-01-13T20:24:00.192834458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ac980c2e-d587-4523-9c99-616282f336b6,Namespace:default,Attempt:0,}"
Jan 13 20:24:00.220217 systemd-networkd[1390]: lxc5998b5a8d979: Link UP
Jan 13 20:24:00.227267 kernel: eth0: renamed from tmpaee44
Jan 13 20:24:00.235544 systemd-networkd[1390]: lxc5998b5a8d979: Gained carrier
Jan 13 20:24:00.377929 containerd[1441]: time="2025-01-13T20:24:00.377833358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:24:00.377929 containerd[1441]: time="2025-01-13T20:24:00.377886552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:24:00.377929 containerd[1441]: time="2025-01-13T20:24:00.377897990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:00.378173 containerd[1441]: time="2025-01-13T20:24:00.378013856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:24:00.401392 systemd[1]: Started cri-containerd-aee444ee0df8aa1dcbb76056471d24ca68bc860352ada7ca71d5add3f1916a37.scope - libcontainer container aee444ee0df8aa1dcbb76056471d24ca68bc860352ada7ca71d5add3f1916a37.
Jan 13 20:24:00.411551 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:24:00.449506 containerd[1441]: time="2025-01-13T20:24:00.449361960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ac980c2e-d587-4523-9c99-616282f336b6,Namespace:default,Attempt:0,} returns sandbox id \"aee444ee0df8aa1dcbb76056471d24ca68bc860352ada7ca71d5add3f1916a37\""
Jan 13 20:24:00.451323 containerd[1441]: time="2025-01-13T20:24:00.451295161Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 13 20:24:00.674675 containerd[1441]: time="2025-01-13T20:24:00.674629844Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:24:00.675361 containerd[1441]: time="2025-01-13T20:24:00.675310480Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 13 20:24:00.678171 containerd[1441]: time="2025-01-13T20:24:00.678078698Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:eca1d1ff18c7af45f86b7e0b572090f563a676ddca3da2ecff678390366335ad\", size \"67696923\" in 226.753341ms"
Jan 13 20:24:00.678171 containerd[1441]: time="2025-01-13T20:24:00.678111454Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a86cd5b7fd4c45b8b60dbcc26c955515e3a36347f806d2b7092c4908f54e0a55\""
Jan 13 20:24:00.679959 containerd[1441]: time="2025-01-13T20:24:00.679930030Z" level=info msg="CreateContainer within sandbox \"aee444ee0df8aa1dcbb76056471d24ca68bc860352ada7ca71d5add3f1916a37\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 13 20:24:00.703571 containerd[1441]: time="2025-01-13T20:24:00.703303021Z" level=info msg="CreateContainer within sandbox \"aee444ee0df8aa1dcbb76056471d24ca68bc860352ada7ca71d5add3f1916a37\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"98fbed54f703e073ff5e5c4ae48801fcb09ab54a438edfecda301c68da125a64\""
Jan 13 20:24:00.704056 containerd[1441]: time="2025-01-13T20:24:00.704001575Z" level=info msg="StartContainer for \"98fbed54f703e073ff5e5c4ae48801fcb09ab54a438edfecda301c68da125a64\""
Jan 13 20:24:00.727414 systemd[1]: Started cri-containerd-98fbed54f703e073ff5e5c4ae48801fcb09ab54a438edfecda301c68da125a64.scope - libcontainer container 98fbed54f703e073ff5e5c4ae48801fcb09ab54a438edfecda301c68da125a64.
Jan 13 20:24:00.747687 containerd[1441]: time="2025-01-13T20:24:00.747645422Z" level=info msg="StartContainer for \"98fbed54f703e073ff5e5c4ae48801fcb09ab54a438edfecda301c68da125a64\" returns successfully"
Jan 13 20:24:00.846832 kubelet[1743]: E0113 20:24:00.846792 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:01.514411 systemd-networkd[1390]: lxc5998b5a8d979: Gained IPv6LL
Jan 13 20:24:01.847199 kubelet[1743]: E0113 20:24:01.847073 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:02.847805 kubelet[1743]: E0113 20:24:02.847761 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:03.848717 kubelet[1743]: E0113 20:24:03.848671 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:04.238834 kubelet[1743]: I0113 20:24:04.238774 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.010903053 podStartE2EDuration="18.238758138s" podCreationTimestamp="2025-01-13 20:23:46 +0000 UTC" firstStartedPulling="2025-01-13 20:24:00.45089493 +0000 UTC m=+50.208967254" lastFinishedPulling="2025-01-13 20:24:00.678749975 +0000 UTC m=+50.436822339" observedRunningTime="2025-01-13 20:24:01.034268503 +0000 UTC m=+50.792340867" watchObservedRunningTime="2025-01-13 20:24:04.238758138 +0000 UTC m=+53.996830462"
Jan 13 20:24:04.266277 containerd[1441]: time="2025-01-13T20:24:04.266201159Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 20:24:04.285152 containerd[1441]: time="2025-01-13T20:24:04.285100955Z" level=info msg="StopContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" with timeout 2 (s)"
Jan 13 20:24:04.285419 containerd[1441]: time="2025-01-13T20:24:04.285392687Z" level=info msg="Stop container \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" with signal terminated"
Jan 13 20:24:04.292854 systemd-networkd[1390]: lxc_health: Link DOWN
Jan 13 20:24:04.292861 systemd-networkd[1390]: lxc_health: Lost carrier
Jan 13 20:24:04.323670 systemd[1]: cri-containerd-f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7.scope: Deactivated successfully.
Jan 13 20:24:04.323987 systemd[1]: cri-containerd-f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7.scope: Consumed 6.547s CPU time.
Jan 13 20:24:04.342456 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7-rootfs.mount: Deactivated successfully.
Jan 13 20:24:04.357174 containerd[1441]: time="2025-01-13T20:24:04.357092403Z" level=info msg="shim disconnected" id=f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7 namespace=k8s.io
Jan 13 20:24:04.357174 containerd[1441]: time="2025-01-13T20:24:04.357146438Z" level=warning msg="cleaning up after shim disconnected" id=f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7 namespace=k8s.io
Jan 13 20:24:04.357174 containerd[1441]: time="2025-01-13T20:24:04.357156317Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:04.373032 containerd[1441]: time="2025-01-13T20:24:04.372976647Z" level=info msg="StopContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" returns successfully"
Jan 13 20:24:04.373627 containerd[1441]: time="2025-01-13T20:24:04.373588789Z" level=info msg="StopPodSandbox for \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\""
Jan 13 20:24:04.377315 containerd[1441]: time="2025-01-13T20:24:04.377263918Z" level=info msg="Container to stop \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:24:04.377315 containerd[1441]: time="2025-01-13T20:24:04.377302514Z" level=info msg="Container to stop \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:24:04.377315 containerd[1441]: time="2025-01-13T20:24:04.377313993Z" level=info msg="Container to stop \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:24:04.377453 containerd[1441]: time="2025-01-13T20:24:04.377324592Z" level=info msg="Container to stop \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:24:04.377453 containerd[1441]: time="2025-01-13T20:24:04.377333071Z" level=info msg="Container to stop \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 13 20:24:04.378841 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9-shm.mount: Deactivated successfully.
Jan 13 20:24:04.382385 systemd[1]: cri-containerd-e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9.scope: Deactivated successfully.
Jan 13 20:24:04.414930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9-rootfs.mount: Deactivated successfully.
Jan 13 20:24:04.419657 containerd[1441]: time="2025-01-13T20:24:04.419599637Z" level=info msg="shim disconnected" id=e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9 namespace=k8s.io
Jan 13 20:24:04.419657 containerd[1441]: time="2025-01-13T20:24:04.419656672Z" level=warning msg="cleaning up after shim disconnected" id=e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9 namespace=k8s.io
Jan 13 20:24:04.419657 containerd[1441]: time="2025-01-13T20:24:04.419664591Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:04.430750 containerd[1441]: time="2025-01-13T20:24:04.430694898Z" level=info msg="TearDown network for sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" successfully"
Jan 13 20:24:04.430750 containerd[1441]: time="2025-01-13T20:24:04.430730695Z" level=info msg="StopPodSandbox for \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" returns successfully"
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549575 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-kernel\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549629 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-clustermesh-secrets\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549653 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cni-path\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549670 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-lib-modules\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549688 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-config-path\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550096 kubelet[1743]: I0113 20:24:04.549690 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549705 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hubble-tls\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549758 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-run\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549777 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hostproc\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549793 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-net\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549815 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf9lq\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-kube-api-access-tf9lq\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550390 kubelet[1743]: I0113 20:24:04.549833 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-etc-cni-netd\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549847 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-cgroup\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549861 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-xtables-lock\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549875 1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-bpf-maps\") pod \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\" (UID: \"8f5872d5-ebbb-4ca5-a879-beb7d15629d4\") "
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549905 1743 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-kernel\") on node \"10.0.0.112\" DevicePath \"\""
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549923 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.550597 kubelet[1743]: I0113 20:24:04.549942 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.550728 kubelet[1743]: I0113 20:24:04.549958 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hostproc" (OuterVolumeSpecName: "hostproc") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.550728 kubelet[1743]: I0113 20:24:04.549992 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.550728 kubelet[1743]: I0113 20:24:04.550015 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cni-path" (OuterVolumeSpecName: "cni-path") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.551190 kubelet[1743]: I0113 20:24:04.550842 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.551190 kubelet[1743]: I0113 20:24:04.550902 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.551190 kubelet[1743]: I0113 20:24:04.550919 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.551190 kubelet[1743]: I0113 20:24:04.550935 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 13 20:24:04.552522 kubelet[1743]: I0113 20:24:04.552236 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 13 20:24:04.553709 kubelet[1743]: I0113 20:24:04.552631 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:24:04.553709 kubelet[1743]: I0113 20:24:04.553645 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-kube-api-access-tf9lq" (OuterVolumeSpecName: "kube-api-access-tf9lq") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "kube-api-access-tf9lq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 13 20:24:04.553799 systemd[1]: var-lib-kubelet-pods-8f5872d5\x2debbb\x2d4ca5\x2da879\x2dbeb7d15629d4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 13 20:24:04.553911 systemd[1]: var-lib-kubelet-pods-8f5872d5\x2debbb\x2d4ca5\x2da879\x2dbeb7d15629d4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 13 20:24:04.555038 kubelet[1743]: I0113 20:24:04.555005 1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f5872d5-ebbb-4ca5-a879-beb7d15629d4" (UID: "8f5872d5-ebbb-4ca5-a879-beb7d15629d4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650561 1743 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-clustermesh-secrets\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650595 1743 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cni-path\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650606 1743 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-config-path\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650615 1743 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hubble-tls\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650625 1743 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-run\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650639 1743 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-hostproc\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650654 1743 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-lib-modules\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650724 kubelet[1743]: I0113 20:24:04.650666 1743 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-host-proc-sys-net\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650998 kubelet[1743]: I0113 20:24:04.650675 1743 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-tf9lq\" (UniqueName: \"kubernetes.io/projected/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-kube-api-access-tf9lq\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650998 kubelet[1743]: I0113 20:24:04.650683 1743 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-etc-cni-netd\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650998 kubelet[1743]: I0113 20:24:04.650689 1743 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-cilium-cgroup\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650998 kubelet[1743]: I0113 20:24:04.650698 1743 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-xtables-lock\") on node \"10.0.0.112\" DevicePath \"\"" Jan 13 20:24:04.650998 kubelet[1743]: I0113 20:24:04.650705 1743 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8f5872d5-ebbb-4ca5-a879-beb7d15629d4-bpf-maps\") on node \"10.0.0.112\" 
DevicePath \"\"" Jan 13 20:24:04.850326 kubelet[1743]: E0113 20:24:04.849466 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:05.034098 kubelet[1743]: I0113 20:24:05.034069 1743 scope.go:117] "RemoveContainer" containerID="f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7" Jan 13 20:24:05.036328 containerd[1441]: time="2025-01-13T20:24:05.035894767Z" level=info msg="RemoveContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\"" Jan 13 20:24:05.039176 containerd[1441]: time="2025-01-13T20:24:05.039137837Z" level=info msg="RemoveContainer for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" returns successfully" Jan 13 20:24:05.039731 kubelet[1743]: I0113 20:24:05.039534 1743 scope.go:117] "RemoveContainer" containerID="814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1" Jan 13 20:24:05.041490 containerd[1441]: time="2025-01-13T20:24:05.041261167Z" level=info msg="RemoveContainer for \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\"" Jan 13 20:24:05.041699 systemd[1]: Removed slice kubepods-burstable-pod8f5872d5_ebbb_4ca5_a879_beb7d15629d4.slice - libcontainer container kubepods-burstable-pod8f5872d5_ebbb_4ca5_a879_beb7d15629d4.slice. Jan 13 20:24:05.041985 systemd[1]: kubepods-burstable-pod8f5872d5_ebbb_4ca5_a879_beb7d15629d4.slice: Consumed 6.691s CPU time. 
Jan 13 20:24:05.050627 containerd[1441]: time="2025-01-13T20:24:05.050575293Z" level=info msg="RemoveContainer for \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\" returns successfully" Jan 13 20:24:05.050971 kubelet[1743]: I0113 20:24:05.050813 1743 scope.go:117] "RemoveContainer" containerID="cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d" Jan 13 20:24:05.059383 containerd[1441]: time="2025-01-13T20:24:05.059333789Z" level=info msg="RemoveContainer for \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\"" Jan 13 20:24:05.061808 containerd[1441]: time="2025-01-13T20:24:05.061780411Z" level=info msg="RemoveContainer for \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\" returns successfully" Jan 13 20:24:05.061993 kubelet[1743]: I0113 20:24:05.061959 1743 scope.go:117] "RemoveContainer" containerID="af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729" Jan 13 20:24:05.063055 containerd[1441]: time="2025-01-13T20:24:05.063006461Z" level=info msg="RemoveContainer for \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\"" Jan 13 20:24:05.065131 containerd[1441]: time="2025-01-13T20:24:05.065092034Z" level=info msg="RemoveContainer for \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\" returns successfully" Jan 13 20:24:05.065326 kubelet[1743]: I0113 20:24:05.065280 1743 scope.go:117] "RemoveContainer" containerID="86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18" Jan 13 20:24:05.066219 containerd[1441]: time="2025-01-13T20:24:05.066187256Z" level=info msg="RemoveContainer for \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\"" Jan 13 20:24:05.084160 containerd[1441]: time="2025-01-13T20:24:05.083569821Z" level=info msg="RemoveContainer for \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\" returns successfully" Jan 13 20:24:05.084388 kubelet[1743]: I0113 20:24:05.084326 1743 scope.go:117] 
"RemoveContainer" containerID="f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7" Jan 13 20:24:05.084654 containerd[1441]: time="2025-01-13T20:24:05.084601368Z" level=error msg="ContainerStatus for \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\": not found" Jan 13 20:24:05.084996 kubelet[1743]: E0113 20:24:05.084780 1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\": not found" containerID="f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7" Jan 13 20:24:05.084996 kubelet[1743]: I0113 20:24:05.084812 1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7"} err="failed to get container status \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"f32503c8c28f7bafb24d41b6983b8683275276ae054179b5add866d167fac3d7\": not found" Jan 13 20:24:05.084996 kubelet[1743]: I0113 20:24:05.084886 1743 scope.go:117] "RemoveContainer" containerID="814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1" Jan 13 20:24:05.085382 containerd[1441]: time="2025-01-13T20:24:05.085348822Z" level=error msg="ContainerStatus for \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\": not found" Jan 13 20:24:05.085498 kubelet[1743]: E0113 20:24:05.085475 1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = an error occurred when try to find container \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\": not found" containerID="814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1" Jan 13 20:24:05.085538 kubelet[1743]: I0113 20:24:05.085503 1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1"} err="failed to get container status \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"814c71f31e43781701984bd49bce80b0c001277f54930ccca1356249951e20d1\": not found" Jan 13 20:24:05.085538 kubelet[1743]: I0113 20:24:05.085523 1743 scope.go:117] "RemoveContainer" containerID="cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d" Jan 13 20:24:05.085776 containerd[1441]: time="2025-01-13T20:24:05.085740147Z" level=error msg="ContainerStatus for \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\": not found" Jan 13 20:24:05.085972 kubelet[1743]: E0113 20:24:05.085948 1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\": not found" containerID="cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d" Jan 13 20:24:05.086018 kubelet[1743]: I0113 20:24:05.085992 1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d"} err="failed to get container status \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\": rpc error: code = NotFound desc 
= an error occurred when try to find container \"cca7e47d95b3b4e396b990863c62c2222b6b8ee4b6f7c1d7b7b39d4ff3df592d\": not found" Jan 13 20:24:05.086018 kubelet[1743]: I0113 20:24:05.086009 1743 scope.go:117] "RemoveContainer" containerID="af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729" Jan 13 20:24:05.086304 containerd[1441]: time="2025-01-13T20:24:05.086271179Z" level=error msg="ContainerStatus for \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\": not found" Jan 13 20:24:05.086400 kubelet[1743]: E0113 20:24:05.086381 1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\": not found" containerID="af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729" Jan 13 20:24:05.086437 kubelet[1743]: I0113 20:24:05.086404 1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729"} err="failed to get container status \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\": rpc error: code = NotFound desc = an error occurred when try to find container \"af026a6d62b7a4f39c0c0bac7db453fc059c70cd61f199ff6e0bc68a6d8d8729\": not found" Jan 13 20:24:05.086437 kubelet[1743]: I0113 20:24:05.086418 1743 scope.go:117] "RemoveContainer" containerID="86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18" Jan 13 20:24:05.086611 containerd[1441]: time="2025-01-13T20:24:05.086585871Z" level=error msg="ContainerStatus for \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\": not found" Jan 13 20:24:05.086715 kubelet[1743]: E0113 20:24:05.086695 1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\": not found" containerID="86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18" Jan 13 20:24:05.086741 kubelet[1743]: I0113 20:24:05.086725 1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18"} err="failed to get container status \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\": rpc error: code = NotFound desc = an error occurred when try to find container \"86fe1a58ed027328f8037b5a8c99609839295f45c93048074c55460534272b18\": not found" Jan 13 20:24:05.253201 systemd[1]: var-lib-kubelet-pods-8f5872d5\x2debbb\x2d4ca5\x2da879\x2dbeb7d15629d4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtf9lq.mount: Deactivated successfully. 
Jan 13 20:24:05.849872 kubelet[1743]: E0113 20:24:05.849797 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:05.932342 kubelet[1743]: I0113 20:24:05.932302 1743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" path="/var/lib/kubelet/pods/8f5872d5-ebbb-4ca5-a879-beb7d15629d4/volumes" Jan 13 20:24:06.850797 kubelet[1743]: E0113 20:24:06.850726 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:06.952964 kubelet[1743]: E0113 20:24:06.952920 1743 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 13 20:24:07.091256 kubelet[1743]: E0113 20:24:07.091171 1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="clean-cilium-state" Jan 13 20:24:07.091256 kubelet[1743]: E0113 20:24:07.091206 1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="cilium-agent" Jan 13 20:24:07.091256 kubelet[1743]: E0113 20:24:07.091216 1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="mount-cgroup" Jan 13 20:24:07.091256 kubelet[1743]: E0113 20:24:07.091221 1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="apply-sysctl-overwrites" Jan 13 20:24:07.091256 kubelet[1743]: E0113 20:24:07.091248 1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="mount-bpf-fs" Jan 13 20:24:07.091256 kubelet[1743]: I0113 20:24:07.091272 1743 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8f5872d5-ebbb-4ca5-a879-beb7d15629d4" containerName="cilium-agent" Jan 13 20:24:07.093329 kubelet[1743]: W0113 20:24:07.093295 1743 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.112" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.112' and this object Jan 13 20:24:07.093385 kubelet[1743]: E0113 20:24:07.093337 1743 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.112\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.112' and this object" logger="UnhandledError" Jan 13 20:24:07.096705 systemd[1]: Created slice kubepods-besteffort-podf6b9640d_9520_4188_8336_9df2aa950718.slice - libcontainer container kubepods-besteffort-podf6b9640d_9520_4188_8336_9df2aa950718.slice. Jan 13 20:24:07.102138 systemd[1]: Created slice kubepods-burstable-podf19657f9_6977_469a_9655_16b2ca682f43.slice - libcontainer container kubepods-burstable-podf19657f9_6977_469a_9655_16b2ca682f43.slice. 
Jan 13 20:24:07.261270 kubelet[1743]: I0113 20:24:07.261185 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-cilium-cgroup\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261270 kubelet[1743]: I0113 20:24:07.261240 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-lib-modules\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261592 kubelet[1743]: I0113 20:24:07.261536 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-xtables-lock\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261644 kubelet[1743]: I0113 20:24:07.261599 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f19657f9-6977-469a-9655-16b2ca682f43-clustermesh-secrets\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261644 kubelet[1743]: I0113 20:24:07.261630 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-bpf-maps\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261736 kubelet[1743]: I0113 20:24:07.261659 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f19657f9-6977-469a-9655-16b2ca682f43-hubble-tls\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261736 kubelet[1743]: I0113 20:24:07.261680 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f19657f9-6977-469a-9655-16b2ca682f43-cilium-config-path\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261736 kubelet[1743]: I0113 20:24:07.261706 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f19657f9-6977-469a-9655-16b2ca682f43-cilium-ipsec-secrets\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261736 kubelet[1743]: I0113 20:24:07.261723 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-host-proc-sys-kernel\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261821 kubelet[1743]: I0113 20:24:07.261747 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-etc-cni-netd\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261821 kubelet[1743]: I0113 20:24:07.261763 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-host-proc-sys-net\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261821 kubelet[1743]: I0113 20:24:07.261782 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-hostproc\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261821 kubelet[1743]: I0113 20:24:07.261798 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-cni-path\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261821 kubelet[1743]: I0113 20:24:07.261815 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ld5\" (UniqueName: \"kubernetes.io/projected/f19657f9-6977-469a-9655-16b2ca682f43-kube-api-access-n7ld5\") pod \"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261918 kubelet[1743]: I0113 20:24:07.261840 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6b9640d-9520-4188-8336-9df2aa950718-cilium-config-path\") pod \"cilium-operator-5d85765b45-4cjgb\" (UID: \"f6b9640d-9520-4188-8336-9df2aa950718\") " pod="kube-system/cilium-operator-5d85765b45-4cjgb" Jan 13 20:24:07.261918 kubelet[1743]: I0113 20:24:07.261869 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f19657f9-6977-469a-9655-16b2ca682f43-cilium-run\") pod 
\"cilium-qqmt8\" (UID: \"f19657f9-6977-469a-9655-16b2ca682f43\") " pod="kube-system/cilium-qqmt8" Jan 13 20:24:07.261918 kubelet[1743]: I0113 20:24:07.261886 1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bpc\" (UniqueName: \"kubernetes.io/projected/f6b9640d-9520-4188-8336-9df2aa950718-kube-api-access-p2bpc\") pod \"cilium-operator-5d85765b45-4cjgb\" (UID: \"f6b9640d-9520-4188-8336-9df2aa950718\") " pod="kube-system/cilium-operator-5d85765b45-4cjgb" Jan 13 20:24:07.851645 kubelet[1743]: E0113 20:24:07.851600 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 13 20:24:08.299454 kubelet[1743]: E0113 20:24:08.299417 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:08.299958 containerd[1441]: time="2025-01-13T20:24:08.299911207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4cjgb,Uid:f6b9640d-9520-4188-8336-9df2aa950718,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:08.314792 kubelet[1743]: E0113 20:24:08.314516 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:08.314958 containerd[1441]: time="2025-01-13T20:24:08.314917883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqmt8,Uid:f19657f9-6977-469a-9655-16b2ca682f43,Namespace:kube-system,Attempt:0,}" Jan 13 20:24:08.316043 containerd[1441]: time="2025-01-13T20:24:08.315974171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:08.316141 containerd[1441]: time="2025-01-13T20:24:08.316025770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:08.316141 containerd[1441]: time="2025-01-13T20:24:08.316036570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.316141 containerd[1441]: time="2025-01-13T20:24:08.316113527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.332410 systemd[1]: Started cri-containerd-6050c79e076030013678dbf17f9cf7e69a73abf4bb770a5a1d24a601f63fee2f.scope - libcontainer container 6050c79e076030013678dbf17f9cf7e69a73abf4bb770a5a1d24a601f63fee2f. Jan 13 20:24:08.343543 containerd[1441]: time="2025-01-13T20:24:08.343032211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 20:24:08.343543 containerd[1441]: time="2025-01-13T20:24:08.343087089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 20:24:08.343543 containerd[1441]: time="2025-01-13T20:24:08.343102249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.343543 containerd[1441]: time="2025-01-13T20:24:08.343205246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 20:24:08.364431 systemd[1]: Started cri-containerd-c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4.scope - libcontainer container c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4. 
Jan 13 20:24:08.368255 containerd[1441]: time="2025-01-13T20:24:08.367569325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4cjgb,Uid:f6b9640d-9520-4188-8336-9df2aa950718,Namespace:kube-system,Attempt:0,} returns sandbox id \"6050c79e076030013678dbf17f9cf7e69a73abf4bb770a5a1d24a601f63fee2f\"" Jan 13 20:24:08.370394 kubelet[1743]: E0113 20:24:08.368873 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:08.372281 containerd[1441]: time="2025-01-13T20:24:08.372241827Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 13 20:24:08.386660 containerd[1441]: time="2025-01-13T20:24:08.386618882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qqmt8,Uid:f19657f9-6977-469a-9655-16b2ca682f43,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\"" Jan 13 20:24:08.387281 kubelet[1743]: E0113 20:24:08.387250 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 20:24:08.389796 containerd[1441]: time="2025-01-13T20:24:08.389763229Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 13 20:24:08.399935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount415731290.mount: Deactivated successfully. 
Jan 13 20:24:08.404945 containerd[1441]: time="2025-01-13T20:24:08.404896261Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c\"" Jan 13 20:24:08.406392 containerd[1441]: time="2025-01-13T20:24:08.406339458Z" level=info msg="StartContainer for \"ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c\"" Jan 13 20:24:08.441431 systemd[1]: Started cri-containerd-ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c.scope - libcontainer container ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c. Jan 13 20:24:08.461413 containerd[1441]: time="2025-01-13T20:24:08.461373111Z" level=info msg="StartContainer for \"ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c\" returns successfully" Jan 13 20:24:08.508636 systemd[1]: cri-containerd-ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c.scope: Deactivated successfully. 
Jan 13 20:24:08.534526 containerd[1441]: time="2025-01-13T20:24:08.534464429Z" level=info msg="shim disconnected" id=ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c namespace=k8s.io
Jan 13 20:24:08.534526 containerd[1441]: time="2025-01-13T20:24:08.534519987Z" level=warning msg="cleaning up after shim disconnected" id=ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c namespace=k8s.io
Jan 13 20:24:08.534526 containerd[1441]: time="2025-01-13T20:24:08.534528307Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:08.851763 kubelet[1743]: E0113 20:24:08.851702 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:09.041582 kubelet[1743]: E0113 20:24:09.041519 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:09.043102 containerd[1441]: time="2025-01-13T20:24:09.043066937Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jan 13 20:24:09.060208 containerd[1441]: time="2025-01-13T20:24:09.059962771Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b\""
Jan 13 20:24:09.060870 containerd[1441]: time="2025-01-13T20:24:09.060834466Z" level=info msg="StartContainer for \"999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b\""
Jan 13 20:24:09.097531 systemd[1]: Started cri-containerd-999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b.scope - libcontainer container 999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b.
Jan 13 20:24:09.119917 containerd[1441]: time="2025-01-13T20:24:09.119595495Z" level=info msg="StartContainer for \"999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b\" returns successfully"
Jan 13 20:24:09.140116 systemd[1]: cri-containerd-999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b.scope: Deactivated successfully.
Jan 13 20:24:09.157669 containerd[1441]: time="2025-01-13T20:24:09.157577843Z" level=info msg="shim disconnected" id=999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b namespace=k8s.io
Jan 13 20:24:09.157669 containerd[1441]: time="2025-01-13T20:24:09.157660841Z" level=warning msg="cleaning up after shim disconnected" id=999c2cfb899d6978f833079b5c6e892b266d9b49b7786c222f3e8be1bb776a3b namespace=k8s.io
Jan 13 20:24:09.157669 containerd[1441]: time="2025-01-13T20:24:09.157669840Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:09.366127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac07e232e456c1b7ee3a17e88fb20c1df556ee72a00ad1bf7e879678279ca27c-rootfs.mount: Deactivated successfully.
Jan 13 20:24:09.551561 containerd[1441]: time="2025-01-13T20:24:09.551515112Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:24:09.551979 containerd[1441]: time="2025-01-13T20:24:09.551935700Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138302"
Jan 13 20:24:09.552873 containerd[1441]: time="2025-01-13T20:24:09.552821074Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 20:24:09.555359 containerd[1441]: time="2025-01-13T20:24:09.555315403Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.183031617s"
Jan 13 20:24:09.555359 containerd[1441]: time="2025-01-13T20:24:09.555355001Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 13 20:24:09.557779 containerd[1441]: time="2025-01-13T20:24:09.557748853Z" level=info msg="CreateContainer within sandbox \"6050c79e076030013678dbf17f9cf7e69a73abf4bb770a5a1d24a601f63fee2f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 13 20:24:09.569456 containerd[1441]: time="2025-01-13T20:24:09.569411517Z" level=info msg="CreateContainer within sandbox \"6050c79e076030013678dbf17f9cf7e69a73abf4bb770a5a1d24a601f63fee2f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"34ad40e1ba20785e140e1fe4f145599bb5c8a2e9b900ab31e3d3b002918e01b7\""
Jan 13 20:24:09.570032 containerd[1441]: time="2025-01-13T20:24:09.570007060Z" level=info msg="StartContainer for \"34ad40e1ba20785e140e1fe4f145599bb5c8a2e9b900ab31e3d3b002918e01b7\""
Jan 13 20:24:09.596482 systemd[1]: Started cri-containerd-34ad40e1ba20785e140e1fe4f145599bb5c8a2e9b900ab31e3d3b002918e01b7.scope - libcontainer container 34ad40e1ba20785e140e1fe4f145599bb5c8a2e9b900ab31e3d3b002918e01b7.
Jan 13 20:24:09.652141 containerd[1441]: time="2025-01-13T20:24:09.652015061Z" level=info msg="StartContainer for \"34ad40e1ba20785e140e1fe4f145599bb5c8a2e9b900ab31e3d3b002918e01b7\" returns successfully"
Jan 13 20:24:09.852596 kubelet[1743]: E0113 20:24:09.852456 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:10.045899 kubelet[1743]: E0113 20:24:10.045841 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:10.048341 containerd[1441]: time="2025-01-13T20:24:10.048305799Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 13 20:24:10.049018 kubelet[1743]: E0113 20:24:10.048839 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:10.064846 containerd[1441]: time="2025-01-13T20:24:10.064787538Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7\""
Jan 13 20:24:10.065329 containerd[1441]: time="2025-01-13T20:24:10.065287324Z" level=info msg="StartContainer for \"06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7\""
Jan 13 20:24:10.093460 systemd[1]: Started cri-containerd-06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7.scope - libcontainer container 06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7.
Jan 13 20:24:10.121372 containerd[1441]: time="2025-01-13T20:24:10.121260319Z" level=info msg="StartContainer for \"06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7\" returns successfully"
Jan 13 20:24:10.123745 systemd[1]: cri-containerd-06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7.scope: Deactivated successfully.
Jan 13 20:24:10.151921 containerd[1441]: time="2025-01-13T20:24:10.151780665Z" level=info msg="shim disconnected" id=06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7 namespace=k8s.io
Jan 13 20:24:10.151921 containerd[1441]: time="2025-01-13T20:24:10.151900782Z" level=warning msg="cleaning up after shim disconnected" id=06ccd21c20e8270993ce78eccb0c2e609397c3cb009886a8a3119e870647b1e7 namespace=k8s.io
Jan 13 20:24:10.151921 containerd[1441]: time="2025-01-13T20:24:10.151910141Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:10.853378 kubelet[1743]: E0113 20:24:10.853328 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:11.052894 kubelet[1743]: E0113 20:24:11.052857 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:11.053023 kubelet[1743]: E0113 20:24:11.052960 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:11.054760 containerd[1441]: time="2025-01-13T20:24:11.054719367Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 13 20:24:11.070202 containerd[1441]: time="2025-01-13T20:24:11.070131908Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20\""
Jan 13 20:24:11.070820 kubelet[1743]: I0113 20:24:11.070728 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4cjgb" podStartSLOduration=2.885418581 podStartE2EDuration="4.070699772s" podCreationTimestamp="2025-01-13 20:24:07 +0000 UTC" firstStartedPulling="2025-01-13 20:24:08.371014703 +0000 UTC m=+58.129087067" lastFinishedPulling="2025-01-13 20:24:09.556295894 +0000 UTC m=+59.314368258" observedRunningTime="2025-01-13 20:24:10.080786011 +0000 UTC m=+59.838858375" watchObservedRunningTime="2025-01-13 20:24:11.070699772 +0000 UTC m=+60.828772136"
Jan 13 20:24:11.070968 containerd[1441]: time="2025-01-13T20:24:11.070774730Z" level=info msg="StartContainer for \"64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20\""
Jan 13 20:24:11.072142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2558461495.mount: Deactivated successfully.
Jan 13 20:24:11.095402 systemd[1]: Started cri-containerd-64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20.scope - libcontainer container 64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20.
Jan 13 20:24:11.117592 systemd[1]: cri-containerd-64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20.scope: Deactivated successfully.
Jan 13 20:24:11.118952 containerd[1441]: time="2025-01-13T20:24:11.118916140Z" level=info msg="StartContainer for \"64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20\" returns successfully"
Jan 13 20:24:11.136918 containerd[1441]: time="2025-01-13T20:24:11.136742975Z" level=info msg="shim disconnected" id=64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20 namespace=k8s.io
Jan 13 20:24:11.136918 containerd[1441]: time="2025-01-13T20:24:11.136802214Z" level=warning msg="cleaning up after shim disconnected" id=64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20 namespace=k8s.io
Jan 13 20:24:11.136918 containerd[1441]: time="2025-01-13T20:24:11.136811413Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:24:11.366360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64ee87e5c2ba101ffb8b95697db22cbb019fdcdb3b0011288dc8d51dd0e58e20-rootfs.mount: Deactivated successfully.
Jan 13 20:24:11.817223 kubelet[1743]: E0113 20:24:11.817168 1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:11.839761 containerd[1441]: time="2025-01-13T20:24:11.839720089Z" level=info msg="StopPodSandbox for \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\""
Jan 13 20:24:11.840077 containerd[1441]: time="2025-01-13T20:24:11.839802726Z" level=info msg="TearDown network for sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" successfully"
Jan 13 20:24:11.840077 containerd[1441]: time="2025-01-13T20:24:11.839814846Z" level=info msg="StopPodSandbox for \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" returns successfully"
Jan 13 20:24:11.840275 containerd[1441]: time="2025-01-13T20:24:11.840218675Z" level=info msg="RemovePodSandbox for \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\""
Jan 13 20:24:11.840275 containerd[1441]: time="2025-01-13T20:24:11.840261514Z" level=info msg="Forcibly stopping sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\""
Jan 13 20:24:11.840343 containerd[1441]: time="2025-01-13T20:24:11.840316392Z" level=info msg="TearDown network for sandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" successfully"
Jan 13 20:24:11.843614 containerd[1441]: time="2025-01-13T20:24:11.843576504Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:24:11.843655 containerd[1441]: time="2025-01-13T20:24:11.843638382Z" level=info msg="RemovePodSandbox \"e610ed5a3b1a0e9b533aae05ae19ea7a8ecddf5e73a74d3ed92247975f0673a9\" returns successfully"
Jan 13 20:24:11.854204 kubelet[1743]: E0113 20:24:11.854158 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:11.953464 kubelet[1743]: E0113 20:24:11.953428 1743 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 13 20:24:12.056252 kubelet[1743]: E0113 20:24:12.056185 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:12.058057 containerd[1441]: time="2025-01-13T20:24:12.057944825Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 13 20:24:12.076984 containerd[1441]: time="2025-01-13T20:24:12.076882284Z" level=info msg="CreateContainer within sandbox \"c1bf0261170651b4faf1894aeda96bd044512a6e199cee6a7055b8e5cf66a9c4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dbf8f854bbce6a663688585dd7eec7d25e49b4603ab0124956f21c236d63c851\""
Jan 13 20:24:12.077616 containerd[1441]: time="2025-01-13T20:24:12.077400431Z" level=info msg="StartContainer for \"dbf8f854bbce6a663688585dd7eec7d25e49b4603ab0124956f21c236d63c851\""
Jan 13 20:24:12.108450 systemd[1]: Started cri-containerd-dbf8f854bbce6a663688585dd7eec7d25e49b4603ab0124956f21c236d63c851.scope - libcontainer container dbf8f854bbce6a663688585dd7eec7d25e49b4603ab0124956f21c236d63c851.
Jan 13 20:24:12.140284 containerd[1441]: time="2025-01-13T20:24:12.140205888Z" level=info msg="StartContainer for \"dbf8f854bbce6a663688585dd7eec7d25e49b4603ab0124956f21c236d63c851\" returns successfully"
Jan 13 20:24:12.396316 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 13 20:24:12.855050 kubelet[1743]: E0113 20:24:12.854998 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:13.060750 kubelet[1743]: E0113 20:24:13.060683 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:13.076625 kubelet[1743]: I0113 20:24:13.076564 1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qqmt8" podStartSLOduration=6.076547081 podStartE2EDuration="6.076547081s" podCreationTimestamp="2025-01-13 20:24:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:24:13.075682463 +0000 UTC m=+62.833754827" watchObservedRunningTime="2025-01-13 20:24:13.076547081 +0000 UTC m=+62.834619445"
Jan 13 20:24:13.250960 kubelet[1743]: I0113 20:24:13.250700 1743 setters.go:600] "Node became not ready" node="10.0.0.112" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-13T20:24:13Z","lastTransitionTime":"2025-01-13T20:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jan 13 20:24:13.855742 kubelet[1743]: E0113 20:24:13.855685 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:14.316392 kubelet[1743]: E0113 20:24:14.316336 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:14.856211 kubelet[1743]: E0113 20:24:14.856159 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:15.237742 systemd-networkd[1390]: lxc_health: Link UP
Jan 13 20:24:15.247103 systemd-networkd[1390]: lxc_health: Gained carrier
Jan 13 20:24:15.856964 kubelet[1743]: E0113 20:24:15.856880 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:16.317512 kubelet[1743]: E0113 20:24:16.317210 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:16.857530 kubelet[1743]: E0113 20:24:16.857467 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:17.068899 kubelet[1743]: E0113 20:24:17.068852 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:17.195417 systemd-networkd[1390]: lxc_health: Gained IPv6LL
Jan 13 20:24:17.858324 kubelet[1743]: E0113 20:24:17.858280 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:18.859074 kubelet[1743]: E0113 20:24:18.859021 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:19.859481 kubelet[1743]: E0113 20:24:19.859420 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:20.859649 kubelet[1743]: E0113 20:24:20.859600 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:20.929218 kubelet[1743]: E0113 20:24:20.929183 1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:24:21.860805 kubelet[1743]: E0113 20:24:21.860750 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:22.861115 kubelet[1743]: E0113 20:24:22.861058 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 13 20:24:23.861793 kubelet[1743]: E0113 20:24:23.861752 1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"