Jan 30 13:13:48.915173 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:13:48.915194 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:13:48.915204 kernel: KASLR enabled
Jan 30 13:13:48.915209 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:13:48.915215 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 30 13:13:48.915220 kernel: random: crng init done
Jan 30 13:13:48.915227 kernel: secureboot: Secure boot disabled
Jan 30 13:13:48.915233 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:13:48.915239 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 30 13:13:48.915246 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:13:48.915253 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915259 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915264 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915271 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915278 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915285 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915291 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915297 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915303 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:13:48.915309 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:13:48.915315 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:13:48.915321 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:48.915327 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 13:13:48.915333 kernel: Zone ranges:
Jan 30 13:13:48.915339 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:48.915347 kernel: DMA32 empty
Jan 30 13:13:48.915353 kernel: Normal empty
Jan 30 13:13:48.915358 kernel: Movable zone start for each node
Jan 30 13:13:48.915364 kernel: Early memory node ranges
Jan 30 13:13:48.915370 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 30 13:13:48.915376 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 30 13:13:48.915382 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 30 13:13:48.915388 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:13:48.915394 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:13:48.915400 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:13:48.915405 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:13:48.915411 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:13:48.915419 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:13:48.915425 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:13:48.915431 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:13:48.915440 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:13:48.915446 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:13:48.915453 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:13:48.915461 kernel: psci: Trusted OS migration not required
Jan 30 13:13:48.915467 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:13:48.915474 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:13:48.915480 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:13:48.915487 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:13:48.915493 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:13:48.915500 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:13:48.915506 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:13:48.915513 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:13:48.915519 kernel: CPU features: detected: Spectre-v4
Jan 30 13:13:48.915527 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:13:48.915533 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:13:48.915540 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:13:48.915546 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:13:48.915552 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:13:48.915559 kernel: alternatives: applying boot alternatives
Jan 30 13:13:48.915566 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:13:48.915573 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:13:48.915579 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:13:48.915586 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:13:48.915592 kernel: Fallback order for Node 0: 0
Jan 30 13:13:48.915600 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:13:48.915606 kernel: Policy zone: DMA
Jan 30 13:13:48.915613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:13:48.915619 kernel: software IO TLB: area num 4.
Jan 30 13:13:48.915626 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:13:48.915632 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 30 13:13:48.915639 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:13:48.915645 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:13:48.915652 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:13:48.915659 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:13:48.915666 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:13:48.915672 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:13:48.915680 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:13:48.915687 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:13:48.915693 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:13:48.915699 kernel: GICv3: 256 SPIs implemented
Jan 30 13:13:48.915705 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:13:48.915712 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:13:48.915718 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:13:48.915724 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:13:48.915731 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:13:48.915737 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:13:48.915744 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:13:48.915751 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:13:48.915758 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:13:48.915764 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:13:48.915771 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:48.915777 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:13:48.915784 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:13:48.915790 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:13:48.915797 kernel: arm-pv: using stolen time PV
Jan 30 13:13:48.915804 kernel: Console: colour dummy device 80x25
Jan 30 13:13:48.915810 kernel: ACPI: Core revision 20230628
Jan 30 13:13:48.915817 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:13:48.915825 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:13:48.915854 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:13:48.915861 kernel: landlock: Up and running.
Jan 30 13:13:48.915867 kernel: SELinux: Initializing.
Jan 30 13:13:48.915874 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:13:48.915881 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:13:48.915888 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:13:48.915901 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:13:48.915908 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:13:48.915917 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:13:48.915923 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:13:48.915930 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:13:48.915936 kernel: Remapping and enabling EFI services.
Jan 30 13:13:48.915943 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:13:48.915950 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:13:48.915956 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:13:48.915963 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:13:48.915969 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:48.915977 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:13:48.915984 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:13:48.915996 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:13:48.916004 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:13:48.916012 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:48.916019 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:13:48.916025 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:13:48.916032 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:13:48.916039 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:13:48.916048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:13:48.916054 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:13:48.916061 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:13:48.916068 kernel: SMP: Total of 4 processors activated.
Jan 30 13:13:48.916075 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:13:48.916082 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:13:48.916090 kernel: CPU features: detected: Common not Private translations
Jan 30 13:13:48.916096 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:13:48.916105 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:13:48.916112 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:13:48.916119 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:13:48.916126 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:13:48.916133 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:13:48.916140 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:13:48.916146 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:13:48.916154 kernel: alternatives: applying system-wide alternatives
Jan 30 13:13:48.916161 kernel: devtmpfs: initialized
Jan 30 13:13:48.916168 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:13:48.916176 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:13:48.916184 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:13:48.916191 kernel: SMBIOS 3.0.0 present.
Jan 30 13:13:48.916198 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 30 13:13:48.916205 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:13:48.916212 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:13:48.916219 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:13:48.916226 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:13:48.916233 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:13:48.916242 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jan 30 13:13:48.916249 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:13:48.916256 kernel: cpuidle: using governor menu
Jan 30 13:13:48.916262 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:13:48.916269 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:13:48.916276 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:13:48.916283 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:13:48.916290 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:13:48.916297 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:13:48.916305 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:13:48.916312 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:13:48.916319 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:13:48.916326 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:13:48.916333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:13:48.916340 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:13:48.916347 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:13:48.916354 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:13:48.916361 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:13:48.916369 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:13:48.916376 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:13:48.916383 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:13:48.916390 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:13:48.916397 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:13:48.916404 kernel: ACPI: Interpreter enabled
Jan 30 13:13:48.916411 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:13:48.916418 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:13:48.916425 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:13:48.916434 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:13:48.916441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:13:48.916569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:13:48.916640 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:13:48.916703 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:13:48.916765 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:13:48.916826 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:13:48.916850 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:13:48.916857 kernel: PCI host bridge to bus 0000:00
Jan 30 13:13:48.916938 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:13:48.916998 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:13:48.917055 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:13:48.917117 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:13:48.917196 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:13:48.917277 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:13:48.917346 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:13:48.917410 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:13:48.917474 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:13:48.917537 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:13:48.917601 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:13:48.917666 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:13:48.917726 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:13:48.917782 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:13:48.917863 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:13:48.917874 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:13:48.917881 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:13:48.917888 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:13:48.917902 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:13:48.917912 kernel: iommu: Default domain type: Translated
Jan 30 13:13:48.917919 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:13:48.917926 kernel: efivars: Registered efivars operations
Jan 30 13:13:48.917933 kernel: vgaarb: loaded
Jan 30 13:13:48.917940 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:13:48.917947 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:13:48.917954 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:13:48.917961 kernel: pnp: PnP ACPI init
Jan 30 13:13:48.918036 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:13:48.918049 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:13:48.918056 kernel: NET: Registered PF_INET protocol family
Jan 30 13:13:48.918063 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:13:48.918070 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:13:48.918077 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:13:48.918084 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:13:48.918091 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:13:48.918098 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:13:48.918106 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:13:48.918114 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:13:48.918121 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:13:48.918128 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:13:48.918134 kernel: kvm [1]: HYP mode not available
Jan 30 13:13:48.918141 kernel: Initialise system trusted keyrings
Jan 30 13:13:48.918148 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:13:48.918155 kernel: Key type asymmetric registered
Jan 30 13:13:48.918162 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:13:48.918170 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:13:48.918178 kernel: io scheduler mq-deadline registered
Jan 30 13:13:48.918186 kernel: io scheduler kyber registered
Jan 30 13:13:48.918193 kernel: io scheduler bfq registered
Jan 30 13:13:48.918200 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:13:48.918207 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:13:48.918215 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:13:48.918282 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:13:48.918291 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:13:48.918299 kernel: thunder_xcv, ver 1.0
Jan 30 13:13:48.918308 kernel: thunder_bgx, ver 1.0
Jan 30 13:13:48.918315 kernel: nicpf, ver 1.0
Jan 30 13:13:48.918322 kernel: nicvf, ver 1.0
Jan 30 13:13:48.918392 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:13:48.918456 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:13:48 UTC (1738242828)
Jan 30 13:13:48.918466 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:13:48.918474 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:13:48.918481 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:13:48.918491 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:13:48.918511 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:13:48.918518 kernel: Segment Routing with IPv6
Jan 30 13:13:48.918525 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:13:48.918533 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:13:48.918540 kernel: Key type dns_resolver registered
Jan 30 13:13:48.918547 kernel: registered taskstats version 1
Jan 30 13:13:48.918554 kernel: Loading compiled-in X.509 certificates
Jan 30 13:13:48.918562 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:13:48.918570 kernel: Key type .fscrypt registered
Jan 30 13:13:48.918577 kernel: Key type fscrypt-provisioning registered
Jan 30 13:13:48.918585 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:13:48.918592 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:13:48.918599 kernel: ima: No architecture policies found
Jan 30 13:13:48.918606 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:13:48.918613 kernel: clk: Disabling unused clocks
Jan 30 13:13:48.918620 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:13:48.918627 kernel: Run /init as init process
Jan 30 13:13:48.918635 kernel: with arguments:
Jan 30 13:13:48.918642 kernel: /init
Jan 30 13:13:48.918649 kernel: with environment:
Jan 30 13:13:48.918656 kernel: HOME=/
Jan 30 13:13:48.918663 kernel: TERM=linux
Jan 30 13:13:48.918669 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:13:48.918678 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:13:48.918687 systemd[1]: Detected virtualization kvm.
Jan 30 13:13:48.918697 systemd[1]: Detected architecture arm64.
Jan 30 13:13:48.918704 systemd[1]: Running in initrd.
Jan 30 13:13:48.918712 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:13:48.918719 systemd[1]: Hostname set to .
Jan 30 13:13:48.918726 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:13:48.918734 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:13:48.918741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:13:48.918749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:13:48.918758 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:13:48.918766 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:13:48.918774 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:13:48.918782 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:13:48.918791 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:13:48.918799 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:13:48.918808 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:13:48.918816 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:13:48.918824 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:13:48.918848 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:13:48.918856 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:13:48.918863 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:13:48.918871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:13:48.918878 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:13:48.918886 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:13:48.918903 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:13:48.918911 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:13:48.918918 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:13:48.918926 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:13:48.918933 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:13:48.918941 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:13:48.918949 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:13:48.918956 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:13:48.918966 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:13:48.918974 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:13:48.918981 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:13:48.918989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:48.918997 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:13:48.919004 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:13:48.919011 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:13:48.919021 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:13:48.919029 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:13:48.919055 systemd-journald[239]: Collecting audit messages is disabled.
Jan 30 13:13:48.919076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:48.919085 systemd-journald[239]: Journal started
Jan 30 13:13:48.919107 systemd-journald[239]: Runtime Journal (/run/log/journal/de14eaed69d84deab9b8a07b9c28a1c0) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:13:48.926934 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:13:48.926964 kernel: Bridge firewalling registered
Jan 30 13:13:48.910368 systemd-modules-load[240]: Inserted module 'overlay'
Jan 30 13:13:48.924655 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 30 13:13:48.929917 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:48.932447 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:13:48.932474 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:13:48.934882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:13:48.938291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:13:48.940015 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:13:48.941964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:13:48.949118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:48.950221 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:13:48.966011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:13:48.966890 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:13:48.969797 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:13:48.978563 dracut-cmdline[275]: dracut-dracut-053
Jan 30 13:13:48.981217 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:13:49.000486 systemd-resolved[281]: Positive Trust Anchors:
Jan 30 13:13:49.000506 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:13:49.000537 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:13:49.005478 systemd-resolved[281]: Defaulting to hostname 'linux'.
Jan 30 13:13:49.008548 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:13:49.009497 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:13:49.055873 kernel: SCSI subsystem initialized
Jan 30 13:13:49.059843 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:13:49.066850 kernel: iscsi: registered transport (tcp)
Jan 30 13:13:49.081870 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:13:49.081894 kernel: QLogic iSCSI HBA Driver
Jan 30 13:13:49.124962 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:13:49.139025 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:13:49.160625 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:13:49.160685 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:13:49.160704 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:13:49.208867 kernel: raid6: neonx8 gen() 15206 MB/s
Jan 30 13:13:49.225855 kernel: raid6: neonx4 gen() 15227 MB/s
Jan 30 13:13:49.242851 kernel: raid6: neonx2 gen() 12758 MB/s
Jan 30 13:13:49.259848 kernel: raid6: neonx1 gen() 10248 MB/s
Jan 30 13:13:49.276850 kernel: raid6: int64x8 gen() 6793 MB/s
Jan 30 13:13:49.293845 kernel: raid6: int64x4 gen() 7349 MB/s
Jan 30 13:13:49.310842 kernel: raid6: int64x2 gen() 6109 MB/s
Jan 30 13:13:49.327846 kernel: raid6: int64x1 gen() 5055 MB/s
Jan 30 13:13:49.327863 kernel: raid6: using algorithm neonx4 gen() 15227 MB/s
Jan 30 13:13:49.344847 kernel: raid6: .... xor() 12489 MB/s, rmw enabled
Jan 30 13:13:49.344862 kernel: raid6: using neon recovery algorithm
Jan 30 13:13:49.350026 kernel: xor: measuring software checksum speed
Jan 30 13:13:49.350043 kernel: 8regs : 21618 MB/sec
Jan 30 13:13:49.351134 kernel: 32regs : 21681 MB/sec
Jan 30 13:13:49.351152 kernel: arm64_neon : 27917 MB/sec
Jan 30 13:13:49.351162 kernel: xor: using function: arm64_neon (27917 MB/sec)
Jan 30 13:13:49.404578 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:13:49.418088 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:13:49.430017 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:13:49.442404 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Jan 30 13:13:49.445532 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:13:49.448211 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:13:49.462163 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jan 30 13:13:49.491217 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:13:49.503085 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:13:49.542955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:13:49.554064 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:13:49.566057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:13:49.567478 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:13:49.568864 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:13:49.570934 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:13:49.580198 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:13:49.593248 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:13:49.598155 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:13:49.598270 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:13:49.598283 kernel: GPT:9289727 != 19775487
Jan 30 13:13:49.598293 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:13:49.598303 kernel: GPT:9289727 != 19775487
Jan 30 13:13:49.598311 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:13:49.598320 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:49.593531 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:13:49.597287 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:13:49.597414 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:49.603097 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:49.605784 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:13:49.605954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:49.608346 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:49.617859 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (523)
Jan 30 13:13:49.620659 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:49.622881 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (514)
Jan 30 13:13:49.633049 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:13:49.634966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:49.640006 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:13:49.650608 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:13:49.654677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:13:49.655705 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:13:49.668010 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:13:49.670045 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:13:49.675836 disk-uuid[551]: Primary Header is updated.
Jan 30 13:13:49.675836 disk-uuid[551]: Secondary Entries is updated.
Jan 30 13:13:49.675836 disk-uuid[551]: Secondary Header is updated.
Jan 30 13:13:49.678861 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:49.689972 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:50.692681 disk-uuid[555]: The operation has completed successfully.
Jan 30 13:13:50.693771 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:13:50.726104 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:13:50.726230 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:13:50.746053 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:13:50.749358 sh[576]: Success
Jan 30 13:13:50.769853 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:13:50.808949 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:13:50.818180 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:13:50.819787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:13:50.834098 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:13:50.834151 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:50.834161 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:13:50.836302 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:13:50.836320 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:13:50.841096 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:13:50.842352 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:13:50.864056 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:13:50.865560 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:13:50.877071 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:50.877125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:50.877142 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:50.879898 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:50.889644 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:13:50.891530 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:50.900789 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:13:50.908016 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:13:50.980154 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:13:50.989007 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:13:51.014419 systemd-networkd[765]: lo: Link UP
Jan 30 13:13:51.014431 systemd-networkd[765]: lo: Gained carrier
Jan 30 13:13:51.015342 systemd-networkd[765]: Enumeration completed
Jan 30 13:13:51.015460 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:13:51.015761 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:51.015764 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:13:51.016636 systemd[1]: Reached target network.target - Network.
Jan 30 13:13:51.016905 systemd-networkd[765]: eth0: Link UP
Jan 30 13:13:51.016908 systemd-networkd[765]: eth0: Gained carrier
Jan 30 13:13:51.016915 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:51.024507 ignition[677]: Ignition 2.20.0
Jan 30 13:13:51.024514 ignition[677]: Stage: fetch-offline
Jan 30 13:13:51.026405 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:51.026419 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:51.026588 ignition[677]: parsed url from cmdline: ""
Jan 30 13:13:51.026592 ignition[677]: no config URL provided
Jan 30 13:13:51.026599 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:13:51.026607 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:13:51.026636 ignition[677]: op(1): [started] loading QEMU firmware config module
Jan 30 13:13:51.026641 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:13:51.037672 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:13:51.037921 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:13:51.044771 ignition[677]: parsing config with SHA512: 79c9c17aedd01cbe9df875921de6c0cefabec86a39464ae403516fc8c8a9d3a5e12298c15070087e7497c6b83394edb6bbb942a2f13f230e578cedba8ea1e2fb
Jan 30 13:13:51.048573 unknown[677]: fetched base config from "system"
Jan 30 13:13:51.048588 unknown[677]: fetched user config from "qemu"
Jan 30 13:13:51.048933 ignition[677]: fetch-offline: fetch-offline passed
Jan 30 13:13:51.049015 ignition[677]: Ignition finished successfully
Jan 30 13:13:51.051010 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:13:51.052240 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:13:51.065000 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:13:51.077533 ignition[777]: Ignition 2.20.0
Jan 30 13:13:51.077544 ignition[777]: Stage: kargs
Jan 30 13:13:51.077718 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:51.077729 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:51.078501 ignition[777]: kargs: kargs passed
Jan 30 13:13:51.078555 ignition[777]: Ignition finished successfully
Jan 30 13:13:51.082526 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:13:51.094015 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:13:51.103982 ignition[785]: Ignition 2.20.0
Jan 30 13:13:51.103992 ignition[785]: Stage: disks
Jan 30 13:13:51.104176 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:51.106396 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:13:51.104187 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:51.107563 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:13:51.104881 ignition[785]: disks: disks passed
Jan 30 13:13:51.108768 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:13:51.104941 ignition[785]: Ignition finished successfully
Jan 30 13:13:51.110314 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:13:51.111702 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:13:51.112808 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:13:51.119986 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:13:51.130109 systemd-fsck[796]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:13:51.133969 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:13:51.136037 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:13:51.192850 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:13:51.192951 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:13:51.194108 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:13:51.212959 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:13:51.215758 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:13:51.216753 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:13:51.216822 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:13:51.221899 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (804)
Jan 30 13:13:51.216905 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:13:51.224848 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:51.224865 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:51.224875 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:51.222409 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:13:51.226988 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:13:51.228848 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:51.231296 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:13:51.279894 initrd-setup-root[828]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:13:51.284641 initrd-setup-root[835]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:13:51.289286 initrd-setup-root[842]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:13:51.293139 initrd-setup-root[849]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:13:51.380609 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:13:51.390002 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:13:51.391496 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:13:51.396848 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:51.415983 ignition[917]: INFO : Ignition 2.20.0
Jan 30 13:13:51.415983 ignition[917]: INFO : Stage: mount
Jan 30 13:13:51.417483 ignition[917]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:51.417483 ignition[917]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:51.417483 ignition[917]: INFO : mount: mount passed
Jan 30 13:13:51.417483 ignition[917]: INFO : Ignition finished successfully
Jan 30 13:13:51.421093 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:13:51.432055 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:13:51.433039 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:13:51.831796 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:13:51.846118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:13:51.851853 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (931)
Jan 30 13:13:51.856265 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:13:51.856294 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:13:51.856313 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:13:51.863857 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:13:51.865580 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:13:51.885613 ignition[948]: INFO : Ignition 2.20.0
Jan 30 13:13:51.885613 ignition[948]: INFO : Stage: files
Jan 30 13:13:51.887119 ignition[948]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:51.887119 ignition[948]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:51.887119 ignition[948]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:13:51.890300 ignition[948]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:13:51.890300 ignition[948]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:13:51.890300 ignition[948]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:13:51.890300 ignition[948]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:51.894982 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 30 13:13:51.891451 unknown[948]: wrote ssh authorized keys file for user: core
Jan 30 13:13:52.213729 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:13:52.449625 ignition[948]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 30 13:13:52.449625 ignition[948]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 30 13:13:52.452746 ignition[948]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:13:52.452746 ignition[948]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:13:52.452746 ignition[948]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 30 13:13:52.452746 ignition[948]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:13:52.481105 systemd-networkd[765]: eth0: Gained IPv6LL
Jan 30 13:13:52.488669 ignition[948]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:13:52.493627 ignition[948]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:13:52.495752 ignition[948]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:13:52.495752 ignition[948]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:13:52.495752 ignition[948]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:13:52.495752 ignition[948]: INFO : files: files passed
Jan 30 13:13:52.495752 ignition[948]: INFO : Ignition finished successfully
Jan 30 13:13:52.496567 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:13:52.508073 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:13:52.511047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:13:52.514647 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:13:52.514759 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:13:52.519201 initrd-setup-root-after-ignition[977]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:13:52.522393 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:52.522393 initrd-setup-root-after-ignition[979]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:52.524762 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:13:52.525172 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:13:52.527292 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:13:52.541050 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:13:52.562653 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:13:52.562774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:13:52.564573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:13:52.565382 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:13:52.566174 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:13:52.567057 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:13:52.583603 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:13:52.603033 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:13:52.611457 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:13:52.612538 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:13:52.614078 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:13:52.615399 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:13:52.615534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:13:52.617503 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:13:52.619114 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:13:52.620508 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:13:52.622009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:13:52.623608 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:13:52.625237 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:13:52.626652 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:13:52.628146 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:13:52.629740 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:13:52.631088 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:13:52.632398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:13:52.632538 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:13:52.634536 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:13:52.636350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:13:52.637725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:13:52.637860 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:13:52.639325 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:13:52.639455 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:13:52.641550 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:13:52.641678 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:13:52.643070 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:13:52.644194 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:13:52.644893 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:13:52.646563 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:13:52.647709 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:13:52.649077 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:13:52.649177 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:13:52.650673 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:13:52.650774 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:13:52.651931 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:13:52.652049 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:13:52.653502 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:13:52.653603 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:13:52.661087 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:13:52.662608 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:13:52.663351 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:13:52.663488 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:13:52.665066 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:13:52.665176 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:13:52.670005 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:13:52.670111 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:13:52.681990 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:13:52.683406 ignition[1004]: INFO : Ignition 2.20.0
Jan 30 13:13:52.683406 ignition[1004]: INFO : Stage: umount
Jan 30 13:13:52.687177 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:13:52.687177 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:13:52.687177 ignition[1004]: INFO : umount: umount passed
Jan 30 13:13:52.687177 ignition[1004]: INFO : Ignition finished successfully
Jan 30 13:13:52.689435 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:13:52.689549 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:13:52.691410 systemd[1]: Stopped target network.target - Network.
Jan 30 13:13:52.692871 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:13:52.692944 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:13:52.694519 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:13:52.694567 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:13:52.695860 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:13:52.695913 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:13:52.697346 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:13:52.697386 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:13:52.699096 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:13:52.700438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:13:52.705106 systemd-networkd[765]: eth0: DHCPv6 lease lost
Jan 30 13:13:52.707380 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:13:52.707505 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:13:52.709192 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:13:52.709225 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:13:52.716049 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:13:52.716852 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:13:52.716926 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:13:52.718780 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:13:52.721843 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:13:52.722005 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:13:52.726552 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:13:52.726642 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:13:52.728222 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:13:52.728274 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:13:52.729631 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:13:52.729670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:13:52.742287 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:13:52.742468 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:13:52.744937 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:13:52.745030 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:13:52.746798 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:13:52.747053 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:13:52.748580 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:13:52.748616 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:13:52.750143 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:13:52.750200 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:13:52.752416 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:13:52.752472 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:13:52.754482 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:13:52.754530 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:13:52.770440 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:13:52.772152 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:13:52.772238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:13:52.773943 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:13:52.774089 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:13:52.775749 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:13:52.775801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:13:52.777479 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:13:52.777530 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:52.779978 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:13:52.780095 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:13:52.781303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:13:52.781395 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:13:52.783698 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:13:52.785267 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:13:52.785367 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:13:52.800075 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:13:52.808015 systemd[1]: Switching root.
Jan 30 13:13:52.843112 systemd-journald[239]: Journal stopped
Jan 30 13:13:53.657653 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:13:53.657711 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:13:53.657723 kernel: SELinux: policy capability open_perms=1
Jan 30 13:13:53.657733 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:13:53.657743 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:13:53.657752 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:13:53.657762 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:13:53.657774 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:13:53.657784 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:13:53.657795 kernel: audit: type=1403 audit(1738242832.970:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:13:53.657807 systemd[1]: Successfully loaded SELinux policy in 32.970ms.
Jan 30 13:13:53.657826 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.619ms.
Jan 30 13:13:53.657863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:13:53.657883 systemd[1]: Detected virtualization kvm.
Jan 30 13:13:53.657895 systemd[1]: Detected architecture arm64.
Jan 30 13:13:53.657906 systemd[1]: Detected first boot.
Jan 30 13:13:53.657916 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:13:53.657934 zram_generator::config[1049]: No configuration found.
Jan 30 13:13:53.657945 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:13:53.657959 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:13:53.657970 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:13:53.657980 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:13:53.657991 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:13:53.658004 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:13:53.658014 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:13:53.658025 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:13:53.658035 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:13:53.658046 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:13:53.658056 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:13:53.658067 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:13:53.658077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:13:53.658089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:13:53.658101 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:13:53.658113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:13:53.658123 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:13:53.658134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:13:53.658144 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 30 13:13:53.658155 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:13:53.658165 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:13:53.658175 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:13:53.658189 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:13:53.658200 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:13:53.658210 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:13:53.658220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:13:53.658231 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:13:53.658241 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:13:53.658251 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:13:53.658262 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:13:53.658274 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:13:53.658284 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:13:53.658295 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:13:53.658309 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:13:53.658319 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:13:53.658330 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:13:53.658340 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:13:53.658351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:13:53.658362 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:13:53.658374 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:13:53.658385 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:13:53.658396 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:13:53.658406 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:13:53.658418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:13:53.658428 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:13:53.658438 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:13:53.658448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:13:53.658460 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:13:53.658471 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:13:53.658481 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:13:53.658492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:13:53.658503 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:13:53.658514 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:13:53.658524 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:13:53.658534 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:13:53.658545 kernel: fuse: init (API version 7.39)
Jan 30 13:13:53.658556 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:13:53.658566 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:13:53.658576 kernel: loop: module loaded
Jan 30 13:13:53.658585 kernel: ACPI: bus type drm_connector registered
Jan 30 13:13:53.658595 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:13:53.658606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:13:53.658617 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:13:53.658627 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:13:53.658639 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:13:53.658651 systemd[1]: Stopped verity-setup.service.
Jan 30 13:13:53.658661 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:13:53.658690 systemd-journald[1113]: Collecting audit messages is disabled.
Jan 30 13:13:53.658713 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:13:53.658728 systemd-journald[1113]: Journal started
Jan 30 13:13:53.658756 systemd-journald[1113]: Runtime Journal (/run/log/journal/de14eaed69d84deab9b8a07b9c28a1c0) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:13:53.430797 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:13:53.453539 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:13:53.454052 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:13:53.661520 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:13:53.662236 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:13:53.663162 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:13:53.664136 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:13:53.665133 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:13:53.666221 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:13:53.667465 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:13:53.667654 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:13:53.668988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:13:53.669139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:13:53.670479 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:13:53.670640 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:13:53.671991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:13:53.672124 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:13:53.675330 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:13:53.676564 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:13:53.676712 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:13:53.678151 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:13:53.678303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:13:53.679508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:13:53.680693 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:13:53.682189 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:13:53.695064 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:13:53.700967 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:13:53.702937 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:13:53.703816 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:13:53.703866 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:13:53.705736 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:13:53.707951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:13:53.710069 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:13:53.711028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:13:53.712764 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:13:53.714799 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:13:53.715704 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:13:53.719066 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:13:53.720114 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:13:53.723475 systemd-journald[1113]: Time spent on flushing to /var/log/journal/de14eaed69d84deab9b8a07b9c28a1c0 is 22.903ms for 840 entries.
Jan 30 13:13:53.723475 systemd-journald[1113]: System Journal (/var/log/journal/de14eaed69d84deab9b8a07b9c28a1c0) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:13:53.843306 systemd-journald[1113]: Received client request to flush runtime journal.
Jan 30 13:13:53.843406 kernel: loop0: detected capacity change from 0 to 113552
Jan 30 13:13:53.843426 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:13:53.843446 kernel: loop1: detected capacity change from 0 to 201592
Jan 30 13:13:53.724127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:13:53.729093 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:13:53.731583 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:13:53.736355 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:13:53.737585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:13:53.738741 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:13:53.740119 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:13:53.745048 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:13:53.747851 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:13:53.749128 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:13:53.751364 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:13:53.771118 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:13:53.785754 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:13:53.802945 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 30 13:13:53.802956 systemd-tmpfiles[1161]: ACLs are not supported, ignoring.
Jan 30 13:13:53.807151 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:13:53.823031 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:13:53.848972 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:13:53.852731 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:13:53.853464 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:13:53.855282 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:13:53.859861 kernel: loop2: detected capacity change from 0 to 116784
Jan 30 13:13:53.864035 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:13:53.882435 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 30 13:13:53.882457 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Jan 30 13:13:53.887889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:13:53.902025 kernel: loop3: detected capacity change from 0 to 113552
Jan 30 13:13:53.906924 kernel: loop4: detected capacity change from 0 to 201592
Jan 30 13:13:53.914171 kernel: loop5: detected capacity change from 0 to 116784
Jan 30 13:13:53.917708 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:13:53.918204 (sd-merge)[1189]: Merged extensions into '/usr'.
Jan 30 13:13:53.923519 systemd[1]: Reloading requested from client PID 1160 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:13:53.923536 systemd[1]: Reloading...
Jan 30 13:13:53.984877 zram_generator::config[1215]: No configuration found.
Jan 30 13:13:54.055427 ldconfig[1155]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:13:54.093912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:13:54.130176 systemd[1]: Reloading finished in 206 ms.
Jan 30 13:13:54.162779 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:13:54.164176 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:13:54.180081 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:13:54.182030 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:13:54.197546 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:13:54.197569 systemd[1]: Reloading...
Jan 30 13:13:54.208609 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:13:54.209409 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:13:54.210299 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:13:54.210655 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 30 13:13:54.210773 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Jan 30 13:13:54.213705 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:13:54.213852 systemd-tmpfiles[1250]: Skipping /boot
Jan 30 13:13:54.223734 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:13:54.224006 systemd-tmpfiles[1250]: Skipping /boot
Jan 30 13:13:54.249760 zram_generator::config[1280]: No configuration found.
Jan 30 13:13:54.339403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:13:54.376217 systemd[1]: Reloading finished in 178 ms.
Jan 30 13:13:54.391884 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:13:54.404292 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:13:54.416151 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:13:54.418771 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:13:54.419866 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:13:54.421326 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:13:54.428996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:13:54.431599 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:13:54.432647 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:13:54.435426 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:13:54.440756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:13:54.454658 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:13:54.459127 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:13:54.462676 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:13:54.463245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:13:54.464946 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:13:54.467194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:13:54.468746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:13:54.468924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:13:54.470421 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:13:54.479696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:13:54.494222 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:13:54.496532 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jan 30 13:13:54.496686 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:13:54.499003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:13:54.499841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:13:54.505099 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:13:54.510945 augenrules[1349]: No rules
Jan 30 13:13:54.516199 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:13:54.518520 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:13:54.518771 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:13:54.521865 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:13:54.523497 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:13:54.523658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:13:54.526601 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:13:54.528391 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:13:54.528561 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:13:54.530302 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:13:54.531873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:13:54.533592 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:13:54.539891 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:13:54.552867 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:13:54.557452 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:13:54.570333 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 30 13:13:54.571324 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:13:54.573148 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:13:54.587437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:13:54.593345 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:13:54.597060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:13:54.599085 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:13:54.599844 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1366)
Jan 30 13:13:54.603086 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:13:54.608568 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:13:54.612051 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:13:54.612654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:13:54.612809 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:13:54.614143 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:13:54.614298 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:13:54.616267 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:13:54.616422 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:13:54.623339 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 30 13:13:54.629373 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:13:54.642335 augenrules[1386]: /sbin/augenrules: No change
Jan 30 13:13:54.643393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:13:54.645892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:13:54.649060 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:13:54.658542 augenrules[1419]: No rules
Jan 30 13:13:54.661222 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 30 13:13:54.662858 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 30 13:13:54.666191 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:13:54.675098 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:13:54.677686 systemd-resolved[1325]: Positive Trust Anchors:
Jan 30 13:13:54.678000 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:13:54.678099 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:13:54.701806 systemd-resolved[1325]: Defaulting to hostname 'linux'.
Jan 30 13:13:54.704699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:13:54.706277 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:13:54.707598 systemd-networkd[1402]: lo: Link UP
Jan 30 13:13:54.707609 systemd-networkd[1402]: lo: Gained carrier
Jan 30 13:13:54.707671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:13:54.708593 systemd-networkd[1402]: Enumeration completed
Jan 30 13:13:54.709150 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:54.709159 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:13:54.709331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:13:54.709914 systemd-networkd[1402]: eth0: Link UP
Jan 30 13:13:54.709924 systemd-networkd[1402]: eth0: Gained carrier
Jan 30 13:13:54.709939 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:13:54.711006 systemd[1]: Reached target network.target - Network.
Jan 30 13:13:54.720097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:13:54.725505 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:13:54.726601 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:13:54.729084 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.148/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:13:54.732458 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Jan 30 13:13:55.195996 systemd-resolved[1325]: Clock change detected. Flushing caches.
Jan 30 13:13:55.196068 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:13:55.196133 systemd-timesyncd[1403]: Initial clock synchronization to Thu 2025-01-30 13:13:55.195940 UTC.
Jan 30 13:13:55.206158 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:13:55.215167 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:13:55.219903 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:13:55.248104 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:13:55.262884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:13:55.270079 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:13:55.271776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:13:55.274004 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:13:55.274972 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:13:55.275937 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:13:55.277099 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:13:55.278021 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:13:55.278944 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:13:55.279800 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:13:55.279831 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:13:55.280485 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:13:55.282217 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:13:55.284676 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:13:55.295865 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:13:55.298075 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:13:55.304078 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:13:55.305145 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:13:55.305912 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:13:55.306636 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:13:55.306670 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:13:55.306895 lvm[1445]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:13:55.307792 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:13:55.309878 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:13:55.312616 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:13:55.314875 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:13:55.315652 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:13:55.316810 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:13:55.319078 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:13:55.325800 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:13:55.331586 jq[1448]: false Jan 30 13:13:55.337159 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:13:55.347146 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:13:55.347673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 30 13:13:55.351065 extend-filesystems[1449]: Found loop3 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found loop4 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found loop5 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda1 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda2 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda3 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found usr Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda4 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda6 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda7 Jan 30 13:13:55.351065 extend-filesystems[1449]: Found vda9 Jan 30 13:13:55.351065 extend-filesystems[1449]: Checking size of /dev/vda9 Jan 30 13:13:55.350008 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:13:55.352133 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:13:55.357039 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:13:55.362423 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:13:55.371701 jq[1465]: true Jan 30 13:13:55.362590 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:13:55.362845 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:13:55.363022 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:13:55.364104 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:13:55.364246 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:13:55.374671 extend-filesystems[1449]: Resized partition /dev/vda9 Jan 30 13:13:55.377040 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:13:55.381974 (ntainerd)[1474]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:13:55.383955 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:13:55.386322 dbus-daemon[1447]: [system] SELinux support is enabled Jan 30 13:13:55.386576 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:13:55.392343 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:13:55.392382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:13:55.393435 update_engine[1463]: I20250130 13:13:55.393268 1463 main.cc:92] Flatcar Update Engine starting Jan 30 13:13:55.394506 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:13:55.394533 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:13:55.397907 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:13:55.398567 update_engine[1463]: I20250130 13:13:55.397699 1463 update_check_scheduler.cc:74] Next update check in 5m13s Jan 30 13:13:55.407175 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1374) Jan 30 13:13:55.413002 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 30 13:13:55.426897 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:13:55.445902 jq[1468]: true Jan 30 13:13:55.454116 extend-filesystems[1475]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:13:55.454116 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:13:55.454116 extend-filesystems[1475]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:13:55.467646 extend-filesystems[1449]: Resized filesystem in /dev/vda9 Jan 30 13:13:55.454872 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:13:55.455087 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:13:55.458498 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:13:55.458821 systemd-logind[1456]: New seat seat0. Jan 30 13:13:55.461083 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:13:55.501760 bash[1503]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:13:55.504913 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:13:55.506577 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:13:55.512358 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:13:55.603233 containerd[1474]: time="2025-01-30T13:13:55.603090314Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:13:55.627675 containerd[1474]: time="2025-01-30T13:13:55.627590874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629199 containerd[1474]: time="2025-01-30T13:13:55.629151874Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629199 containerd[1474]: time="2025-01-30T13:13:55.629193874Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:13:55.629255 containerd[1474]: time="2025-01-30T13:13:55.629213114Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:13:55.629406 containerd[1474]: time="2025-01-30T13:13:55.629377714Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:13:55.629406 containerd[1474]: time="2025-01-30T13:13:55.629400994Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629476 containerd[1474]: time="2025-01-30T13:13:55.629459914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629497 containerd[1474]: time="2025-01-30T13:13:55.629475914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629676 containerd[1474]: time="2025-01-30T13:13:55.629647114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629676 containerd[1474]: time="2025-01-30T13:13:55.629669394Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629720 containerd[1474]: time="2025-01-30T13:13:55.629683594Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629720 containerd[1474]: time="2025-01-30T13:13:55.629693354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.629778 containerd[1474]: time="2025-01-30T13:13:55.629763194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.630026 containerd[1474]: time="2025-01-30T13:13:55.629996754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:13:55.630122 containerd[1474]: time="2025-01-30T13:13:55.630105714Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:13:55.630150 containerd[1474]: time="2025-01-30T13:13:55.630123274Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:13:55.630221 containerd[1474]: time="2025-01-30T13:13:55.630207034Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:13:55.630268 containerd[1474]: time="2025-01-30T13:13:55.630256314Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:13:55.634942 containerd[1474]: time="2025-01-30T13:13:55.634903714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:13:55.635029 containerd[1474]: time="2025-01-30T13:13:55.634970514Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jan 30 13:13:55.635029 containerd[1474]: time="2025-01-30T13:13:55.634988474Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:13:55.635029 containerd[1474]: time="2025-01-30T13:13:55.635006234Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:13:55.635029 containerd[1474]: time="2025-01-30T13:13:55.635022434Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:13:55.635205 containerd[1474]: time="2025-01-30T13:13:55.635184474Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:13:55.635462 containerd[1474]: time="2025-01-30T13:13:55.635441994Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:13:55.635569 containerd[1474]: time="2025-01-30T13:13:55.635551834Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:13:55.635866 containerd[1474]: time="2025-01-30T13:13:55.635758754Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:13:55.635866 containerd[1474]: time="2025-01-30T13:13:55.635795994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:13:55.635926 containerd[1474]: time="2025-01-30T13:13:55.635872674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.635946 containerd[1474]: time="2025-01-30T13:13:55.635924314Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jan 30 13:13:55.635963 containerd[1474]: time="2025-01-30T13:13:55.635945834Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.635998 containerd[1474]: time="2025-01-30T13:13:55.635967074Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.635998 containerd[1474]: time="2025-01-30T13:13:55.635987114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.636034 containerd[1474]: time="2025-01-30T13:13:55.636006794Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.636034 containerd[1474]: time="2025-01-30T13:13:55.636023994Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.636066 containerd[1474]: time="2025-01-30T13:13:55.636037194Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:13:55.636083 containerd[1474]: time="2025-01-30T13:13:55.636069514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636103 containerd[1474]: time="2025-01-30T13:13:55.636088634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636123 containerd[1474]: time="2025-01-30T13:13:55.636103514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636147 containerd[1474]: time="2025-01-30T13:13:55.636127194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 13:13:55.636169 containerd[1474]: time="2025-01-30T13:13:55.636144314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636169 containerd[1474]: time="2025-01-30T13:13:55.636163394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636203 containerd[1474]: time="2025-01-30T13:13:55.636179994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636203 containerd[1474]: time="2025-01-30T13:13:55.636197274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636236 containerd[1474]: time="2025-01-30T13:13:55.636215554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636254 containerd[1474]: time="2025-01-30T13:13:55.636239434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636272 containerd[1474]: time="2025-01-30T13:13:55.636257914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636292 containerd[1474]: time="2025-01-30T13:13:55.636274354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636309 containerd[1474]: time="2025-01-30T13:13:55.636290754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636327 containerd[1474]: time="2025-01-30T13:13:55.636307114Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:13:55.636620 containerd[1474]: time="2025-01-30T13:13:55.636342234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 30 13:13:55.636620 containerd[1474]: time="2025-01-30T13:13:55.636369794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636620 containerd[1474]: time="2025-01-30T13:13:55.636386274Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:13:55.636620 containerd[1474]: time="2025-01-30T13:13:55.636578794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:13:55.636745 containerd[1474]: time="2025-01-30T13:13:55.636717114Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:13:55.636778 containerd[1474]: time="2025-01-30T13:13:55.636740954Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:13:55.636778 containerd[1474]: time="2025-01-30T13:13:55.636760234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:13:55.636778 containerd[1474]: time="2025-01-30T13:13:55.636773714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:13:55.636827 containerd[1474]: time="2025-01-30T13:13:55.636791314Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:13:55.636827 containerd[1474]: time="2025-01-30T13:13:55.636805954Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:13:55.636827 containerd[1474]: time="2025-01-30T13:13:55.636818314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:13:55.637265 containerd[1474]: time="2025-01-30T13:13:55.637214154Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:13:55.637373 containerd[1474]: time="2025-01-30T13:13:55.637277274Z" level=info msg="Connect containerd service" Jan 30 13:13:55.637373 containerd[1474]: time="2025-01-30T13:13:55.637324034Z" level=info msg="using legacy CRI server" Jan 30 13:13:55.637373 containerd[1474]: time="2025-01-30T13:13:55.637332594Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:13:55.637601 containerd[1474]: time="2025-01-30T13:13:55.637582954Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:13:55.638915 containerd[1474]: time="2025-01-30T13:13:55.638875394Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:13:55.639129 containerd[1474]: time="2025-01-30T13:13:55.639100714Z" level=info msg="Start subscribing containerd event" Jan 30 13:13:55.639162 containerd[1474]: time="2025-01-30T13:13:55.639151834Z" level=info msg="Start recovering state" Jan 30 13:13:55.639345 containerd[1474]: time="2025-01-30T13:13:55.639219314Z" level=info msg="Start event monitor" Jan 30 13:13:55.639345 containerd[1474]: time="2025-01-30T13:13:55.639240874Z" level=info msg="Start 
snapshots syncer" Jan 30 13:13:55.639345 containerd[1474]: time="2025-01-30T13:13:55.639252514Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:13:55.639345 containerd[1474]: time="2025-01-30T13:13:55.639260114Z" level=info msg="Start streaming server" Jan 30 13:13:55.640167 containerd[1474]: time="2025-01-30T13:13:55.640121554Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:13:55.640201 containerd[1474]: time="2025-01-30T13:13:55.640175954Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:13:55.640326 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:13:55.641858 containerd[1474]: time="2025-01-30T13:13:55.641809634Z" level=info msg="containerd successfully booted in 0.039594s" Jan 30 13:13:56.449584 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:13:56.469181 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:13:56.477145 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:13:56.483280 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:13:56.484971 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:13:56.488192 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:13:56.503434 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:13:56.516272 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:13:56.518402 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:13:56.519476 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:13:56.525014 systemd-networkd[1402]: eth0: Gained IPv6LL Jan 30 13:13:56.528324 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:13:56.529879 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 30 13:13:56.532160 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:13:56.534505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:13:56.536561 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:13:56.555103 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:13:56.555289 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:13:56.556697 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:13:56.561330 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:13:57.124615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:13:57.126033 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:13:57.129361 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:13:57.132166 systemd[1]: Startup finished in 598ms (kernel) + 4.238s (initrd) + 3.737s (userspace) = 8.574s. Jan 30 13:13:57.146049 agetty[1528]: failed to open credentials directory Jan 30 13:13:57.146050 agetty[1529]: failed to open credentials directory Jan 30 13:13:57.590973 kubelet[1552]: E0130 13:13:57.589928 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:13:57.593456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:13:57.593597 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:14:01.666816 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Jan 30 13:14:01.668244 systemd[1]: Started sshd@0-10.0.0.148:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). Jan 30 13:14:01.726013 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:01.728024 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:01.736006 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:14:01.746148 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:14:01.747761 systemd-logind[1456]: New session 1 of user core. Jan 30 13:14:01.755959 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:14:01.760168 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:14:01.765273 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:14:01.845490 systemd[1569]: Queued start job for default target default.target. Jan 30 13:14:01.859958 systemd[1569]: Created slice app.slice - User Application Slice. Jan 30 13:14:01.860010 systemd[1569]: Reached target paths.target - Paths. Jan 30 13:14:01.860022 systemd[1569]: Reached target timers.target - Timers. Jan 30 13:14:01.861378 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:14:01.875241 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:14:01.875355 systemd[1569]: Reached target sockets.target - Sockets. Jan 30 13:14:01.875368 systemd[1569]: Reached target basic.target - Basic System. Jan 30 13:14:01.875406 systemd[1569]: Reached target default.target - Main User Target. Jan 30 13:14:01.875433 systemd[1569]: Startup finished in 104ms. Jan 30 13:14:01.875764 systemd[1]: Started user@500.service - User Manager for UID 500. 
Jan 30 13:14:01.877681 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:14:01.941490 systemd[1]: Started sshd@1-10.0.0.148:22-10.0.0.1:57888.service - OpenSSH per-connection server daemon (10.0.0.1:57888). Jan 30 13:14:01.997242 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 57888 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:01.998519 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.003764 systemd-logind[1456]: New session 2 of user core. Jan 30 13:14:02.017087 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:14:02.069495 sshd[1582]: Connection closed by 10.0.0.1 port 57888 Jan 30 13:14:02.070102 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:02.081379 systemd[1]: sshd@1-10.0.0.148:22-10.0.0.1:57888.service: Deactivated successfully. Jan 30 13:14:02.086221 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:14:02.088213 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:14:02.089798 systemd-logind[1456]: Removed session 2. Jan 30 13:14:02.101582 systemd[1]: Started sshd@2-10.0.0.148:22-10.0.0.1:57896.service - OpenSSH per-connection server daemon (10.0.0.1:57896). Jan 30 13:14:02.142736 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 57896 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:02.144278 sshd-session[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.148020 systemd-logind[1456]: New session 3 of user core. Jan 30 13:14:02.157088 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jan 30 13:14:02.205929 sshd[1589]: Connection closed by 10.0.0.1 port 57896 Jan 30 13:14:02.206182 sshd-session[1587]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:02.218404 systemd[1]: sshd@2-10.0.0.148:22-10.0.0.1:57896.service: Deactivated successfully. Jan 30 13:14:02.220227 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:14:02.223057 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:14:02.224705 systemd[1]: Started sshd@3-10.0.0.148:22-10.0.0.1:57912.service - OpenSSH per-connection server daemon (10.0.0.1:57912). Jan 30 13:14:02.225425 systemd-logind[1456]: Removed session 3. Jan 30 13:14:02.272595 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 57912 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:02.273832 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.277925 systemd-logind[1456]: New session 4 of user core. Jan 30 13:14:02.286119 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:14:02.339038 sshd[1596]: Connection closed by 10.0.0.1 port 57912 Jan 30 13:14:02.339505 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:02.347160 systemd[1]: sshd@3-10.0.0.148:22-10.0.0.1:57912.service: Deactivated successfully. Jan 30 13:14:02.348559 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:14:02.351897 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:14:02.371267 systemd[1]: Started sshd@4-10.0.0.148:22-10.0.0.1:57924.service - OpenSSH per-connection server daemon (10.0.0.1:57924). Jan 30 13:14:02.372145 systemd-logind[1456]: Removed session 4. 
Jan 30 13:14:02.406969 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 57924 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:02.408239 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.414205 systemd-logind[1456]: New session 5 of user core. Jan 30 13:14:02.423017 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:14:02.483442 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:14:02.483709 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:14:02.498869 sudo[1604]: pam_unix(sudo:session): session closed for user root Jan 30 13:14:02.500971 sshd[1603]: Connection closed by 10.0.0.1 port 57924 Jan 30 13:14:02.500804 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:02.512194 systemd[1]: sshd@4-10.0.0.148:22-10.0.0.1:57924.service: Deactivated successfully. Jan 30 13:14:02.513551 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:14:02.516191 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:14:02.525358 systemd[1]: Started sshd@5-10.0.0.148:22-10.0.0.1:51048.service - OpenSSH per-connection server daemon (10.0.0.1:51048). Jan 30 13:14:02.526805 systemd-logind[1456]: Removed session 5. Jan 30 13:14:02.569432 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 51048 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:02.570056 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.573454 systemd-logind[1456]: New session 6 of user core. Jan 30 13:14:02.585003 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 30 13:14:02.636225 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:14:02.636502 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:14:02.639663 sudo[1613]: pam_unix(sudo:session): session closed for user root Jan 30 13:14:02.644590 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:14:02.644880 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:14:02.670229 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:14:02.699483 augenrules[1635]: No rules Jan 30 13:14:02.700728 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:14:02.700973 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:14:02.701907 sudo[1612]: pam_unix(sudo:session): session closed for user root Jan 30 13:14:02.703124 sshd[1611]: Connection closed by 10.0.0.1 port 51048 Jan 30 13:14:02.703489 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:02.714420 systemd[1]: sshd@5-10.0.0.148:22-10.0.0.1:51048.service: Deactivated successfully. Jan 30 13:14:02.715746 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:14:02.716961 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:14:02.718100 systemd[1]: Started sshd@6-10.0.0.148:22-10.0.0.1:51050.service - OpenSSH per-connection server daemon (10.0.0.1:51050). Jan 30 13:14:02.720237 systemd-logind[1456]: Removed session 6. Jan 30 13:14:02.758431 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 51050 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:14:02.759618 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:14:02.763319 systemd-logind[1456]: New session 7 of user core. 
Jan 30 13:14:02.782070 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:14:02.834594 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:14:02.835219 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:14:02.860213 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:14:02.877050 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:14:02.877247 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:14:03.335605 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:14:03.344120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:14:03.364947 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit session-7.scope)... Jan 30 13:14:03.364964 systemd[1]: Reloading... Jan 30 13:14:03.443989 zram_generator::config[1725]: No configuration found. Jan 30 13:14:03.717795 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:14:03.776774 systemd[1]: Reloading finished in 411 ms. Jan 30 13:14:03.826361 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:14:03.830033 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:14:03.830251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:14:03.832534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:14:03.936983 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:14:03.941974 (kubelet)[1772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:14:03.984335 kubelet[1772]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:14:03.984335 kubelet[1772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:14:03.984335 kubelet[1772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:14:03.984335 kubelet[1772]: I0130 13:14:03.984312 1772 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:14:04.587114 kubelet[1772]: I0130 13:14:04.586395 1772 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:14:04.587114 kubelet[1772]: I0130 13:14:04.586430 1772 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:14:04.587114 kubelet[1772]: I0130 13:14:04.586699 1772 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:14:04.646938 kubelet[1772]: I0130 13:14:04.646905 1772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:14:04.652588 kubelet[1772]: E0130 13:14:04.652522 1772 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:14:04.652588 kubelet[1772]: I0130 13:14:04.652568 1772 server.go:1421] "CRI 
implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:14:04.657589 kubelet[1772]: I0130 13:14:04.657549 1772 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:14:04.657860 kubelet[1772]: I0130 13:14:04.657806 1772 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:14:04.658034 kubelet[1772]: I0130 13:14:04.657837 1772 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.148","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemo
ryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:14:04.658139 kubelet[1772]: I0130 13:14:04.658106 1772 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:14:04.658139 kubelet[1772]: I0130 13:14:04.658116 1772 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:14:04.658336 kubelet[1772]: I0130 13:14:04.658306 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:14:04.660983 kubelet[1772]: I0130 13:14:04.660926 1772 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:14:04.660983 kubelet[1772]: I0130 13:14:04.660957 1772 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:14:04.660983 kubelet[1772]: I0130 13:14:04.660983 1772 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:14:04.661065 kubelet[1772]: I0130 13:14:04.660993 1772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:14:04.662448 kubelet[1772]: E0130 13:14:04.662400 1772 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:04.662494 kubelet[1772]: E0130 13:14:04.662466 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:04.665776 kubelet[1772]: I0130 13:14:04.665737 1772 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:14:04.669112 kubelet[1772]: I0130 13:14:04.669014 1772 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:14:04.669425 kubelet[1772]: W0130 13:14:04.669408 1772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:14:04.670368 kubelet[1772]: I0130 13:14:04.670335 1772 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:14:04.670425 kubelet[1772]: I0130 13:14:04.670376 1772 server.go:1287] "Started kubelet" Jan 30 13:14:04.670680 kubelet[1772]: I0130 13:14:04.670629 1772 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:14:04.671549 kubelet[1772]: I0130 13:14:04.671512 1772 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:14:04.672616 kubelet[1772]: I0130 13:14:04.672541 1772 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:14:04.672863 kubelet[1772]: I0130 13:14:04.672828 1772 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:14:04.673834 kubelet[1772]: I0130 13:14:04.673798 1772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:14:04.674025 kubelet[1772]: I0130 13:14:04.673993 1772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:14:04.674899 kubelet[1772]: E0130 13:14:04.674838 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:04.674899 kubelet[1772]: I0130 13:14:04.674887 1772 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:14:04.675174 kubelet[1772]: I0130 13:14:04.675052 1772 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:14:04.675174 kubelet[1772]: I0130 13:14:04.675120 1772 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:14:04.678256 kubelet[1772]: I0130 13:14:04.676474 1772 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:14:04.678256 kubelet[1772]: I0130 13:14:04.676657 1772 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:14:04.678256 kubelet[1772]: E0130 13:14:04.677078 1772 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:14:04.678256 kubelet[1772]: I0130 13:14:04.677992 1772 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:14:04.685105 kubelet[1772]: W0130 13:14:04.684003 1772 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.148" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:14:04.685105 kubelet[1772]: E0130 13:14:04.684099 1772 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.148\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:14:04.685105 kubelet[1772]: E0130 13:14:04.684669 1772 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.148.181f7aa859f0dd22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.148,UID:10.0.0.148,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.148,},FirstTimestamp:2025-01-30 13:14:04.670352674 +0000 UTC m=+0.725352481,LastTimestamp:2025-01-30 13:14:04.670352674 +0000 UTC m=+0.725352481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.148,}" Jan 30 
13:14:04.685337 kubelet[1772]: W0130 13:14:04.685318 1772 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:14:04.685421 kubelet[1772]: E0130 13:14:04.685404 1772 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 30 13:14:04.685568 kubelet[1772]: W0130 13:14:04.685555 1772 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:14:04.685649 kubelet[1772]: E0130 13:14:04.685632 1772 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Jan 30 13:14:04.686052 kubelet[1772]: E0130 13:14:04.685953 1772 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.148.181f7aa85a5592b2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.148,UID:10.0.0.148,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.148,},FirstTimestamp:2025-01-30 13:14:04.676952754 +0000 UTC 
m=+0.731952561,LastTimestamp:2025-01-30 13:14:04.676952754 +0000 UTC m=+0.731952561,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.148,}" Jan 30 13:14:04.686706 kubelet[1772]: E0130 13:14:04.686674 1772 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.148\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 13:14:04.687571 kubelet[1772]: I0130 13:14:04.687549 1772 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:14:04.687571 kubelet[1772]: I0130 13:14:04.687567 1772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:14:04.687648 kubelet[1772]: I0130 13:14:04.687586 1772 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:14:04.695365 kubelet[1772]: E0130 13:14:04.694400 1772 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.148.181f7aa85ae9ddfa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.148,UID:10.0.0.148,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.148 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.148,},FirstTimestamp:2025-01-30 13:14:04.686671354 +0000 UTC m=+0.741671161,LastTimestamp:2025-01-30 13:14:04.686671354 +0000 UTC m=+0.741671161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.148,}" Jan 30 13:14:04.695965 kubelet[1772]: E0130 13:14:04.695757 1772 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.148.181f7aa85ae9f51a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.148,UID:10.0.0.148,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.148 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.148,},FirstTimestamp:2025-01-30 13:14:04.686677274 +0000 UTC m=+0.741677081,LastTimestamp:2025-01-30 13:14:04.686677274 +0000 UTC m=+0.741677081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.148,}" Jan 30 13:14:04.762367 kubelet[1772]: I0130 13:14:04.762199 1772 policy_none.go:49] "None policy: Start" Jan 30 13:14:04.762367 kubelet[1772]: I0130 13:14:04.762265 1772 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:14:04.762367 kubelet[1772]: I0130 13:14:04.762280 1772 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:14:04.770798 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:14:04.781642 kubelet[1772]: E0130 13:14:04.775399 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:04.786125 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:14:04.788807 kubelet[1772]: I0130 13:14:04.788754 1772 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:14:04.790811 kubelet[1772]: I0130 13:14:04.790037 1772 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:14:04.790811 kubelet[1772]: I0130 13:14:04.790064 1772 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:14:04.790811 kubelet[1772]: I0130 13:14:04.790085 1772 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:14:04.790811 kubelet[1772]: I0130 13:14:04.790093 1772 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:14:04.790811 kubelet[1772]: E0130 13:14:04.790197 1772 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:14:04.794517 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:14:04.795793 kubelet[1772]: I0130 13:14:04.795771 1772 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:14:04.796056 kubelet[1772]: I0130 13:14:04.796036 1772 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:14:04.796122 kubelet[1772]: I0130 13:14:04.796054 1772 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:14:04.797090 kubelet[1772]: I0130 13:14:04.796900 1772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:14:04.799355 kubelet[1772]: E0130 13:14:04.797767 1772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:14:04.799355 kubelet[1772]: E0130 13:14:04.797816 1772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.148\" not found" Jan 30 13:14:04.891991 kubelet[1772]: E0130 13:14:04.891875 1772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.148\" not found" node="10.0.0.148" Jan 30 13:14:04.897711 kubelet[1772]: I0130 13:14:04.897685 1772 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.148" Jan 30 13:14:04.903181 kubelet[1772]: I0130 13:14:04.903146 1772 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.148" Jan 30 13:14:04.903366 kubelet[1772]: E0130 13:14:04.903349 1772 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.148\": node \"10.0.0.148\" not found" Jan 30 13:14:04.910024 kubelet[1772]: E0130 13:14:04.909993 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:04.973629 sudo[1646]: pam_unix(sudo:session): session closed for user root Jan 30 13:14:04.974900 sshd[1645]: Connection closed by 10.0.0.1 port 51050 Jan 30 13:14:04.975368 sshd-session[1643]: pam_unix(sshd:session): session closed for user core Jan 30 13:14:04.979090 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:14:04.979718 systemd[1]: sshd@6-10.0.0.148:22-10.0.0.1:51050.service: Deactivated successfully. Jan 30 13:14:04.982012 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:14:04.983043 systemd-logind[1456]: Removed session 7. 
Jan 30 13:14:05.010433 kubelet[1772]: E0130 13:14:05.010384 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.110923 kubelet[1772]: E0130 13:14:05.110879 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.212089 kubelet[1772]: E0130 13:14:05.211977 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.312614 kubelet[1772]: E0130 13:14:05.312571 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.413169 kubelet[1772]: E0130 13:14:05.413125 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.513981 kubelet[1772]: E0130 13:14:05.513837 1772 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.148\" not found" Jan 30 13:14:05.593177 kubelet[1772]: I0130 13:14:05.593125 1772 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:14:05.593353 kubelet[1772]: W0130 13:14:05.593307 1772 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:14:05.593353 kubelet[1772]: W0130 13:14:05.593336 1772 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:14:05.615265 kubelet[1772]: I0130 13:14:05.615232 1772 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 
13:14:05.615653 containerd[1474]: time="2025-01-30T13:14:05.615620834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:14:05.616314 kubelet[1772]: I0130 13:14:05.616143 1772 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:14:05.663283 kubelet[1772]: E0130 13:14:05.663233 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:05.665444 kubelet[1772]: I0130 13:14:05.665393 1772 apiserver.go:52] "Watching apiserver" Jan 30 13:14:05.690011 systemd[1]: Created slice kubepods-besteffort-podfdd1be05_d17d_4fd4_8f82_7692d6fca59a.slice - libcontainer container kubepods-besteffort-podfdd1be05_d17d_4fd4_8f82_7692d6fca59a.slice. Jan 30 13:14:05.706778 systemd[1]: Created slice kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice - libcontainer container kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice. 
Jan 30 13:14:05.777705 kubelet[1772]: I0130 13:14:05.777565 1772 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:14:05.781895 kubelet[1772]: I0130 13:14:05.781786 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-run\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.781895 kubelet[1772]: I0130 13:14:05.781833 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-cgroup\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.781895 kubelet[1772]: I0130 13:14:05.781883 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-config-path\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.781895 kubelet[1772]: I0130 13:14:05.781904 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fdd1be05-d17d-4fd4-8f82-7692d6fca59a-kube-proxy\") pod \"kube-proxy-2vmj9\" (UID: \"fdd1be05-d17d-4fd4-8f82-7692d6fca59a\") " pod="kube-system/kube-proxy-2vmj9" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.781945 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n77vv\" (UniqueName: \"kubernetes.io/projected/fdd1be05-d17d-4fd4-8f82-7692d6fca59a-kube-api-access-n77vv\") pod \"kube-proxy-2vmj9\" (UID: 
\"fdd1be05-d17d-4fd4-8f82-7692d6fca59a\") " pod="kube-system/kube-proxy-2vmj9" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.781978 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49b334b6-6fd0-4c35-970e-088deffe04f2-clustermesh-secrets\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.781998 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-net\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.782014 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fdd1be05-d17d-4fd4-8f82-7692d6fca59a-lib-modules\") pod \"kube-proxy-2vmj9\" (UID: \"fdd1be05-d17d-4fd4-8f82-7692d6fca59a\") " pod="kube-system/kube-proxy-2vmj9" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.782036 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-hostproc\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782120 kubelet[1772]: I0130 13:14:05.782051 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cni-path\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 
13:14:05.782066 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-etc-cni-netd\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 13:14:05.782082 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-lib-modules\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 13:14:05.782101 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-xtables-lock\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 13:14:05.782159 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-bpf-maps\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 13:14:05.782187 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-kernel\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782300 kubelet[1772]: I0130 13:14:05.782202 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" 
(UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-hubble-tls\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782542 kubelet[1772]: I0130 13:14:05.782218 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6d8\" (UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-kube-api-access-mt6d8\") pod \"cilium-btc96\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " pod="kube-system/cilium-btc96" Jan 30 13:14:05.782542 kubelet[1772]: I0130 13:14:05.782234 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fdd1be05-d17d-4fd4-8f82-7692d6fca59a-xtables-lock\") pod \"kube-proxy-2vmj9\" (UID: \"fdd1be05-d17d-4fd4-8f82-7692d6fca59a\") " pod="kube-system/kube-proxy-2vmj9" Jan 30 13:14:06.008906 kubelet[1772]: E0130 13:14:06.005107 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:06.009341 containerd[1474]: time="2025-01-30T13:14:06.009281114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vmj9,Uid:fdd1be05-d17d-4fd4-8f82-7692d6fca59a,Namespace:kube-system,Attempt:0,}" Jan 30 13:14:06.017181 kubelet[1772]: E0130 13:14:06.017141 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:06.021598 containerd[1474]: time="2025-01-30T13:14:06.021189674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btc96,Uid:49b334b6-6fd0-4c35-970e-088deffe04f2,Namespace:kube-system,Attempt:0,}" Jan 30 13:14:06.504652 containerd[1474]: time="2025-01-30T13:14:06.504603794Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:14:06.505733 containerd[1474]: time="2025-01-30T13:14:06.505690834Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:14:06.506592 containerd[1474]: time="2025-01-30T13:14:06.506556274Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:14:06.507392 containerd[1474]: time="2025-01-30T13:14:06.507330954Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:14:06.508158 containerd[1474]: time="2025-01-30T13:14:06.508127074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:14:06.511056 containerd[1474]: time="2025-01-30T13:14:06.511008234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:14:06.511963 containerd[1474]: time="2025-01-30T13:14:06.511901634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.5296ms" Jan 30 13:14:06.512774 containerd[1474]: time="2025-01-30T13:14:06.512741914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 491.4472ms" Jan 30 13:14:06.628511 containerd[1474]: time="2025-01-30T13:14:06.628257594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:06.628511 containerd[1474]: time="2025-01-30T13:14:06.628320954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:06.628511 containerd[1474]: time="2025-01-30T13:14:06.628336514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:06.628511 containerd[1474]: time="2025-01-30T13:14:06.628414634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:06.630563 containerd[1474]: time="2025-01-30T13:14:06.630440834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:06.630563 containerd[1474]: time="2025-01-30T13:14:06.630504234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:06.630563 containerd[1474]: time="2025-01-30T13:14:06.630515554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:06.630725 containerd[1474]: time="2025-01-30T13:14:06.630594714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:06.667758 kubelet[1772]: E0130 13:14:06.663388 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:06.713038 systemd[1]: Started cri-containerd-2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224.scope - libcontainer container 2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224. Jan 30 13:14:06.716062 systemd[1]: Started cri-containerd-e93b8d78feeee0e65cd8acd6eceeed8712d7c9065e5ffb21901dc218eedd9ad1.scope - libcontainer container e93b8d78feeee0e65cd8acd6eceeed8712d7c9065e5ffb21901dc218eedd9ad1. Jan 30 13:14:06.733933 containerd[1474]: time="2025-01-30T13:14:06.733896394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btc96,Uid:49b334b6-6fd0-4c35-970e-088deffe04f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\"" Jan 30 13:14:06.735251 kubelet[1772]: E0130 13:14:06.735215 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:06.736785 containerd[1474]: time="2025-01-30T13:14:06.736665914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:14:06.746927 containerd[1474]: time="2025-01-30T13:14:06.746886794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vmj9,Uid:fdd1be05-d17d-4fd4-8f82-7692d6fca59a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e93b8d78feeee0e65cd8acd6eceeed8712d7c9065e5ffb21901dc218eedd9ad1\"" Jan 30 13:14:06.747865 kubelet[1772]: E0130 13:14:06.747803 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Jan 30 13:14:06.891919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545530701.mount: Deactivated successfully. Jan 30 13:14:07.663597 kubelet[1772]: E0130 13:14:07.663552 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:08.663893 kubelet[1772]: E0130 13:14:08.663830 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:09.664226 kubelet[1772]: E0130 13:14:09.664148 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:10.664322 kubelet[1772]: E0130 13:14:10.664287 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:11.146780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1351398930.mount: Deactivated successfully. Jan 30 13:14:11.665905 kubelet[1772]: E0130 13:14:11.665806 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:12.371763 containerd[1474]: time="2025-01-30T13:14:12.371708314Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:12.372672 containerd[1474]: time="2025-01-30T13:14:12.372628794Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:14:12.373884 containerd[1474]: time="2025-01-30T13:14:12.373409954Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:12.375704 containerd[1474]: 
time="2025-01-30T13:14:12.375247994Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.63821736s" Jan 30 13:14:12.375704 containerd[1474]: time="2025-01-30T13:14:12.375281914Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:14:12.376527 containerd[1474]: time="2025-01-30T13:14:12.376506154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:14:12.377909 containerd[1474]: time="2025-01-30T13:14:12.377869114Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:14:12.388830 containerd[1474]: time="2025-01-30T13:14:12.388780634Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\"" Jan 30 13:14:12.389611 containerd[1474]: time="2025-01-30T13:14:12.389576994Z" level=info msg="StartContainer for \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\"" Jan 30 13:14:12.415022 systemd[1]: Started cri-containerd-07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76.scope - libcontainer container 07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76. 
Jan 30 13:14:12.443722 containerd[1474]: time="2025-01-30T13:14:12.443672114Z" level=info msg="StartContainer for \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\" returns successfully" Jan 30 13:14:12.488087 systemd[1]: cri-containerd-07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76.scope: Deactivated successfully. Jan 30 13:14:12.630250 containerd[1474]: time="2025-01-30T13:14:12.629978194Z" level=info msg="shim disconnected" id=07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76 namespace=k8s.io Jan 30 13:14:12.630250 containerd[1474]: time="2025-01-30T13:14:12.630028754Z" level=warning msg="cleaning up after shim disconnected" id=07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76 namespace=k8s.io Jan 30 13:14:12.630250 containerd[1474]: time="2025-01-30T13:14:12.630038674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:12.666694 kubelet[1772]: E0130 13:14:12.666644 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:12.809823 kubelet[1772]: E0130 13:14:12.809754 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:12.811903 containerd[1474]: time="2025-01-30T13:14:12.811575234Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:14:12.823971 containerd[1474]: time="2025-01-30T13:14:12.823928034Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\"" Jan 30 13:14:12.824634 containerd[1474]: 
time="2025-01-30T13:14:12.824597314Z" level=info msg="StartContainer for \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\"" Jan 30 13:14:12.852997 systemd[1]: Started cri-containerd-c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b.scope - libcontainer container c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b. Jan 30 13:14:12.876210 containerd[1474]: time="2025-01-30T13:14:12.876077354Z" level=info msg="StartContainer for \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\" returns successfully" Jan 30 13:14:12.896308 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:14:12.897139 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:14:12.897249 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:14:12.904271 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:14:12.904445 systemd[1]: cri-containerd-c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b.scope: Deactivated successfully. Jan 30 13:14:12.925835 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:14:12.929050 containerd[1474]: time="2025-01-30T13:14:12.928974314Z" level=info msg="shim disconnected" id=c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b namespace=k8s.io Jan 30 13:14:12.929050 containerd[1474]: time="2025-01-30T13:14:12.929027314Z" level=warning msg="cleaning up after shim disconnected" id=c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b namespace=k8s.io Jan 30 13:14:12.929050 containerd[1474]: time="2025-01-30T13:14:12.929035754Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:13.386478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:13.512486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340353916.mount: Deactivated successfully. Jan 30 13:14:13.667512 kubelet[1772]: E0130 13:14:13.667380 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:13.739123 containerd[1474]: time="2025-01-30T13:14:13.739066034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:13.739582 containerd[1474]: time="2025-01-30T13:14:13.739533754Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399" Jan 30 13:14:13.741019 containerd[1474]: time="2025-01-30T13:14:13.740980674Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:13.742678 containerd[1474]: time="2025-01-30T13:14:13.742642794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:13.744117 containerd[1474]: time="2025-01-30T13:14:13.744087674Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.3674772s" Jan 30 13:14:13.744117 containerd[1474]: time="2025-01-30T13:14:13.744118394Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\"" Jan 30 13:14:13.746042 containerd[1474]: 
time="2025-01-30T13:14:13.746014354Z" level=info msg="CreateContainer within sandbox \"e93b8d78feeee0e65cd8acd6eceeed8712d7c9065e5ffb21901dc218eedd9ad1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:14:13.760715 containerd[1474]: time="2025-01-30T13:14:13.760666754Z" level=info msg="CreateContainer within sandbox \"e93b8d78feeee0e65cd8acd6eceeed8712d7c9065e5ffb21901dc218eedd9ad1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7\"" Jan 30 13:14:13.761230 containerd[1474]: time="2025-01-30T13:14:13.761211114Z" level=info msg="StartContainer for \"4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7\"" Jan 30 13:14:13.789052 systemd[1]: Started cri-containerd-4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7.scope - libcontainer container 4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7. Jan 30 13:14:13.813117 kubelet[1772]: E0130 13:14:13.813081 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:13.815118 containerd[1474]: time="2025-01-30T13:14:13.815077594Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:14:13.816729 containerd[1474]: time="2025-01-30T13:14:13.816670874Z" level=info msg="StartContainer for \"4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7\" returns successfully" Jan 30 13:14:13.846340 containerd[1474]: time="2025-01-30T13:14:13.846282634Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\"" 
Jan 30 13:14:13.847010 containerd[1474]: time="2025-01-30T13:14:13.846983554Z" level=info msg="StartContainer for \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\"" Jan 30 13:14:13.876029 systemd[1]: Started cri-containerd-e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7.scope - libcontainer container e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7. Jan 30 13:14:13.907957 containerd[1474]: time="2025-01-30T13:14:13.907219954Z" level=info msg="StartContainer for \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\" returns successfully" Jan 30 13:14:13.925205 systemd[1]: cri-containerd-e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7.scope: Deactivated successfully. Jan 30 13:14:14.071074 containerd[1474]: time="2025-01-30T13:14:14.070985874Z" level=info msg="shim disconnected" id=e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7 namespace=k8s.io Jan 30 13:14:14.071074 containerd[1474]: time="2025-01-30T13:14:14.071059954Z" level=warning msg="cleaning up after shim disconnected" id=e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7 namespace=k8s.io Jan 30 13:14:14.071074 containerd[1474]: time="2025-01-30T13:14:14.071068194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:14.385154 systemd[1]: run-containerd-runc-k8s.io-4bdc978af911a5d2d589ce32a77f9c048e6c2afec033fe1d3120e689d4de62f7-runc.j6tnmP.mount: Deactivated successfully. 
Jan 30 13:14:14.667983 kubelet[1772]: E0130 13:14:14.667834 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:14.821843 kubelet[1772]: E0130 13:14:14.821803 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:14.823345 kubelet[1772]: E0130 13:14:14.823111 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:14.823513 containerd[1474]: time="2025-01-30T13:14:14.823477474Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:14:14.839733 containerd[1474]: time="2025-01-30T13:14:14.839685274Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\"" Jan 30 13:14:14.840383 containerd[1474]: time="2025-01-30T13:14:14.840353594Z" level=info msg="StartContainer for \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\"" Jan 30 13:14:14.880060 systemd[1]: Started cri-containerd-e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49.scope - libcontainer container e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49. Jan 30 13:14:14.899644 systemd[1]: cri-containerd-e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49.scope: Deactivated successfully. 
Jan 30 13:14:14.901998 containerd[1474]: time="2025-01-30T13:14:14.901665194Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice/cri-containerd-e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49.scope/memory.events\": no such file or directory" Jan 30 13:14:14.904167 containerd[1474]: time="2025-01-30T13:14:14.904129834Z" level=info msg="StartContainer for \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\" returns successfully" Jan 30 13:14:14.923782 containerd[1474]: time="2025-01-30T13:14:14.923646874Z" level=info msg="shim disconnected" id=e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49 namespace=k8s.io Jan 30 13:14:14.923782 containerd[1474]: time="2025-01-30T13:14:14.923702234Z" level=warning msg="cleaning up after shim disconnected" id=e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49 namespace=k8s.io Jan 30 13:14:14.923782 containerd[1474]: time="2025-01-30T13:14:14.923717674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:15.385283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:15.668487 kubelet[1772]: E0130 13:14:15.668363 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:15.827128 kubelet[1772]: E0130 13:14:15.827099 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:15.827263 kubelet[1772]: E0130 13:14:15.827154 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:15.829033 containerd[1474]: time="2025-01-30T13:14:15.828985154Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:14:15.848103 containerd[1474]: time="2025-01-30T13:14:15.848048834Z" level=info msg="CreateContainer within sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\"" Jan 30 13:14:15.850081 containerd[1474]: time="2025-01-30T13:14:15.850008754Z" level=info msg="StartContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\"" Jan 30 13:14:15.854182 kubelet[1772]: I0130 13:14:15.853741 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vmj9" podStartSLOduration=4.857389714 podStartE2EDuration="11.853723514s" podCreationTimestamp="2025-01-30 13:14:04 +0000 UTC" firstStartedPulling="2025-01-30 13:14:06.748477594 +0000 UTC m=+2.803477401" lastFinishedPulling="2025-01-30 13:14:13.744811434 +0000 UTC m=+9.799811201" observedRunningTime="2025-01-30 13:14:14.861988874 +0000 UTC m=+10.916988681" watchObservedRunningTime="2025-01-30 
13:14:15.853723514 +0000 UTC m=+11.908723281" Jan 30 13:14:15.875039 systemd[1]: Started cri-containerd-b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e.scope - libcontainer container b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e. Jan 30 13:14:15.900509 containerd[1474]: time="2025-01-30T13:14:15.899018914Z" level=info msg="StartContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" returns successfully" Jan 30 13:14:16.016318 kubelet[1772]: I0130 13:14:16.016202 1772 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:14:16.425897 kernel: Initializing XFRM netlink socket Jan 30 13:14:16.670337 kubelet[1772]: E0130 13:14:16.668821 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:16.831963 kubelet[1772]: E0130 13:14:16.831888 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:17.669244 kubelet[1772]: E0130 13:14:17.669195 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:17.833547 kubelet[1772]: E0130 13:14:17.833522 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:18.059131 systemd-networkd[1402]: cilium_host: Link UP Jan 30 13:14:18.059502 systemd-networkd[1402]: cilium_net: Link UP Jan 30 13:14:18.059505 systemd-networkd[1402]: cilium_net: Gained carrier Jan 30 13:14:18.059977 systemd-networkd[1402]: cilium_host: Gained carrier Jan 30 13:14:18.060164 systemd-networkd[1402]: cilium_host: Gained IPv6LL Jan 30 13:14:18.154811 systemd-networkd[1402]: cilium_vxlan: Link UP Jan 30 13:14:18.154820 systemd-networkd[1402]: 
cilium_vxlan: Gained carrier Jan 30 13:14:18.546881 kernel: NET: Registered PF_ALG protocol family Jan 30 13:14:18.670071 kubelet[1772]: E0130 13:14:18.670010 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:18.834996 kubelet[1772]: E0130 13:14:18.834925 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:18.990150 systemd-networkd[1402]: cilium_net: Gained IPv6LL Jan 30 13:14:19.123580 systemd-networkd[1402]: lxc_health: Link UP Jan 30 13:14:19.130639 systemd-networkd[1402]: lxc_health: Gained carrier Jan 30 13:14:19.670494 kubelet[1772]: E0130 13:14:19.670445 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:19.872946 kubelet[1772]: E0130 13:14:19.872840 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:19.895108 kubelet[1772]: I0130 13:14:19.895058 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-btc96" podStartSLOduration=10.254921114 podStartE2EDuration="15.895039314s" podCreationTimestamp="2025-01-30 13:14:04 +0000 UTC" firstStartedPulling="2025-01-30 13:14:06.736256714 +0000 UTC m=+2.791256521" lastFinishedPulling="2025-01-30 13:14:12.376374914 +0000 UTC m=+8.431374721" observedRunningTime="2025-01-30 13:14:16.857516194 +0000 UTC m=+12.912516001" watchObservedRunningTime="2025-01-30 13:14:19.895039314 +0000 UTC m=+15.950039121" Jan 30 13:14:19.949036 systemd-networkd[1402]: cilium_vxlan: Gained IPv6LL Jan 30 13:14:20.332994 systemd-networkd[1402]: lxc_health: Gained IPv6LL Jan 30 13:14:20.670719 kubelet[1772]: E0130 13:14:20.670545 1772 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:21.134283 systemd[1]: Created slice kubepods-besteffort-pod24daa2a3_1301_4388_9f00_3f5e7aaae7fa.slice - libcontainer container kubepods-besteffort-pod24daa2a3_1301_4388_9f00_3f5e7aaae7fa.slice. Jan 30 13:14:21.179015 kubelet[1772]: I0130 13:14:21.178973 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kh6b\" (UniqueName: \"kubernetes.io/projected/24daa2a3-1301-4388-9f00-3f5e7aaae7fa-kube-api-access-2kh6b\") pod \"nginx-deployment-7fcdb87857-tpj9w\" (UID: \"24daa2a3-1301-4388-9f00-3f5e7aaae7fa\") " pod="default/nginx-deployment-7fcdb87857-tpj9w" Jan 30 13:14:21.440751 containerd[1474]: time="2025-01-30T13:14:21.440013754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-tpj9w,Uid:24daa2a3-1301-4388-9f00-3f5e7aaae7fa,Namespace:default,Attempt:0,}" Jan 30 13:14:21.538106 systemd-networkd[1402]: lxc38a4fc9575d8: Link UP Jan 30 13:14:21.546728 kernel: eth0: renamed from tmpdbc56 Jan 30 13:14:21.556136 systemd-networkd[1402]: lxc38a4fc9575d8: Gained carrier Jan 30 13:14:21.671672 kubelet[1772]: E0130 13:14:21.671603 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:22.672210 kubelet[1772]: E0130 13:14:22.672156 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:23.152028 systemd-networkd[1402]: lxc38a4fc9575d8: Gained IPv6LL Jan 30 13:14:23.631207 containerd[1474]: time="2025-01-30T13:14:23.631052074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:23.631207 containerd[1474]: time="2025-01-30T13:14:23.631110914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:23.631207 containerd[1474]: time="2025-01-30T13:14:23.631122154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:23.631207 containerd[1474]: time="2025-01-30T13:14:23.631197634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:23.654081 systemd[1]: Started cri-containerd-dbc567a569cbd2822fdfbc284b9f643c914cd74ef02a40b42477103adefd4af0.scope - libcontainer container dbc567a569cbd2822fdfbc284b9f643c914cd74ef02a40b42477103adefd4af0. Jan 30 13:14:23.662892 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:14:23.672306 kubelet[1772]: E0130 13:14:23.672275 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:23.679487 containerd[1474]: time="2025-01-30T13:14:23.679429594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-tpj9w,Uid:24daa2a3-1301-4388-9f00-3f5e7aaae7fa,Namespace:default,Attempt:0,} returns sandbox id \"dbc567a569cbd2822fdfbc284b9f643c914cd74ef02a40b42477103adefd4af0\"" Jan 30 13:14:23.680996 containerd[1474]: time="2025-01-30T13:14:23.680977434Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:14:24.661892 kubelet[1772]: E0130 13:14:24.661685 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:24.673353 kubelet[1772]: E0130 13:14:24.673305 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:25.308167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount558702969.mount: Deactivated successfully. 
Jan 30 13:14:25.675000 kubelet[1772]: E0130 13:14:25.673571 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:25.965535 containerd[1474]: time="2025-01-30T13:14:25.965408834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:25.966709 containerd[1474]: time="2025-01-30T13:14:25.966654194Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 30 13:14:25.967612 containerd[1474]: time="2025-01-30T13:14:25.967574194Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:25.970327 containerd[1474]: time="2025-01-30T13:14:25.970292554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:25.972079 containerd[1474]: time="2025-01-30T13:14:25.972046194Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.2910402s" Jan 30 13:14:25.972122 containerd[1474]: time="2025-01-30T13:14:25.972079274Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 13:14:25.973787 containerd[1474]: time="2025-01-30T13:14:25.973738554Z" level=info msg="CreateContainer within sandbox \"dbc567a569cbd2822fdfbc284b9f643c914cd74ef02a40b42477103adefd4af0\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:14:25.984228 containerd[1474]: time="2025-01-30T13:14:25.984184874Z" level=info msg="CreateContainer within sandbox \"dbc567a569cbd2822fdfbc284b9f643c914cd74ef02a40b42477103adefd4af0\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"cbbacc8e6b714f2bfad5ae8ef77bd0ef9253bc941b6778ad1b1fb989ef34969e\"" Jan 30 13:14:25.984674 containerd[1474]: time="2025-01-30T13:14:25.984647674Z" level=info msg="StartContainer for \"cbbacc8e6b714f2bfad5ae8ef77bd0ef9253bc941b6778ad1b1fb989ef34969e\"" Jan 30 13:14:26.011039 systemd[1]: Started cri-containerd-cbbacc8e6b714f2bfad5ae8ef77bd0ef9253bc941b6778ad1b1fb989ef34969e.scope - libcontainer container cbbacc8e6b714f2bfad5ae8ef77bd0ef9253bc941b6778ad1b1fb989ef34969e. Jan 30 13:14:26.036290 containerd[1474]: time="2025-01-30T13:14:26.036232354Z" level=info msg="StartContainer for \"cbbacc8e6b714f2bfad5ae8ef77bd0ef9253bc941b6778ad1b1fb989ef34969e\" returns successfully" Jan 30 13:14:26.674099 kubelet[1772]: E0130 13:14:26.674050 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:26.860692 kubelet[1772]: I0130 13:14:26.860520 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-tpj9w" podStartSLOduration=3.568247074 podStartE2EDuration="5.860489034s" podCreationTimestamp="2025-01-30 13:14:21 +0000 UTC" firstStartedPulling="2025-01-30 13:14:23.680398354 +0000 UTC m=+19.735398161" lastFinishedPulling="2025-01-30 13:14:25.972640314 +0000 UTC m=+22.027640121" observedRunningTime="2025-01-30 13:14:26.859673714 +0000 UTC m=+22.914673521" watchObservedRunningTime="2025-01-30 13:14:26.860489034 +0000 UTC m=+22.915488841" Jan 30 13:14:27.674668 kubelet[1772]: E0130 13:14:27.674621 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:28.675346 
kubelet[1772]: E0130 13:14:28.675293 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:29.675502 kubelet[1772]: E0130 13:14:29.675451 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:30.676209 kubelet[1772]: E0130 13:14:30.676153 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:31.677144 kubelet[1772]: E0130 13:14:31.677094 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:32.678272 kubelet[1772]: E0130 13:14:32.678212 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:33.005992 systemd[1]: Created slice kubepods-besteffort-pod83fb9f9e_6d4a_48ad_8a1c_69a7f591a699.slice - libcontainer container kubepods-besteffort-pod83fb9f9e_6d4a_48ad_8a1c_69a7f591a699.slice. 
Jan 30 13:14:33.042320 kubelet[1772]: I0130 13:14:33.042282 1772 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:14:33.042821 kubelet[1772]: E0130 13:14:33.042737 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:33.056627 kubelet[1772]: I0130 13:14:33.056575 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/83fb9f9e-6d4a-48ad-8a1c-69a7f591a699-data\") pod \"nfs-server-provisioner-0\" (UID: \"83fb9f9e-6d4a-48ad-8a1c-69a7f591a699\") " pod="default/nfs-server-provisioner-0" Jan 30 13:14:33.056794 kubelet[1772]: I0130 13:14:33.056669 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2p56\" (UniqueName: \"kubernetes.io/projected/83fb9f9e-6d4a-48ad-8a1c-69a7f591a699-kube-api-access-m2p56\") pod \"nfs-server-provisioner-0\" (UID: \"83fb9f9e-6d4a-48ad-8a1c-69a7f591a699\") " pod="default/nfs-server-provisioner-0" Jan 30 13:14:33.310258 containerd[1474]: time="2025-01-30T13:14:33.309787386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83fb9f9e-6d4a-48ad-8a1c-69a7f591a699,Namespace:default,Attempt:0,}" Jan 30 13:14:33.337472 systemd-networkd[1402]: lxc3c3c76cc4022: Link UP Jan 30 13:14:33.348892 kernel: eth0: renamed from tmpda4d9 Jan 30 13:14:33.365554 systemd-networkd[1402]: lxc3c3c76cc4022: Gained carrier Jan 30 13:14:33.574006 containerd[1474]: time="2025-01-30T13:14:33.573773807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:33.574006 containerd[1474]: time="2025-01-30T13:14:33.573834526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:33.574006 containerd[1474]: time="2025-01-30T13:14:33.573865246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:33.574235 containerd[1474]: time="2025-01-30T13:14:33.573954125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:33.598091 systemd[1]: Started cri-containerd-da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa.scope - libcontainer container da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa. Jan 30 13:14:33.611318 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:14:33.630844 containerd[1474]: time="2025-01-30T13:14:33.630765783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:83fb9f9e-6d4a-48ad-8a1c-69a7f591a699,Namespace:default,Attempt:0,} returns sandbox id \"da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa\"" Jan 30 13:14:33.632515 containerd[1474]: time="2025-01-30T13:14:33.632472162Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:14:33.679398 kubelet[1772]: E0130 13:14:33.679340 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:33.863274 kubelet[1772]: E0130 13:14:33.863166 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:34.169699 systemd[1]: run-containerd-runc-k8s.io-da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa-runc.8eyqIB.mount: Deactivated successfully. 
Jan 30 13:14:34.680093 kubelet[1772]: E0130 13:14:34.679931 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:34.861004 systemd-networkd[1402]: lxc3c3c76cc4022: Gained IPv6LL Jan 30 13:14:35.203204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1623225202.mount: Deactivated successfully. Jan 30 13:14:35.680826 kubelet[1772]: E0130 13:14:35.680782 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:36.622819 containerd[1474]: time="2025-01-30T13:14:36.622734061Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 30 13:14:36.627059 containerd[1474]: time="2025-01-30T13:14:36.626996098Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.994478216s" Jan 30 13:14:36.627059 containerd[1474]: time="2025-01-30T13:14:36.627035977Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 30 13:14:36.631316 containerd[1474]: time="2025-01-30T13:14:36.631279574Z" level=info msg="CreateContainer within sandbox \"da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:14:36.643042 containerd[1474]: time="2025-01-30T13:14:36.642985015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:36.643915 containerd[1474]: time="2025-01-30T13:14:36.643397531Z" level=info msg="CreateContainer within sandbox \"da4d95fcafc4e66e57aac8b62c2e7e8bad8d865f3caaf728cd433260f9dde6aa\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"3d80d74853724c22d077d578ce5425b2b211cf3e40646afaf2dc6a6404a16ed4\"" Jan 30 13:14:36.644089 containerd[1474]: time="2025-01-30T13:14:36.644051804Z" level=info msg="StartContainer for \"3d80d74853724c22d077d578ce5425b2b211cf3e40646afaf2dc6a6404a16ed4\"" Jan 30 13:14:36.644316 containerd[1474]: time="2025-01-30T13:14:36.644280282Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:36.645151 containerd[1474]: time="2025-01-30T13:14:36.645122393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:36.681863 kubelet[1772]: E0130 13:14:36.681813 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:36.737047 systemd[1]: Started cri-containerd-3d80d74853724c22d077d578ce5425b2b211cf3e40646afaf2dc6a6404a16ed4.scope - libcontainer container 3d80d74853724c22d077d578ce5425b2b211cf3e40646afaf2dc6a6404a16ed4. 
Jan 30 13:14:36.777657 containerd[1474]: time="2025-01-30T13:14:36.777538526Z" level=info msg="StartContainer for \"3d80d74853724c22d077d578ce5425b2b211cf3e40646afaf2dc6a6404a16ed4\" returns successfully" Jan 30 13:14:37.682231 kubelet[1772]: E0130 13:14:37.682182 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:38.683066 kubelet[1772]: E0130 13:14:38.683017 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:39.684057 kubelet[1772]: E0130 13:14:39.684007 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:40.562572 update_engine[1463]: I20250130 13:14:40.562480 1463 update_attempter.cc:509] Updating boot flags... Jan 30 13:14:40.595887 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3166) Jan 30 13:14:40.638894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3166) Jan 30 13:14:40.684698 kubelet[1772]: E0130 13:14:40.684662 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:41.685341 kubelet[1772]: E0130 13:14:41.685303 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:42.685895 kubelet[1772]: E0130 13:14:42.685824 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:43.686862 kubelet[1772]: E0130 13:14:43.686809 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:44.661981 kubelet[1772]: E0130 13:14:44.661933 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:14:44.687530 kubelet[1772]: E0130 13:14:44.687483 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:45.688250 kubelet[1772]: E0130 13:14:45.688189 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:46.689011 kubelet[1772]: E0130 13:14:46.688953 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:46.911916 kubelet[1772]: I0130 13:14:46.911823 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.915829791 podStartE2EDuration="14.91180415s" podCreationTimestamp="2025-01-30 13:14:32 +0000 UTC" firstStartedPulling="2025-01-30 13:14:33.632028768 +0000 UTC m=+29.687028575" lastFinishedPulling="2025-01-30 13:14:36.628003127 +0000 UTC m=+32.683002934" observedRunningTime="2025-01-30 13:14:36.884130922 +0000 UTC m=+32.939130729" watchObservedRunningTime="2025-01-30 13:14:46.91180415 +0000 UTC m=+42.966803957" Jan 30 13:14:46.917706 systemd[1]: Created slice kubepods-besteffort-pod156b36af_b532_4897_995d_ebb134003148.slice - libcontainer container kubepods-besteffort-pod156b36af_b532_4897_995d_ebb134003148.slice. 
Jan 30 13:14:46.939669 kubelet[1772]: I0130 13:14:46.939546 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-760ad2ea-d2f9-4d01-bec4-ee0119156b60\" (UniqueName: \"kubernetes.io/nfs/156b36af-b532-4897-995d-ebb134003148-pvc-760ad2ea-d2f9-4d01-bec4-ee0119156b60\") pod \"test-pod-1\" (UID: \"156b36af-b532-4897-995d-ebb134003148\") " pod="default/test-pod-1" Jan 30 13:14:46.939669 kubelet[1772]: I0130 13:14:46.939592 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v6gj\" (UniqueName: \"kubernetes.io/projected/156b36af-b532-4897-995d-ebb134003148-kube-api-access-6v6gj\") pod \"test-pod-1\" (UID: \"156b36af-b532-4897-995d-ebb134003148\") " pod="default/test-pod-1" Jan 30 13:14:47.070961 kernel: FS-Cache: Loaded Jan 30 13:14:47.099116 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:14:47.099230 kernel: RPC: Registered udp transport module. Jan 30 13:14:47.099246 kernel: RPC: Registered tcp transport module. Jan 30 13:14:47.099264 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:14:47.100109 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jan 30 13:14:47.262991 kernel: NFS: Registering the id_resolver key type Jan 30 13:14:47.263150 kernel: Key type id_resolver registered Jan 30 13:14:47.263180 kernel: Key type id_legacy registered Jan 30 13:14:47.287941 nfsidmap[3194]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:14:47.291701 nfsidmap[3197]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:14:47.520894 containerd[1474]: time="2025-01-30T13:14:47.520764113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:156b36af-b532-4897-995d-ebb134003148,Namespace:default,Attempt:0,}" Jan 30 13:14:47.548965 systemd-networkd[1402]: lxc457aec396f62: Link UP Jan 30 13:14:47.563917 kernel: eth0: renamed from tmpee71f Jan 30 13:14:47.568929 systemd-networkd[1402]: lxc457aec396f62: Gained carrier Jan 30 13:14:47.690034 kubelet[1772]: E0130 13:14:47.689980 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:47.714017 containerd[1474]: time="2025-01-30T13:14:47.713917187Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:47.714017 containerd[1474]: time="2025-01-30T13:14:47.713987867Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:47.714017 containerd[1474]: time="2025-01-30T13:14:47.714003907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:47.714318 containerd[1474]: time="2025-01-30T13:14:47.714082267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:47.736076 systemd[1]: Started cri-containerd-ee71f81ed6f486d910afa61dce42e128bf359526883ec629bc7619ae7015c24d.scope - libcontainer container ee71f81ed6f486d910afa61dce42e128bf359526883ec629bc7619ae7015c24d. Jan 30 13:14:47.746574 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:14:47.763471 containerd[1474]: time="2025-01-30T13:14:47.763433420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:156b36af-b532-4897-995d-ebb134003148,Namespace:default,Attempt:0,} returns sandbox id \"ee71f81ed6f486d910afa61dce42e128bf359526883ec629bc7619ae7015c24d\"" Jan 30 13:14:47.764862 containerd[1474]: time="2025-01-30T13:14:47.764827733Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:14:48.019394 containerd[1474]: time="2025-01-30T13:14:48.018968787Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:48.019597 containerd[1474]: time="2025-01-30T13:14:48.019560264Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:14:48.032288 containerd[1474]: time="2025-01-30T13:14:48.031747127Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 266.857155ms" Jan 30 13:14:48.032288 containerd[1474]: time="2025-01-30T13:14:48.031804847Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 13:14:48.035171 containerd[1474]: time="2025-01-30T13:14:48.035136951Z" 
level=info msg="CreateContainer within sandbox \"ee71f81ed6f486d910afa61dce42e128bf359526883ec629bc7619ae7015c24d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:14:48.045544 containerd[1474]: time="2025-01-30T13:14:48.045415663Z" level=info msg="CreateContainer within sandbox \"ee71f81ed6f486d910afa61dce42e128bf359526883ec629bc7619ae7015c24d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cb50e6863d9ca7d537db44cfe69b95357a91b684e32a14755095e1dbee187054\"" Jan 30 13:14:48.045875 containerd[1474]: time="2025-01-30T13:14:48.045833701Z" level=info msg="StartContainer for \"cb50e6863d9ca7d537db44cfe69b95357a91b684e32a14755095e1dbee187054\"" Jan 30 13:14:48.076074 systemd[1]: Started cri-containerd-cb50e6863d9ca7d537db44cfe69b95357a91b684e32a14755095e1dbee187054.scope - libcontainer container cb50e6863d9ca7d537db44cfe69b95357a91b684e32a14755095e1dbee187054. Jan 30 13:14:48.098148 containerd[1474]: time="2025-01-30T13:14:48.097863817Z" level=info msg="StartContainer for \"cb50e6863d9ca7d537db44cfe69b95357a91b684e32a14755095e1dbee187054\" returns successfully" Jan 30 13:14:48.690940 kubelet[1772]: E0130 13:14:48.690891 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:49.517189 systemd-networkd[1402]: lxc457aec396f62: Gained IPv6LL Jan 30 13:14:49.692047 kubelet[1772]: E0130 13:14:49.691997 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:50.692709 kubelet[1772]: E0130 13:14:50.692653 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:50.922609 kubelet[1772]: I0130 13:14:50.922166 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.653602197 podStartE2EDuration="17.922145344s" podCreationTimestamp="2025-01-30 
13:14:33 +0000 UTC" firstStartedPulling="2025-01-30 13:14:47.764541774 +0000 UTC m=+43.819541581" lastFinishedPulling="2025-01-30 13:14:48.033084921 +0000 UTC m=+44.088084728" observedRunningTime="2025-01-30 13:14:48.906773464 +0000 UTC m=+44.961773311" watchObservedRunningTime="2025-01-30 13:14:50.922145344 +0000 UTC m=+46.977145191" Jan 30 13:14:50.957089 containerd[1474]: time="2025-01-30T13:14:50.956987040Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:14:50.962491 containerd[1474]: time="2025-01-30T13:14:50.962428498Z" level=info msg="StopContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" with timeout 2 (s)" Jan 30 13:14:50.962813 containerd[1474]: time="2025-01-30T13:14:50.962784617Z" level=info msg="Stop container \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" with signal terminated" Jan 30 13:14:50.968636 systemd-networkd[1402]: lxc_health: Link DOWN Jan 30 13:14:50.968643 systemd-networkd[1402]: lxc_health: Lost carrier Jan 30 13:14:50.986236 systemd[1]: cri-containerd-b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e.scope: Deactivated successfully. Jan 30 13:14:50.986639 systemd[1]: cri-containerd-b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e.scope: Consumed 6.588s CPU time. Jan 30 13:14:51.022266 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:51.034103 containerd[1474]: time="2025-01-30T13:14:51.034044931Z" level=info msg="shim disconnected" id=b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e namespace=k8s.io Jan 30 13:14:51.034103 containerd[1474]: time="2025-01-30T13:14:51.034099531Z" level=warning msg="cleaning up after shim disconnected" id=b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e namespace=k8s.io Jan 30 13:14:51.034103 containerd[1474]: time="2025-01-30T13:14:51.034108131Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:51.050394 containerd[1474]: time="2025-01-30T13:14:51.050284269Z" level=info msg="StopContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" returns successfully" Jan 30 13:14:51.051028 containerd[1474]: time="2025-01-30T13:14:51.050990746Z" level=info msg="StopPodSandbox for \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\"" Jan 30 13:14:51.053604 containerd[1474]: time="2025-01-30T13:14:51.053552216Z" level=info msg="Container to stop \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.053604 containerd[1474]: time="2025-01-30T13:14:51.053592136Z" level=info msg="Container to stop \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.053604 containerd[1474]: time="2025-01-30T13:14:51.053602816Z" level=info msg="Container to stop \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.053754 containerd[1474]: time="2025-01-30T13:14:51.053612736Z" level=info msg="Container to stop \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.053754 
containerd[1474]: time="2025-01-30T13:14:51.053622296Z" level=info msg="Container to stop \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:14:51.055121 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224-shm.mount: Deactivated successfully. Jan 30 13:14:51.060042 systemd[1]: cri-containerd-2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224.scope: Deactivated successfully. Jan 30 13:14:51.075230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224-rootfs.mount: Deactivated successfully. Jan 30 13:14:51.080933 containerd[1474]: time="2025-01-30T13:14:51.080799671Z" level=info msg="shim disconnected" id=2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224 namespace=k8s.io Jan 30 13:14:51.080933 containerd[1474]: time="2025-01-30T13:14:51.080917470Z" level=warning msg="cleaning up after shim disconnected" id=2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224 namespace=k8s.io Jan 30 13:14:51.080933 containerd[1474]: time="2025-01-30T13:14:51.080927270Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:51.091537 containerd[1474]: time="2025-01-30T13:14:51.091411550Z" level=info msg="TearDown network for sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" successfully" Jan 30 13:14:51.091537 containerd[1474]: time="2025-01-30T13:14:51.091453990Z" level=info msg="StopPodSandbox for \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" returns successfully" Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166374 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-hubble-tls\") pod 
\"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166415 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt6d8\" (UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-kube-api-access-mt6d8\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166437 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-net\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166459 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-lib-modules\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166478 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-bpf-maps\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.166916 kubelet[1772]: I0130 13:14:51.166493 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-cgroup\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166513 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49b334b6-6fd0-4c35-970e-088deffe04f2-clustermesh-secrets\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166526 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-etc-cni-netd\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166540 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-kernel\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166558 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-config-path\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166572 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-hostproc\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167510 kubelet[1772]: I0130 13:14:51.166586 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cni-path\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167637 kubelet[1772]: 
I0130 13:14:51.166599 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-xtables-lock\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167637 kubelet[1772]: I0130 13:14:51.166615 1772 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-run\") pod \"49b334b6-6fd0-4c35-970e-088deffe04f2\" (UID: \"49b334b6-6fd0-4c35-970e-088deffe04f2\") " Jan 30 13:14:51.167637 kubelet[1772]: I0130 13:14:51.166952 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167637 kubelet[1772]: I0130 13:14:51.167041 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167637 kubelet[1772]: I0130 13:14:51.167071 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167739 kubelet[1772]: I0130 13:14:51.167089 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167739 kubelet[1772]: I0130 13:14:51.167104 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167739 kubelet[1772]: I0130 13:14:51.167117 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-hostproc" (OuterVolumeSpecName: "hostproc") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167739 kubelet[1772]: I0130 13:14:51.167136 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167739 kubelet[1772]: I0130 13:14:51.167123 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167840 kubelet[1772]: I0130 13:14:51.167163 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cni-path" (OuterVolumeSpecName: "cni-path") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.167840 kubelet[1772]: I0130 13:14:51.167177 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 30 13:14:51.174194 systemd[1]: var-lib-kubelet-pods-49b334b6\x2d6fd0\x2d4c35\x2d970e\x2d088deffe04f2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:14:51.176105 kubelet[1772]: I0130 13:14:51.174332 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:14:51.176105 kubelet[1772]: I0130 13:14:51.175342 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-kube-api-access-mt6d8" (OuterVolumeSpecName: "kube-api-access-mt6d8") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "kube-api-access-mt6d8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 30 13:14:51.176755 kubelet[1772]: I0130 13:14:51.176700 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 30 13:14:51.179384 kubelet[1772]: I0130 13:14:51.179351 1772 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b334b6-6fd0-4c35-970e-088deffe04f2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "49b334b6-6fd0-4c35-970e-088deffe04f2" (UID: "49b334b6-6fd0-4c35-970e-088deffe04f2"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267711 1772 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-lib-modules\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267745 1772 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-bpf-maps\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267754 1772 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-hubble-tls\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267762 1772 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mt6d8\" (UniqueName: \"kubernetes.io/projected/49b334b6-6fd0-4c35-970e-088deffe04f2-kube-api-access-mt6d8\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267773 1772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-net\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267782 1772 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-host-proc-sys-kernel\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: I0130 13:14:51.267790 1772 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-cgroup\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.267960 kubelet[1772]: 
I0130 13:14:51.267798 1772 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49b334b6-6fd0-4c35-970e-088deffe04f2-clustermesh-secrets\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267806 1772 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-etc-cni-netd\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267813 1772 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cni-path\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267820 1772 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-xtables-lock\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267828 1772 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-config-path\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267835 1772 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-hostproc\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.268198 kubelet[1772]: I0130 13:14:51.267843 1772 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49b334b6-6fd0-4c35-970e-088deffe04f2-cilium-run\") on node \"10.0.0.148\" DevicePath \"\"" Jan 30 13:14:51.693783 kubelet[1772]: E0130 13:14:51.693736 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jan 30 13:14:51.905412 kubelet[1772]: I0130 13:14:51.905383 1772 scope.go:117] "RemoveContainer" containerID="b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e" Jan 30 13:14:51.907154 containerd[1474]: time="2025-01-30T13:14:51.906799680Z" level=info msg="RemoveContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\"" Jan 30 13:14:51.910313 containerd[1474]: time="2025-01-30T13:14:51.910196146Z" level=info msg="RemoveContainer for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" returns successfully" Jan 30 13:14:51.910586 kubelet[1772]: I0130 13:14:51.910562 1772 scope.go:117] "RemoveContainer" containerID="e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49" Jan 30 13:14:51.910986 systemd[1]: Removed slice kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice - libcontainer container kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice. Jan 30 13:14:51.911089 systemd[1]: kubepods-burstable-pod49b334b6_6fd0_4c35_970e_088deffe04f2.slice: Consumed 6.730s CPU time. 
Jan 30 13:14:51.912066 containerd[1474]: time="2025-01-30T13:14:51.912038219Z" level=info msg="RemoveContainer for \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\"" Jan 30 13:14:51.920907 containerd[1474]: time="2025-01-30T13:14:51.920839225Z" level=info msg="RemoveContainer for \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\" returns successfully" Jan 30 13:14:51.921144 kubelet[1772]: I0130 13:14:51.921099 1772 scope.go:117] "RemoveContainer" containerID="e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7" Jan 30 13:14:51.922649 containerd[1474]: time="2025-01-30T13:14:51.922580459Z" level=info msg="RemoveContainer for \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\"" Jan 30 13:14:51.925922 containerd[1474]: time="2025-01-30T13:14:51.925865486Z" level=info msg="RemoveContainer for \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\" returns successfully" Jan 30 13:14:51.926160 kubelet[1772]: I0130 13:14:51.926122 1772 scope.go:117] "RemoveContainer" containerID="c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b" Jan 30 13:14:51.927230 containerd[1474]: time="2025-01-30T13:14:51.927198961Z" level=info msg="RemoveContainer for \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\"" Jan 30 13:14:51.930216 containerd[1474]: time="2025-01-30T13:14:51.930175149Z" level=info msg="RemoveContainer for \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\" returns successfully" Jan 30 13:14:51.930384 kubelet[1772]: I0130 13:14:51.930360 1772 scope.go:117] "RemoveContainer" containerID="07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76" Jan 30 13:14:51.931585 containerd[1474]: time="2025-01-30T13:14:51.931559424Z" level=info msg="RemoveContainer for \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\"" Jan 30 13:14:51.933754 containerd[1474]: time="2025-01-30T13:14:51.933719775Z" level=info msg="RemoveContainer 
for \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\" returns successfully" Jan 30 13:14:51.938377 kubelet[1772]: I0130 13:14:51.933911 1772 scope.go:117] "RemoveContainer" containerID="b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e" Jan 30 13:14:51.938692 containerd[1474]: time="2025-01-30T13:14:51.938645636Z" level=error msg="ContainerStatus for \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\": not found" Jan 30 13:14:51.938831 kubelet[1772]: E0130 13:14:51.938806 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\": not found" containerID="b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e" Jan 30 13:14:51.938883 kubelet[1772]: I0130 13:14:51.938840 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e"} err="failed to get container status \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7010ad74abb36f7ec391259d6768eaa61f6d8c1ce53af4de43d89b84282022e\": not found" Jan 30 13:14:51.938908 kubelet[1772]: I0130 13:14:51.938889 1772 scope.go:117] "RemoveContainer" containerID="e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49" Jan 30 13:14:51.939286 containerd[1474]: time="2025-01-30T13:14:51.939146035Z" level=error msg="ContainerStatus for \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\": not found" Jan 30 13:14:51.939346 kubelet[1772]: E0130 13:14:51.939301 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\": not found" containerID="e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49" Jan 30 13:14:51.939346 kubelet[1772]: I0130 13:14:51.939328 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49"} err="failed to get container status \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\": rpc error: code = NotFound desc = an error occurred when try to find container \"e54e1d0fa5ac9f1d156f6f98a566b9490132e7ed25b3f8c96242164d5e55ba49\": not found" Jan 30 13:14:51.939346 kubelet[1772]: I0130 13:14:51.939345 1772 scope.go:117] "RemoveContainer" containerID="e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7" Jan 30 13:14:51.939533 containerd[1474]: time="2025-01-30T13:14:51.939506593Z" level=error msg="ContainerStatus for \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\": not found" Jan 30 13:14:51.939680 kubelet[1772]: E0130 13:14:51.939630 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\": not found" containerID="e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7" Jan 30 13:14:51.939757 kubelet[1772]: I0130 13:14:51.939730 1772 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7"} err="failed to get container status \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\": rpc error: code = NotFound desc = an error occurred when try to find container \"e0c6b94932bba9ca560068efd7ac24c739ef7fba889a270822ea2564913a68d7\": not found" Jan 30 13:14:51.939858 kubelet[1772]: I0130 13:14:51.939808 1772 scope.go:117] "RemoveContainer" containerID="c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b" Jan 30 13:14:51.940073 containerd[1474]: time="2025-01-30T13:14:51.940047151Z" level=error msg="ContainerStatus for \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\": not found" Jan 30 13:14:51.940172 kubelet[1772]: E0130 13:14:51.940153 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\": not found" containerID="c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b" Jan 30 13:14:51.940251 kubelet[1772]: I0130 13:14:51.940176 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b"} err="failed to get container status \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8f2aceef8c110eaf32acf7033692623a05b7ad40b38512c368d49630625785b\": not found" Jan 30 13:14:51.940251 kubelet[1772]: I0130 13:14:51.940193 1772 scope.go:117] "RemoveContainer" containerID="07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76" Jan 30 13:14:51.940418 containerd[1474]: 
time="2025-01-30T13:14:51.940317790Z" level=error msg="ContainerStatus for \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\": not found" Jan 30 13:14:51.940482 kubelet[1772]: E0130 13:14:51.940400 1772 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\": not found" containerID="07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76" Jan 30 13:14:51.940482 kubelet[1772]: I0130 13:14:51.940416 1772 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76"} err="failed to get container status \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\": rpc error: code = NotFound desc = an error occurred when try to find container \"07cb05e2923364e81258e9a1274508ca8c9e65a54e4e74ef225e89da7eaa2c76\": not found" Jan 30 13:14:51.944420 systemd[1]: var-lib-kubelet-pods-49b334b6\x2d6fd0\x2d4c35\x2d970e\x2d088deffe04f2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmt6d8.mount: Deactivated successfully. Jan 30 13:14:51.944542 systemd[1]: var-lib-kubelet-pods-49b334b6\x2d6fd0\x2d4c35\x2d970e\x2d088deffe04f2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jan 30 13:14:52.696496 kubelet[1772]: E0130 13:14:52.694540 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:52.793078 kubelet[1772]: I0130 13:14:52.793029 1772 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b334b6-6fd0-4c35-970e-088deffe04f2" path="/var/lib/kubelet/pods/49b334b6-6fd0-4c35-970e-088deffe04f2/volumes" Jan 30 13:14:53.477059 kubelet[1772]: I0130 13:14:53.477009 1772 memory_manager.go:355] "RemoveStaleState removing state" podUID="49b334b6-6fd0-4c35-970e-088deffe04f2" containerName="cilium-agent" Jan 30 13:14:53.494884 systemd[1]: Created slice kubepods-burstable-podf65002c7_6b36_46b9_8de4_a4ad96606dcf.slice - libcontainer container kubepods-burstable-podf65002c7_6b36_46b9_8de4_a4ad96606dcf.slice. Jan 30 13:14:53.510524 systemd[1]: Created slice kubepods-besteffort-podd20376b7_949b_4866_8002_991a226c0075.slice - libcontainer container kubepods-besteffort-podd20376b7_949b_4866_8002_991a226c0075.slice. 
Jan 30 13:14:53.582466 kubelet[1772]: I0130 13:14:53.582369 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-bpf-maps\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582466 kubelet[1772]: I0130 13:14:53.582414 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-cni-path\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582466 kubelet[1772]: I0130 13:14:53.582437 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-xtables-lock\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582655 kubelet[1772]: I0130 13:14:53.582488 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6pxd\" (UniqueName: \"kubernetes.io/projected/f65002c7-6b36-46b9-8de4-a4ad96606dcf-kube-api-access-w6pxd\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582655 kubelet[1772]: I0130 13:14:53.582533 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-hostproc\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582655 kubelet[1772]: I0130 13:14:53.582552 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f65002c7-6b36-46b9-8de4-a4ad96606dcf-hubble-tls\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582655 kubelet[1772]: I0130 13:14:53.582568 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f65002c7-6b36-46b9-8de4-a4ad96606dcf-cilium-config-path\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582655 kubelet[1772]: I0130 13:14:53.582583 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-host-proc-sys-net\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582754 kubelet[1772]: I0130 13:14:53.582598 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p7tp\" (UniqueName: \"kubernetes.io/projected/d20376b7-949b-4866-8002-991a226c0075-kube-api-access-2p7tp\") pod \"cilium-operator-6c4d7847fc-9nqps\" (UID: \"d20376b7-949b-4866-8002-991a226c0075\") " pod="kube-system/cilium-operator-6c4d7847fc-9nqps" Jan 30 13:14:53.582754 kubelet[1772]: I0130 13:14:53.582621 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-cilium-run\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582754 kubelet[1772]: I0130 13:14:53.582635 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-lib-modules\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582754 kubelet[1772]: I0130 13:14:53.582651 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f65002c7-6b36-46b9-8de4-a4ad96606dcf-cilium-ipsec-secrets\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582754 kubelet[1772]: I0130 13:14:53.582679 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-host-proc-sys-kernel\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582882 kubelet[1772]: I0130 13:14:53.582698 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d20376b7-949b-4866-8002-991a226c0075-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-9nqps\" (UID: \"d20376b7-949b-4866-8002-991a226c0075\") " pod="kube-system/cilium-operator-6c4d7847fc-9nqps" Jan 30 13:14:53.582882 kubelet[1772]: I0130 13:14:53.582716 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-cilium-cgroup\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582882 kubelet[1772]: I0130 13:14:53.582730 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/f65002c7-6b36-46b9-8de4-a4ad96606dcf-etc-cni-netd\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.582882 kubelet[1772]: I0130 13:14:53.582751 1772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f65002c7-6b36-46b9-8de4-a4ad96606dcf-clustermesh-secrets\") pod \"cilium-9kg7z\" (UID: \"f65002c7-6b36-46b9-8de4-a4ad96606dcf\") " pod="kube-system/cilium-9kg7z" Jan 30 13:14:53.694718 kubelet[1772]: E0130 13:14:53.694654 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:53.808948 kubelet[1772]: E0130 13:14:53.808825 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:53.809805 containerd[1474]: time="2025-01-30T13:14:53.809491860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kg7z,Uid:f65002c7-6b36-46b9-8de4-a4ad96606dcf,Namespace:kube-system,Attempt:0,}" Jan 30 13:14:53.812942 kubelet[1772]: E0130 13:14:53.812785 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:53.813334 containerd[1474]: time="2025-01-30T13:14:53.813286847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9nqps,Uid:d20376b7-949b-4866-8002-991a226c0075,Namespace:kube-system,Attempt:0,}" Jan 30 13:14:53.831296 containerd[1474]: time="2025-01-30T13:14:53.831143826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:53.832325 containerd[1474]: time="2025-01-30T13:14:53.831222266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:53.832325 containerd[1474]: time="2025-01-30T13:14:53.831242226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:53.832325 containerd[1474]: time="2025-01-30T13:14:53.831333666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:53.856421 containerd[1474]: time="2025-01-30T13:14:53.856327301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:14:53.856421 containerd[1474]: time="2025-01-30T13:14:53.856402421Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:14:53.856421 containerd[1474]: time="2025-01-30T13:14:53.856418981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:53.856628 containerd[1474]: time="2025-01-30T13:14:53.856524500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:14:53.865090 systemd[1]: Started cri-containerd-1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad.scope - libcontainer container 1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad. Jan 30 13:14:53.867965 systemd[1]: Started cri-containerd-2e78380a3d502a74558ffa36ecb4f4519564dfc06a31f9e4db46dbf364b61f6c.scope - libcontainer container 2e78380a3d502a74558ffa36ecb4f4519564dfc06a31f9e4db46dbf364b61f6c. 
Jan 30 13:14:53.889123 containerd[1474]: time="2025-01-30T13:14:53.889011670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9kg7z,Uid:f65002c7-6b36-46b9-8de4-a4ad96606dcf,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\"" Jan 30 13:14:53.890188 kubelet[1772]: E0130 13:14:53.890149 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:53.893473 containerd[1474]: time="2025-01-30T13:14:53.893337455Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:14:53.935768 containerd[1474]: time="2025-01-30T13:14:53.935718551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-9nqps,Uid:d20376b7-949b-4866-8002-991a226c0075,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e78380a3d502a74558ffa36ecb4f4519564dfc06a31f9e4db46dbf364b61f6c\"" Jan 30 13:14:53.936335 kubelet[1772]: E0130 13:14:53.936314 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:53.937154 containerd[1474]: time="2025-01-30T13:14:53.937065667Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:14:53.937385 containerd[1474]: time="2025-01-30T13:14:53.937289066Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1\"" Jan 30 13:14:53.937925 containerd[1474]: 
time="2025-01-30T13:14:53.937899984Z" level=info msg="StartContainer for \"b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1\"" Jan 30 13:14:53.969056 systemd[1]: Started cri-containerd-b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1.scope - libcontainer container b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1. Jan 30 13:14:54.036216 containerd[1474]: time="2025-01-30T13:14:54.036155738Z" level=info msg="StartContainer for \"b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1\" returns successfully" Jan 30 13:14:54.048076 systemd[1]: cri-containerd-b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1.scope: Deactivated successfully. Jan 30 13:14:54.209248 containerd[1474]: time="2025-01-30T13:14:54.209190667Z" level=info msg="shim disconnected" id=b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1 namespace=k8s.io Jan 30 13:14:54.209248 containerd[1474]: time="2025-01-30T13:14:54.209242987Z" level=warning msg="cleaning up after shim disconnected" id=b4cffdaae03395b088f1ce2feec6d495707162edd008707e44c445ded6abdcc1 namespace=k8s.io Jan 30 13:14:54.209248 containerd[1474]: time="2025-01-30T13:14:54.209254547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:54.698091 kubelet[1772]: E0130 13:14:54.695706 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:54.807949 kubelet[1772]: E0130 13:14:54.807910 1772 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:14:54.914227 kubelet[1772]: E0130 13:14:54.914009 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:54.916758 containerd[1474]: 
time="2025-01-30T13:14:54.916510855Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:14:54.934580 containerd[1474]: time="2025-01-30T13:14:54.934517878Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05\"" Jan 30 13:14:54.935293 containerd[1474]: time="2025-01-30T13:14:54.935242116Z" level=info msg="StartContainer for \"5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05\"" Jan 30 13:14:54.964039 systemd[1]: Started cri-containerd-5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05.scope - libcontainer container 5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05. Jan 30 13:14:54.987916 containerd[1474]: time="2025-01-30T13:14:54.987634149Z" level=info msg="StartContainer for \"5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05\" returns successfully" Jan 30 13:14:55.001018 systemd[1]: cri-containerd-5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05.scope: Deactivated successfully. 
Jan 30 13:14:55.023078 containerd[1474]: time="2025-01-30T13:14:55.022845841Z" level=info msg="shim disconnected" id=5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05 namespace=k8s.io Jan 30 13:14:55.023078 containerd[1474]: time="2025-01-30T13:14:55.022918921Z" level=warning msg="cleaning up after shim disconnected" id=5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05 namespace=k8s.io Jan 30 13:14:55.023078 containerd[1474]: time="2025-01-30T13:14:55.022927481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:55.591067 containerd[1474]: time="2025-01-30T13:14:55.590998185Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:55.591790 containerd[1474]: time="2025-01-30T13:14:55.591749743Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:14:55.592376 containerd[1474]: time="2025-01-30T13:14:55.592346741Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:14:55.594512 containerd[1474]: time="2025-01-30T13:14:55.594383135Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.657225069s" Jan 30 13:14:55.594512 containerd[1474]: time="2025-01-30T13:14:55.594422375Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:14:55.596362 containerd[1474]: time="2025-01-30T13:14:55.596331769Z" level=info msg="CreateContainer within sandbox \"2e78380a3d502a74558ffa36ecb4f4519564dfc06a31f9e4db46dbf364b61f6c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:14:55.609424 containerd[1474]: time="2025-01-30T13:14:55.609365571Z" level=info msg="CreateContainer within sandbox \"2e78380a3d502a74558ffa36ecb4f4519564dfc06a31f9e4db46dbf364b61f6c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3579e1fbba218454a6b81f562625841ceed61d8f0dfb8725f2400c71a6a02287\"" Jan 30 13:14:55.609903 containerd[1474]: time="2025-01-30T13:14:55.609874449Z" level=info msg="StartContainer for \"3579e1fbba218454a6b81f562625841ceed61d8f0dfb8725f2400c71a6a02287\"" Jan 30 13:14:55.637048 systemd[1]: Started cri-containerd-3579e1fbba218454a6b81f562625841ceed61d8f0dfb8725f2400c71a6a02287.scope - libcontainer container 3579e1fbba218454a6b81f562625841ceed61d8f0dfb8725f2400c71a6a02287. Jan 30 13:14:55.690033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c05eafc99dfcc7a61558f7c2bbcc8e042d5bb42ab8c7cb024646baa6149ac05-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:55.696277 kubelet[1772]: E0130 13:14:55.696245 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:55.701134 containerd[1474]: time="2025-01-30T13:14:55.701094497Z" level=info msg="StartContainer for \"3579e1fbba218454a6b81f562625841ceed61d8f0dfb8725f2400c71a6a02287\" returns successfully" Jan 30 13:14:55.918301 kubelet[1772]: E0130 13:14:55.918269 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:55.919843 kubelet[1772]: E0130 13:14:55.919821 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:55.919964 containerd[1474]: time="2025-01-30T13:14:55.919828204Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:14:55.933470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1701285886.mount: Deactivated successfully. 
Jan 30 13:14:55.937943 containerd[1474]: time="2025-01-30T13:14:55.936114355Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff\"" Jan 30 13:14:55.937943 containerd[1474]: time="2025-01-30T13:14:55.937275672Z" level=info msg="StartContainer for \"a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff\"" Jan 30 13:14:55.944697 kubelet[1772]: I0130 13:14:55.944619 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-9nqps" podStartSLOduration=1.286301265 podStartE2EDuration="2.94459405s" podCreationTimestamp="2025-01-30 13:14:53 +0000 UTC" firstStartedPulling="2025-01-30 13:14:53.936732348 +0000 UTC m=+49.991732155" lastFinishedPulling="2025-01-30 13:14:55.595025133 +0000 UTC m=+51.650024940" observedRunningTime="2025-01-30 13:14:55.944106012 +0000 UTC m=+51.999105819" watchObservedRunningTime="2025-01-30 13:14:55.94459405 +0000 UTC m=+51.999593857" Jan 30 13:14:55.968044 systemd[1]: Started cri-containerd-a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff.scope - libcontainer container a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff. Jan 30 13:14:55.993972 containerd[1474]: time="2025-01-30T13:14:55.993079625Z" level=info msg="StartContainer for \"a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff\" returns successfully" Jan 30 13:14:55.994973 systemd[1]: cri-containerd-a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff.scope: Deactivated successfully. 
Jan 30 13:14:56.022039 containerd[1474]: time="2025-01-30T13:14:56.021958183Z" level=info msg="shim disconnected" id=a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff namespace=k8s.io Jan 30 13:14:56.022039 containerd[1474]: time="2025-01-30T13:14:56.022020463Z" level=warning msg="cleaning up after shim disconnected" id=a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff namespace=k8s.io Jan 30 13:14:56.022039 containerd[1474]: time="2025-01-30T13:14:56.022028903Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:56.395883 kubelet[1772]: I0130 13:14:56.395670 1772 setters.go:602] "Node became not ready" node="10.0.0.148" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:14:56Z","lastTransitionTime":"2025-01-30T13:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:14:56.689247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8d655ce79e7fd1a6ee0857948f2d58799cd5b73f567ec6e953cff4c9ef726ff-rootfs.mount: Deactivated successfully. 
Jan 30 13:14:56.696905 kubelet[1772]: E0130 13:14:56.696861 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:56.923693 kubelet[1772]: E0130 13:14:56.923656 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:56.923693 kubelet[1772]: E0130 13:14:56.923674 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:56.925790 containerd[1474]: time="2025-01-30T13:14:56.925754894Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:14:56.971912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736994392.mount: Deactivated successfully. Jan 30 13:14:56.975684 containerd[1474]: time="2025-01-30T13:14:56.975622835Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee\"" Jan 30 13:14:56.976442 containerd[1474]: time="2025-01-30T13:14:56.976374073Z" level=info msg="StartContainer for \"93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee\"" Jan 30 13:14:57.003018 systemd[1]: Started cri-containerd-93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee.scope - libcontainer container 93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee. Jan 30 13:14:57.022027 systemd[1]: cri-containerd-93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee.scope: Deactivated successfully. 
Jan 30 13:14:57.023154 containerd[1474]: time="2025-01-30T13:14:57.022993466Z" level=info msg="StartContainer for \"93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee\" returns successfully" Jan 30 13:14:57.042024 containerd[1474]: time="2025-01-30T13:14:57.041968336Z" level=info msg="shim disconnected" id=93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee namespace=k8s.io Jan 30 13:14:57.042024 containerd[1474]: time="2025-01-30T13:14:57.042020296Z" level=warning msg="cleaning up after shim disconnected" id=93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee namespace=k8s.io Jan 30 13:14:57.042024 containerd[1474]: time="2025-01-30T13:14:57.042029136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:14:57.689334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93a9a59724ab122591e6b72b255aacb02b44ffa1c496393b594fe2518ebedfee-rootfs.mount: Deactivated successfully. Jan 30 13:14:57.697760 kubelet[1772]: E0130 13:14:57.697714 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:57.928095 kubelet[1772]: E0130 13:14:57.927866 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:57.929628 containerd[1474]: time="2025-01-30T13:14:57.929594128Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:14:57.942949 containerd[1474]: time="2025-01-30T13:14:57.942827493Z" level=info msg="CreateContainer within sandbox \"1ce8b4b0eeaa647a8667f91f210a8875e2296e679ae1eb387644189d0f27c5ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53\"" Jan 30 13:14:57.943514 
containerd[1474]: time="2025-01-30T13:14:57.943488932Z" level=info msg="StartContainer for \"88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53\"" Jan 30 13:14:57.971082 systemd[1]: Started cri-containerd-88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53.scope - libcontainer container 88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53. Jan 30 13:14:57.993162 containerd[1474]: time="2025-01-30T13:14:57.993112681Z" level=info msg="StartContainer for \"88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53\" returns successfully" Jan 30 13:14:58.275969 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:14:58.697878 kubelet[1772]: E0130 13:14:58.697812 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:58.932886 kubelet[1772]: E0130 13:14:58.932759 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:14:58.947990 kubelet[1772]: I0130 13:14:58.947621 1772 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9kg7z" podStartSLOduration=5.947601373 podStartE2EDuration="5.947601373s" podCreationTimestamp="2025-01-30 13:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:14:58.947353694 +0000 UTC m=+55.002353501" watchObservedRunningTime="2025-01-30 13:14:58.947601373 +0000 UTC m=+55.002601180" Jan 30 13:14:59.698205 kubelet[1772]: E0130 13:14:59.698156 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:14:59.934183 kubelet[1772]: E0130 13:14:59.934092 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:00.698926 kubelet[1772]: E0130 13:15:00.698879 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:00.935975 kubelet[1772]: E0130 13:15:00.935881 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:01.189310 systemd-networkd[1402]: lxc_health: Link UP Jan 30 13:15:01.202052 systemd-networkd[1402]: lxc_health: Gained carrier Jan 30 13:15:01.699070 kubelet[1772]: E0130 13:15:01.699028 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:01.937096 kubelet[1772]: E0130 13:15:01.937051 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:02.699767 kubelet[1772]: E0130 13:15:02.699716 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:02.939014 kubelet[1772]: E0130 13:15:02.938662 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:02.957289 systemd-networkd[1402]: lxc_health: Gained IPv6LL Jan 30 13:15:03.700038 kubelet[1772]: E0130 13:15:03.699985 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:03.940001 kubelet[1772]: E0130 13:15:03.939968 1772 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:15:04.661299 kubelet[1772]: 
E0130 13:15:04.661261 1772 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:04.682736 containerd[1474]: time="2025-01-30T13:15:04.681106773Z" level=info msg="StopPodSandbox for \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\"" Jan 30 13:15:04.682736 containerd[1474]: time="2025-01-30T13:15:04.681210932Z" level=info msg="TearDown network for sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" successfully" Jan 30 13:15:04.682736 containerd[1474]: time="2025-01-30T13:15:04.681222332Z" level=info msg="StopPodSandbox for \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" returns successfully" Jan 30 13:15:04.684084 containerd[1474]: time="2025-01-30T13:15:04.684043008Z" level=info msg="RemovePodSandbox for \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\"" Jan 30 13:15:04.684084 containerd[1474]: time="2025-01-30T13:15:04.684087048Z" level=info msg="Forcibly stopping sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\"" Jan 30 13:15:04.684204 containerd[1474]: time="2025-01-30T13:15:04.684150648Z" level=info msg="TearDown network for sandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" successfully" Jan 30 13:15:04.700984 kubelet[1772]: E0130 13:15:04.700943 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:04.704247 containerd[1474]: time="2025-01-30T13:15:04.704189654Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:15:04.704350 containerd[1474]: time="2025-01-30T13:15:04.704277694Z" level=info msg="RemovePodSandbox \"2565444cfbae21d94027d9d4810d4711651b42bacd766ec3c81a8b4789fd6224\" returns successfully" Jan 30 13:15:05.701267 kubelet[1772]: E0130 13:15:05.701213 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:06.409952 systemd[1]: run-containerd-runc-k8s.io-88397fd3c01527f87cb9f2e06961e47271f3ec86b49f6d93a45e9b3309577c53-runc.JIbLGU.mount: Deactivated successfully. Jan 30 13:15:06.702463 kubelet[1772]: E0130 13:15:06.702333 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:15:07.702732 kubelet[1772]: E0130 13:15:07.702650 1772 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"