Jan 30 12:50:52.976322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 12:50:52.976349 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 12:50:52.976361 kernel: KASLR enabled
Jan 30 12:50:52.976369 kernel: efi: EFI v2.7 by EDK II
Jan 30 12:50:52.976376 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jan 30 12:50:52.976382 kernel: random: crng init done
Jan 30 12:50:52.976389 kernel: ACPI: Early table checksum verification disabled
Jan 30 12:50:52.976395 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 30 12:50:52.976401 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 12:50:52.976409 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976430 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976436 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976442 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976448 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976456 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976465 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976471 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976478 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 12:50:52.976484 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 12:50:52.976490 kernel: NUMA: Failed to initialise from firmware
Jan 30 12:50:52.976497 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:50:52.976503 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 30 12:50:52.976509 kernel: Zone ranges:
Jan 30 12:50:52.976516 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:50:52.976522 kernel: DMA32 empty
Jan 30 12:50:52.976530 kernel: Normal empty
Jan 30 12:50:52.976536 kernel: Movable zone start for each node
Jan 30 12:50:52.976543 kernel: Early memory node ranges
Jan 30 12:50:52.976549 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 30 12:50:52.976556 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 12:50:52.976562 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 12:50:52.976569 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 12:50:52.976579 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 12:50:52.976588 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 12:50:52.976596 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 12:50:52.976604 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 12:50:52.976611 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 12:50:52.976619 kernel: psci: probing for conduit method from ACPI.
Jan 30 12:50:52.976625 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 12:50:52.976632 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 12:50:52.976642 kernel: psci: Trusted OS migration not required
Jan 30 12:50:52.976649 kernel: psci: SMC Calling Convention v1.1
Jan 30 12:50:52.976656 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 12:50:52.976664 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 12:50:52.976671 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 12:50:52.976678 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 12:50:52.976692 kernel: Detected PIPT I-cache on CPU0
Jan 30 12:50:52.976699 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 12:50:52.976706 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 12:50:52.976712 kernel: CPU features: detected: Spectre-v4
Jan 30 12:50:52.976719 kernel: CPU features: detected: Spectre-BHB
Jan 30 12:50:52.976726 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 12:50:52.976732 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 12:50:52.976742 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 12:50:52.976748 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 12:50:52.976755 kernel: alternatives: applying boot alternatives
Jan 30 12:50:52.976763 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:50:52.976770 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 12:50:52.976777 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 12:50:52.976784 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 12:50:52.976791 kernel: Fallback order for Node 0: 0
Jan 30 12:50:52.976798 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 12:50:52.976804 kernel: Policy zone: DMA
Jan 30 12:50:52.976811 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 12:50:52.976819 kernel: software IO TLB: area num 4.
Jan 30 12:50:52.976826 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 12:50:52.976833 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 30 12:50:52.976840 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 12:50:52.976847 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 12:50:52.976854 kernel: rcu: RCU event tracing is enabled.
Jan 30 12:50:52.976861 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 12:50:52.976868 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 12:50:52.976874 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 12:50:52.976881 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 12:50:52.976888 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 12:50:52.976895 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 12:50:52.976903 kernel: GICv3: 256 SPIs implemented
Jan 30 12:50:52.976910 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 12:50:52.976916 kernel: Root IRQ handler: gic_handle_irq
Jan 30 12:50:52.976923 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 12:50:52.976930 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 12:50:52.976936 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 12:50:52.976943 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 12:50:52.976950 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 12:50:52.976957 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 12:50:52.976964 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 12:50:52.976971 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 12:50:52.976979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:50:52.976986 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 12:50:52.976993 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 12:50:52.977000 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 12:50:52.977035 kernel: arm-pv: using stolen time PV
Jan 30 12:50:52.977043 kernel: Console: colour dummy device 80x25
Jan 30 12:50:52.977050 kernel: ACPI: Core revision 20230628
Jan 30 12:50:52.977057 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 12:50:52.977064 kernel: pid_max: default: 32768 minimum: 301
Jan 30 12:50:52.977071 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 12:50:52.977081 kernel: landlock: Up and running.
Jan 30 12:50:52.977088 kernel: SELinux: Initializing.
Jan 30 12:50:52.977095 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:50:52.977102 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 12:50:52.977109 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:50:52.977118 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 12:50:52.977126 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 12:50:52.977133 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 12:50:52.977140 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 12:50:52.977149 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 12:50:52.977158 kernel: Remapping and enabling EFI services.
Jan 30 12:50:52.977165 kernel: smp: Bringing up secondary CPUs ...
Jan 30 12:50:52.977172 kernel: Detected PIPT I-cache on CPU1
Jan 30 12:50:52.977179 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 12:50:52.977186 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 12:50:52.977193 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:50:52.977200 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 12:50:52.977207 kernel: Detected PIPT I-cache on CPU2
Jan 30 12:50:52.977214 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 12:50:52.977225 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 12:50:52.977232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:50:52.977246 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 12:50:52.977255 kernel: Detected PIPT I-cache on CPU3
Jan 30 12:50:52.977263 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 12:50:52.977270 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 12:50:52.977280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 12:50:52.977288 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 12:50:52.977297 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 12:50:52.977307 kernel: SMP: Total of 4 processors activated.
Jan 30 12:50:52.977315 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 12:50:52.977324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 12:50:52.977333 kernel: CPU features: detected: Common not Private translations
Jan 30 12:50:52.977341 kernel: CPU features: detected: CRC32 instructions
Jan 30 12:50:52.977348 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 12:50:52.977357 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 12:50:52.977365 kernel: CPU features: detected: LSE atomic instructions
Jan 30 12:50:52.977374 kernel: CPU features: detected: Privileged Access Never
Jan 30 12:50:52.977381 kernel: CPU features: detected: RAS Extension Support
Jan 30 12:50:52.977390 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 12:50:52.977398 kernel: CPU: All CPU(s) started at EL1
Jan 30 12:50:52.977405 kernel: alternatives: applying system-wide alternatives
Jan 30 12:50:52.977414 kernel: devtmpfs: initialized
Jan 30 12:50:52.977423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 12:50:52.977431 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 12:50:52.977438 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 12:50:52.977455 kernel: SMBIOS 3.0.0 present.
Jan 30 12:50:52.977462 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 30 12:50:52.977472 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 12:50:52.977481 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 12:50:52.977492 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 12:50:52.977503 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 12:50:52.977512 kernel: audit: initializing netlink subsys (disabled)
Jan 30 12:50:52.977525 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Jan 30 12:50:52.977534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 12:50:52.977545 kernel: cpuidle: using governor menu
Jan 30 12:50:52.977552 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 12:50:52.977561 kernel: ASID allocator initialised with 32768 entries
Jan 30 12:50:52.977569 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 12:50:52.977576 kernel: Serial: AMBA PL011 UART driver
Jan 30 12:50:52.977583 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 12:50:52.977591 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 12:50:52.977600 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 12:50:52.977608 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 12:50:52.977617 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 12:50:52.977625 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 12:50:52.977634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 12:50:52.977641 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 12:50:52.977649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 12:50:52.977656 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 12:50:52.977664 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 12:50:52.977673 kernel: ACPI: Added _OSI(Module Device)
Jan 30 12:50:52.977684 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 12:50:52.977695 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 12:50:52.977702 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 12:50:52.977709 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 12:50:52.977717 kernel: ACPI: Interpreter enabled
Jan 30 12:50:52.977724 kernel: ACPI: Using GIC for interrupt routing
Jan 30 12:50:52.977731 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 12:50:52.977741 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 12:50:52.977749 kernel: printk: console [ttyAMA0] enabled
Jan 30 12:50:52.977756 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 12:50:52.977926 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 12:50:52.978116 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 12:50:52.978203 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 12:50:52.978269 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 12:50:52.978332 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 12:50:52.978342 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 12:50:52.978349 kernel: PCI host bridge to bus 0000:00
Jan 30 12:50:52.978426 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 12:50:52.978486 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 12:50:52.978545 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 12:50:52.978602 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 12:50:52.978694 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 12:50:52.978777 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 12:50:52.978848 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 12:50:52.978913 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 12:50:52.978978 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:50:52.979058 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 12:50:52.979127 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 12:50:52.979194 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 12:50:52.979252 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 12:50:52.979313 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 12:50:52.979371 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 12:50:52.979380 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 12:50:52.979388 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 12:50:52.979395 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 12:50:52.979403 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 12:50:52.979410 kernel: iommu: Default domain type: Translated
Jan 30 12:50:52.979417 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 12:50:52.979427 kernel: efivars: Registered efivars operations
Jan 30 12:50:52.979434 kernel: vgaarb: loaded
Jan 30 12:50:52.979442 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 12:50:52.979449 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 12:50:52.979457 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 12:50:52.979464 kernel: pnp: PnP ACPI init
Jan 30 12:50:52.979535 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 12:50:52.979545 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 12:50:52.979552 kernel: NET: Registered PF_INET protocol family
Jan 30 12:50:52.979562 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 12:50:52.979569 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 12:50:52.979577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 12:50:52.979584 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 12:50:52.979591 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 12:50:52.979599 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 12:50:52.979606 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:50:52.979614 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 12:50:52.979622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 12:50:52.979630 kernel: PCI: CLS 0 bytes, default 64
Jan 30 12:50:52.979637 kernel: kvm [1]: HYP mode not available
Jan 30 12:50:52.979644 kernel: Initialise system trusted keyrings
Jan 30 12:50:52.979652 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 12:50:52.979659 kernel: Key type asymmetric registered
Jan 30 12:50:52.979666 kernel: Asymmetric key parser 'x509' registered
Jan 30 12:50:52.979674 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 12:50:52.979688 kernel: io scheduler mq-deadline registered
Jan 30 12:50:52.979696 kernel: io scheduler kyber registered
Jan 30 12:50:52.979705 kernel: io scheduler bfq registered
Jan 30 12:50:52.979713 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 12:50:52.979720 kernel: ACPI: button: Power Button [PWRB]
Jan 30 12:50:52.979728 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 12:50:52.979798 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 12:50:52.979809 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 12:50:52.979816 kernel: thunder_xcv, ver 1.0
Jan 30 12:50:52.979823 kernel: thunder_bgx, ver 1.0
Jan 30 12:50:52.979830 kernel: nicpf, ver 1.0
Jan 30 12:50:52.979840 kernel: nicvf, ver 1.0
Jan 30 12:50:52.979923 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 12:50:52.979987 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T12:50:52 UTC (1738241452)
Jan 30 12:50:52.979997 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 12:50:52.980005 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 12:50:52.980035 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 12:50:52.980044 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 12:50:52.980051 kernel: NET: Registered PF_INET6 protocol family
Jan 30 12:50:52.980062 kernel: Segment Routing with IPv6
Jan 30 12:50:52.980070 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 12:50:52.980078 kernel: NET: Registered PF_PACKET protocol family
Jan 30 12:50:52.980085 kernel: Key type dns_resolver registered
Jan 30 12:50:52.980092 kernel: registered taskstats version 1
Jan 30 12:50:52.980100 kernel: Loading compiled-in X.509 certificates
Jan 30 12:50:52.980107 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 12:50:52.980115 kernel: Key type .fscrypt registered
Jan 30 12:50:52.980122 kernel: Key type fscrypt-provisioning registered
Jan 30 12:50:52.980131 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 12:50:52.980139 kernel: ima: Allocated hash algorithm: sha1
Jan 30 12:50:52.980147 kernel: ima: No architecture policies found
Jan 30 12:50:52.980154 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 12:50:52.980161 kernel: clk: Disabling unused clocks
Jan 30 12:50:52.980169 kernel: Freeing unused kernel memory: 39360K
Jan 30 12:50:52.980176 kernel: Run /init as init process
Jan 30 12:50:52.980183 kernel: with arguments:
Jan 30 12:50:52.980191 kernel: /init
Jan 30 12:50:52.980200 kernel: with environment:
Jan 30 12:50:52.980207 kernel: HOME=/
Jan 30 12:50:52.980214 kernel: TERM=linux
Jan 30 12:50:52.980222 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 12:50:52.980231 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 12:50:52.980240 systemd[1]: Detected virtualization kvm.
Jan 30 12:50:52.980248 systemd[1]: Detected architecture arm64.
Jan 30 12:50:52.980257 systemd[1]: Running in initrd.
Jan 30 12:50:52.980265 systemd[1]: No hostname configured, using default hostname.
Jan 30 12:50:52.980273 systemd[1]: Hostname set to .
Jan 30 12:50:52.980281 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 12:50:52.980289 systemd[1]: Queued start job for default target initrd.target.
Jan 30 12:50:52.980297 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 12:50:52.980305 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 12:50:52.980313 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 12:50:52.980322 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 12:50:52.980331 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 12:50:52.980339 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 12:50:52.980348 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 12:50:52.980356 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 12:50:52.980364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 12:50:52.980372 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 12:50:52.980382 systemd[1]: Reached target paths.target - Path Units.
Jan 30 12:50:52.980390 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 12:50:52.980398 systemd[1]: Reached target swap.target - Swaps.
Jan 30 12:50:52.980406 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 12:50:52.980413 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 12:50:52.980421 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 12:50:52.980444 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 12:50:52.980452 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 12:50:52.980460 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 12:50:52.980470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 12:50:52.980478 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 12:50:52.980486 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 12:50:52.980494 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 12:50:52.980502 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 12:50:52.980511 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 12:50:52.980519 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 12:50:52.980527 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 12:50:52.980536 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 12:50:52.980545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 12:50:52.980553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 12:50:52.980561 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 12:50:52.980569 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 12:50:52.980578 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 12:50:52.980588 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 12:50:52.980618 systemd-journald[239]: Collecting audit messages is disabled.
Jan 30 12:50:52.980639 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 12:50:52.980650 systemd-journald[239]: Journal started
Jan 30 12:50:52.980668 systemd-journald[239]: Runtime Journal (/run/log/journal/e6ad8e09f81e4a269552963491c6e0a4) is 5.9M, max 47.3M, 41.4M free.
Jan 30 12:50:52.971034 systemd-modules-load[240]: Inserted module 'overlay'
Jan 30 12:50:52.984044 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 12:50:52.986069 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 12:50:52.986102 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 12:50:52.989289 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 30 12:50:52.990037 kernel: Bridge firewalling registered
Jan 30 12:50:52.996667 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 12:50:53.003936 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 12:50:53.006677 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 12:50:53.009973 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 12:50:53.016662 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 12:50:53.019649 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 12:50:53.020633 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 12:50:53.022187 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 12:50:53.024029 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 12:50:53.036738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 12:50:53.047753 dracut-cmdline[272]: dracut-dracut-053
Jan 30 12:50:53.050926 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 12:50:53.066238 systemd-resolved[278]: Positive Trust Anchors:
Jan 30 12:50:53.066258 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 12:50:53.066290 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 12:50:53.072859 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jan 30 12:50:53.074301 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 12:50:53.075410 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 12:50:53.128084 kernel: SCSI subsystem initialized
Jan 30 12:50:53.133041 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 12:50:53.147057 kernel: iscsi: registered transport (tcp)
Jan 30 12:50:53.162076 kernel: iscsi: registered transport (qla4xxx)
Jan 30 12:50:53.162141 kernel: QLogic iSCSI HBA Driver
Jan 30 12:50:53.209333 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 12:50:53.218208 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 12:50:53.237766 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 12:50:53.237859 kernel: device-mapper: uevent: version 1.0.3
Jan 30 12:50:53.237872 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 12:50:53.290059 kernel: raid6: neonx8 gen() 15150 MB/s
Jan 30 12:50:53.307070 kernel: raid6: neonx4 gen() 14764 MB/s
Jan 30 12:50:53.324060 kernel: raid6: neonx2 gen() 10911 MB/s
Jan 30 12:50:53.341080 kernel: raid6: neonx1 gen() 10431 MB/s
Jan 30 12:50:53.358058 kernel: raid6: int64x8 gen() 6933 MB/s
Jan 30 12:50:53.375066 kernel: raid6: int64x4 gen() 7287 MB/s
Jan 30 12:50:53.392058 kernel: raid6: int64x2 gen() 6120 MB/s
Jan 30 12:50:53.409060 kernel: raid6: int64x1 gen() 5037 MB/s
Jan 30 12:50:53.409132 kernel: raid6: using algorithm neonx8 gen() 15150 MB/s
Jan 30 12:50:53.426055 kernel: raid6: .... xor() 11887 MB/s, rmw enabled
Jan 30 12:50:53.426123 kernel: raid6: using neon recovery algorithm
Jan 30 12:50:53.431076 kernel: xor: measuring software checksum speed
Jan 30 12:50:53.431141 kernel: 8regs : 18086 MB/sec
Jan 30 12:50:53.432053 kernel: 32regs : 19679 MB/sec
Jan 30 12:50:53.433055 kernel: arm64_neon : 25417 MB/sec
Jan 30 12:50:53.433083 kernel: xor: using function: arm64_neon (25417 MB/sec)
Jan 30 12:50:53.487060 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 12:50:53.498986 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 12:50:53.516261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 12:50:53.532478 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 30 12:50:53.535804 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 12:50:53.547312 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 12:50:53.566833 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jan 30 12:50:53.608060 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 12:50:53.622234 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:50:53.667961 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:50:53.678557 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 12:50:53.691671 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 12:50:53.693782 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:50:53.696161 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:50:53.698663 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:50:53.708415 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 12:50:53.714028 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jan 30 12:50:53.729642 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 12:50:53.729766 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 12:50:53.729778 kernel: GPT:9289727 != 19775487 Jan 30 12:50:53.729788 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 12:50:53.729798 kernel: GPT:9289727 != 19775487 Jan 30 12:50:53.729807 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 12:50:53.729820 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:50:53.725075 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:50:53.725207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:50:53.726372 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:50:53.727307 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 12:50:53.727488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 12:50:53.728603 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:50:53.740299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:50:53.741975 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:50:53.754125 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (510) Jan 30 12:50:53.754186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (513) Jan 30 12:50:53.757821 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 12:50:53.759119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:50:53.770190 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 12:50:53.774665 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:50:53.778667 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 12:50:53.779715 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 12:50:53.790230 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 12:50:53.792413 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 12:50:53.799884 disk-uuid[549]: Primary Header is updated. Jan 30 12:50:53.799884 disk-uuid[549]: Secondary Entries is updated. Jan 30 12:50:53.799884 disk-uuid[549]: Secondary Header is updated. Jan 30 12:50:53.806798 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:50:53.820073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 12:50:54.832634 disk-uuid[550]: The operation has completed successfully. Jan 30 12:50:54.833552 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 12:50:54.870866 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 12:50:54.870976 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 12:50:54.892242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 12:50:54.895484 sh[572]: Success Jan 30 12:50:54.911150 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 30 12:50:54.950593 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 12:50:54.967525 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 12:50:54.969670 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 12:50:54.982633 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08 Jan 30 12:50:54.982707 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:54.982719 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 12:50:54.982729 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 12:50:54.983212 kernel: BTRFS info (device dm-0): using free space tree Jan 30 12:50:54.991876 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 12:50:54.992832 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 12:50:55.002250 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 12:50:55.003677 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 30 12:50:55.011049 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:55.011102 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:55.011114 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:50:55.014046 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:50:55.022228 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 12:50:55.023694 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:55.031030 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 12:50:55.037267 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 12:50:55.114079 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:50:55.123257 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:50:55.170284 systemd-networkd[764]: lo: Link UP Jan 30 12:50:55.170294 systemd-networkd[764]: lo: Gained carrier Jan 30 12:50:55.171060 systemd-networkd[764]: Enumeration completed Jan 30 12:50:55.171778 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:50:55.175151 ignition[660]: Ignition 2.19.0 Jan 30 12:50:55.171966 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:55.175158 ignition[660]: Stage: fetch-offline Jan 30 12:50:55.171970 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 30 12:50:55.175201 ignition[660]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:55.172945 systemd-networkd[764]: eth0: Link UP Jan 30 12:50:55.175210 ignition[660]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:55.172949 systemd-networkd[764]: eth0: Gained carrier Jan 30 12:50:55.175430 ignition[660]: parsed url from cmdline: "" Jan 30 12:50:55.172956 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:55.175433 ignition[660]: no config URL provided Jan 30 12:50:55.174311 systemd[1]: Reached target network.target - Network. Jan 30 12:50:55.175438 ignition[660]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 12:50:55.175445 ignition[660]: no config at "/usr/lib/ignition/user.ign" Jan 30 12:50:55.175471 ignition[660]: op(1): [started] loading QEMU firmware config module Jan 30 12:50:55.175477 ignition[660]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 12:50:55.190075 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:50:55.194159 ignition[660]: op(1): [finished] loading QEMU firmware config module Jan 30 12:50:55.202250 ignition[660]: parsing config with SHA512: 39820290a869c0f0d3e2b3691c584160afbecd25eade71655b38f0c20a55de4166ee954922244484cfaffd0c697578efd67022b8caea8e03c4a46af2b7464b19 Jan 30 12:50:55.205806 unknown[660]: fetched base config from "system" Jan 30 12:50:55.205816 unknown[660]: fetched user config from "qemu" Jan 30 12:50:55.206177 ignition[660]: fetch-offline: fetch-offline passed Jan 30 12:50:55.206248 ignition[660]: Ignition finished successfully Jan 30 12:50:55.208247 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:50:55.210213 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Jan 30 12:50:55.222239 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 12:50:55.233348 ignition[771]: Ignition 2.19.0 Jan 30 12:50:55.233360 ignition[771]: Stage: kargs Jan 30 12:50:55.233536 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:55.233546 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:55.236347 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 12:50:55.234310 ignition[771]: kargs: kargs passed Jan 30 12:50:55.234358 ignition[771]: Ignition finished successfully Jan 30 12:50:55.252228 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 12:50:55.263397 ignition[780]: Ignition 2.19.0 Jan 30 12:50:55.263408 ignition[780]: Stage: disks Jan 30 12:50:55.263598 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:55.263608 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:55.264389 ignition[780]: disks: disks passed Jan 30 12:50:55.267082 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 12:50:55.264439 ignition[780]: Ignition finished successfully Jan 30 12:50:55.268900 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 12:50:55.269890 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:50:55.271629 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:50:55.273035 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:50:55.274759 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:50:55.290248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 12:50:55.308491 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 12:50:55.314825 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 30 12:50:55.327178 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 12:50:55.370037 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none. Jan 30 12:50:55.370210 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 12:50:55.371429 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 12:50:55.383122 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 12:50:55.385797 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 12:50:55.386784 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 12:50:55.386832 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 12:50:55.386857 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:50:55.393046 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 12:50:55.395366 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 12:50:55.399163 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Jan 30 12:50:55.401419 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:55.401468 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:55.401480 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:50:55.405056 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:50:55.406853 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 12:50:55.459446 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 12:50:55.463052 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Jan 30 12:50:55.466799 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 12:50:55.471459 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 12:50:55.563520 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 12:50:55.570186 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 12:50:55.571710 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 12:50:55.577031 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:55.600146 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 12:50:55.610872 ignition[914]: INFO : Ignition 2.19.0 Jan 30 12:50:55.610872 ignition[914]: INFO : Stage: mount Jan 30 12:50:55.612292 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:55.612292 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:55.612292 ignition[914]: INFO : mount: mount passed Jan 30 12:50:55.612292 ignition[914]: INFO : Ignition finished successfully Jan 30 12:50:55.615059 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 12:50:55.620186 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 12:50:55.981166 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 12:50:55.998047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 12:50:56.007057 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (927) Jan 30 12:50:56.009504 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a Jan 30 12:50:56.009565 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 30 12:50:56.009577 kernel: BTRFS info (device vda6): using free space tree Jan 30 12:50:56.014275 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 12:50:56.015796 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 12:50:56.036515 ignition[944]: INFO : Ignition 2.19.0 Jan 30 12:50:56.036515 ignition[944]: INFO : Stage: files Jan 30 12:50:56.040351 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:56.040351 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:56.040351 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Jan 30 12:50:56.045533 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 12:50:56.045533 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 12:50:56.050245 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 12:50:56.050245 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 12:50:56.050245 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 12:50:56.050245 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:56.046377 unknown[944]: wrote ssh authorized keys file for user: core Jan 30 12:50:56.075734 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:56.075734 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:56.075734 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 30 12:50:56.480730 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jan 30 12:50:56.733703 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 30 12:50:56.733703 ignition[944]: INFO : files: op(8): [started] processing unit "containerd.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(8): [finished] processing unit "containerd.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Jan 30 12:50:56.736573 ignition[944]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 12:50:56.777150 ignition[944]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:50:56.781243 ignition[944]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 12:50:56.782463 ignition[944]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 12:50:56.782463 ignition[944]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:50:56.782463 ignition[944]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 12:50:56.782463 ignition[944]: INFO : files: files passed Jan 30 12:50:56.782463 ignition[944]: INFO : Ignition finished successfully Jan 30 12:50:56.784125 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 12:50:56.796246 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 12:50:56.798765 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 12:50:56.801119 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 12:50:56.802070 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 12:50:56.807039 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 12:50:56.809463 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:56.809463 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:56.814067 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 12:50:56.812295 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:50:56.813922 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 12:50:56.825268 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 12:50:56.851168 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 12:50:56.851281 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 12:50:56.853007 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 12:50:56.854438 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 12:50:56.855964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 12:50:56.856921 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 12:50:56.878101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 12:50:56.888215 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 12:50:56.898949 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:50:56.899940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:50:56.901577 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 12:50:56.902957 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 12:50:56.903104 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 12:50:56.905084 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 12:50:56.906585 systemd[1]: Stopped target basic.target - Basic System. Jan 30 12:50:56.907830 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 12:50:56.909188 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 12:50:56.910744 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 12:50:56.912383 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 12:50:56.913826 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 12:50:56.915352 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 12:50:56.916839 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 12:50:56.918178 systemd[1]: Stopped target swap.target - Swaps. Jan 30 12:50:56.919313 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 12:50:56.919447 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 12:50:56.921307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:50:56.922791 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:50:56.924239 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 30 12:50:56.925845 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 12:50:56.926837 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 12:50:56.926959 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 12:50:56.929151 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 12:50:56.929273 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 12:50:56.930744 systemd[1]: Stopped target paths.target - Path Units. Jan 30 12:50:56.931894 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 12:50:56.935198 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:50:56.936180 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 12:50:56.937731 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 12:50:56.938902 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 12:50:56.938991 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 12:50:56.940142 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 12:50:56.940220 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 12:50:56.941387 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 12:50:56.941497 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 12:50:56.942876 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 12:50:56.942970 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 12:50:56.952225 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 12:50:56.953760 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 12:50:56.954462 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 30 12:50:56.954583 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:50:56.956156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 12:50:56.956250 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 12:50:56.960930 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 12:50:56.961059 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 12:50:56.971974 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 12:50:56.973989 ignition[999]: INFO : Ignition 2.19.0 Jan 30 12:50:56.973989 ignition[999]: INFO : Stage: umount Jan 30 12:50:56.975432 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 12:50:56.975432 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 12:50:56.977270 ignition[999]: INFO : umount: umount passed Jan 30 12:50:56.977270 ignition[999]: INFO : Ignition finished successfully Jan 30 12:50:56.977516 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 12:50:56.979056 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 12:50:56.982043 systemd[1]: Stopped target network.target - Network. Jan 30 12:50:56.983188 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 12:50:56.983263 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 12:50:56.984540 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 12:50:56.984584 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 12:50:56.986449 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 12:50:56.986603 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 12:50:56.988211 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 12:50:56.988275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jan 30 12:50:56.990193 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 12:50:56.993711 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 12:50:57.002050 systemd-networkd[764]: eth0: DHCPv6 lease lost Jan 30 12:50:57.004765 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 12:50:57.005665 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 12:50:57.006806 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 12:50:57.006910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 12:50:57.010324 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 12:50:57.010379 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:50:57.020274 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 12:50:57.021157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 12:50:57.021240 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 12:50:57.024070 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:50:57.024167 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:50:57.025931 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 12:50:57.025990 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 12:50:57.027915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 12:50:57.027970 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:50:57.029898 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:50:57.042388 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 12:50:57.042543 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 30 12:50:57.051439 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 12:50:57.051696 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:50:57.054509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 12:50:57.054574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 12:50:57.055462 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 12:50:57.055493 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 12:50:57.057380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 12:50:57.057435 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 12:50:57.059968 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 12:50:57.060027 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 12:50:57.062728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 12:50:57.062778 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 12:50:57.073269 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 12:50:57.074161 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 12:50:57.074229 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:50:57.078654 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 12:50:57.079092 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:50:57.081071 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 12:50:57.081135 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:50:57.084750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 30 12:50:57.084821 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 12:50:57.087097 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 12:50:57.088106 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 12:50:57.105140 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 12:50:57.105255 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 12:50:57.107063 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 12:50:57.108370 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 12:50:57.108432 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 12:50:57.122220 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 12:50:57.130231 systemd[1]: Switching root. Jan 30 12:50:57.165043 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Jan 30 12:50:57.165100 systemd-journald[239]: Journal stopped Jan 30 12:50:58.132872 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 12:50:58.132923 kernel: SELinux: policy capability open_perms=1 Jan 30 12:50:58.132935 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 12:50:58.132947 kernel: SELinux: policy capability always_check_network=0 Jan 30 12:50:58.132957 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 12:50:58.132966 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 12:50:58.132976 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 12:50:58.132985 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 12:50:58.132997 kernel: audit: type=1403 audit(1738241457.443:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 12:50:58.134187 systemd[1]: Successfully loaded SELinux policy in 36.054ms. Jan 30 12:50:58.134236 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.424ms. 
Jan 30 12:50:58.134250 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 12:50:58.134265 systemd[1]: Detected virtualization kvm. Jan 30 12:50:58.134276 systemd[1]: Detected architecture arm64. Jan 30 12:50:58.134287 systemd[1]: Detected first boot. Jan 30 12:50:58.134298 systemd[1]: Initializing machine ID from VM UUID. Jan 30 12:50:58.134308 zram_generator::config[1060]: No configuration found. Jan 30 12:50:58.134322 systemd[1]: Populated /etc with preset unit settings. Jan 30 12:50:58.134332 systemd[1]: Queued start job for default target multi-user.target. Jan 30 12:50:58.134343 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 12:50:58.134353 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 12:50:58.134364 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 12:50:58.134375 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 12:50:58.134390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 12:50:58.134401 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 12:50:58.134411 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 12:50:58.134424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 12:50:58.134434 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 12:50:58.134445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 30 12:50:58.134455 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 12:50:58.134466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 12:50:58.134477 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 12:50:58.134491 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 12:50:58.134501 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 12:50:58.134513 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 12:50:58.134524 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 12:50:58.134534 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 12:50:58.134544 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 12:50:58.134560 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 12:50:58.134570 systemd[1]: Reached target slices.target - Slice Units. Jan 30 12:50:58.134581 systemd[1]: Reached target swap.target - Swaps. Jan 30 12:50:58.134591 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 12:50:58.134603 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 12:50:58.134614 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 12:50:58.134626 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 12:50:58.134647 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 12:50:58.134659 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 12:50:58.134670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 12:50:58.134681 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 12:50:58.134691 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 12:50:58.134701 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 12:50:58.134712 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 12:50:58.134725 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 12:50:58.134735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 12:50:58.134746 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 12:50:58.134756 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 12:50:58.134766 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:58.134777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 12:50:58.134788 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 12:50:58.134798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:58.135693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:50:58.135726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:58.135738 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 12:50:58.135749 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:58.135760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 12:50:58.135771 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 30 12:50:58.135782 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 12:50:58.135792 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 12:50:58.135803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 12:50:58.135819 kernel: fuse: init (API version 7.39) Jan 30 12:50:58.135831 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 12:50:58.135842 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 12:50:58.135882 systemd-journald[1135]: Collecting audit messages is disabled. Jan 30 12:50:58.135906 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 12:50:58.135917 kernel: loop: module loaded Jan 30 12:50:58.135931 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 12:50:58.135942 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 12:50:58.135953 systemd-journald[1135]: Journal started Jan 30 12:50:58.135973 systemd-journald[1135]: Runtime Journal (/run/log/journal/e6ad8e09f81e4a269552963491c6e0a4) is 5.9M, max 47.3M, 41.4M free. Jan 30 12:50:58.141056 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 12:50:58.141118 kernel: ACPI: bus type drm_connector registered Jan 30 12:50:58.144069 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 12:50:58.144972 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 12:50:58.146046 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 12:50:58.147283 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 12:50:58.148460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 12:50:58.149712 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 30 12:50:58.149873 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 12:50:58.150997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:58.151166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:58.152302 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 12:50:58.152566 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:50:58.153730 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:58.153875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:58.155050 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 12:50:58.155182 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 12:50:58.156287 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:58.156496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:58.157702 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 12:50:58.159665 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 12:50:58.161089 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 12:50:58.171916 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 12:50:58.184242 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 12:50:58.189390 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 12:50:58.190505 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 12:50:58.192847 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 30 12:50:58.198232 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 12:50:58.199239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:50:58.203549 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 12:50:58.204538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:50:58.210063 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:50:58.212381 systemd-journald[1135]: Time spent on flushing to /var/log/journal/e6ad8e09f81e4a269552963491c6e0a4 is 20.362ms for 826 entries. Jan 30 12:50:58.212381 systemd-journald[1135]: System Journal (/var/log/journal/e6ad8e09f81e4a269552963491c6e0a4) is 8.0M, max 195.6M, 187.6M free. Jan 30 12:50:58.254563 systemd-journald[1135]: Received client request to flush runtime journal. Jan 30 12:50:58.216333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 12:50:58.222150 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 12:50:58.226832 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 12:50:58.228165 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 12:50:58.229321 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 12:50:58.243266 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 12:50:58.248625 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 12:50:58.250161 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:50:58.251858 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 30 12:50:58.257566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 12:50:58.258485 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 12:50:58.265713 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 12:50:58.265732 systemd-tmpfiles[1193]: ACLs are not supported, ignoring. Jan 30 12:50:58.270332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 12:50:58.276340 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 12:50:58.312007 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 12:50:58.322290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 12:50:58.337140 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 30 12:50:58.337493 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Jan 30 12:50:58.341654 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 12:50:58.733962 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 12:50:58.742219 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 12:50:58.763514 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Jan 30 12:50:58.782318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 12:50:58.794291 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 12:50:58.811871 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jan 30 12:50:58.829299 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 30 12:50:58.832416 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1231) Jan 30 12:50:58.857345 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 12:50:58.879433 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 12:50:58.939332 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 12:50:58.940598 systemd-networkd[1234]: lo: Link UP Jan 30 12:50:58.940605 systemd-networkd[1234]: lo: Gained carrier Jan 30 12:50:58.941330 systemd-networkd[1234]: Enumeration completed Jan 30 12:50:58.941461 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 12:50:58.941802 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:58.941810 systemd-networkd[1234]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 12:50:58.942530 systemd-networkd[1234]: eth0: Link UP Jan 30 12:50:58.942534 systemd-networkd[1234]: eth0: Gained carrier Jan 30 12:50:58.942547 systemd-networkd[1234]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 12:50:58.945327 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 12:50:58.946578 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 12:50:58.950162 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 12:50:58.963102 systemd-networkd[1234]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 12:50:58.976259 lvm[1262]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:50:58.984894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 12:50:59.007571 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 12:50:59.009045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 12:50:59.018291 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 12:50:59.023431 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 12:50:59.056669 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 12:50:59.057916 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 12:50:59.058949 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 12:50:59.058978 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 12:50:59.059800 systemd[1]: Reached target machines.target - Containers. Jan 30 12:50:59.061725 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 12:50:59.077259 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 12:50:59.079580 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 12:50:59.080658 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:59.081731 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 12:50:59.084475 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 12:50:59.086903 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 12:50:59.093244 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 30 12:50:59.103184 kernel: loop0: detected capacity change from 0 to 114328 Jan 30 12:50:59.138400 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 12:50:59.146066 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 12:50:59.161684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 12:50:59.162476 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 12:50:59.180049 kernel: loop1: detected capacity change from 0 to 194096 Jan 30 12:50:59.228062 kernel: loop2: detected capacity change from 0 to 114432 Jan 30 12:50:59.276041 kernel: loop3: detected capacity change from 0 to 114328 Jan 30 12:50:59.283034 kernel: loop4: detected capacity change from 0 to 194096 Jan 30 12:50:59.292070 kernel: loop5: detected capacity change from 0 to 114432 Jan 30 12:50:59.295606 (sd-merge)[1290]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 12:50:59.296032 (sd-merge)[1290]: Merged extensions into '/usr'. Jan 30 12:50:59.300066 systemd[1]: Reloading requested from client PID 1278 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 12:50:59.300084 systemd[1]: Reloading... Jan 30 12:50:59.344059 zram_generator::config[1319]: No configuration found. Jan 30 12:50:59.460821 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:50:59.489670 ldconfig[1274]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 12:50:59.507062 systemd[1]: Reloading finished in 206 ms. Jan 30 12:50:59.523365 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 12:50:59.524670 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 30 12:50:59.540239 systemd[1]: Starting ensure-sysext.service... Jan 30 12:50:59.542223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 12:50:59.547762 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jan 30 12:50:59.547778 systemd[1]: Reloading... Jan 30 12:50:59.562049 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 12:50:59.562359 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 12:50:59.563173 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 12:50:59.563453 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 30 12:50:59.563510 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jan 30 12:50:59.570762 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:50:59.570778 systemd-tmpfiles[1361]: Skipping /boot Jan 30 12:50:59.578379 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 12:50:59.578392 systemd-tmpfiles[1361]: Skipping /boot Jan 30 12:50:59.608101 zram_generator::config[1390]: No configuration found. Jan 30 12:50:59.709359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:50:59.754213 systemd[1]: Reloading finished in 206 ms. Jan 30 12:50:59.767181 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 12:50:59.787218 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:50:59.789747 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jan 30 12:50:59.792053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 12:50:59.797250 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 12:50:59.812277 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 12:50:59.816627 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 12:50:59.821274 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:59.823437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:59.828540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:59.834201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:59.836912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:59.845292 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 12:50:59.850194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:59.850375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:59.851545 augenrules[1462]: No rules Jan 30 12:50:59.851969 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:59.852149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:59.853781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:59.854001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:59.855550 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:50:59.866531 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 30 12:50:59.868196 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 12:50:59.870994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:59.880378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:59.883331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:59.887397 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:59.888365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:59.889177 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:59.889360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:59.894413 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:59.894600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:59.897845 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:59.898157 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:59.898290 systemd-resolved[1436]: Positive Trust Anchors: Jan 30 12:50:59.898308 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 12:50:59.898340 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 12:50:59.904516 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jan 30 12:50:59.905941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 12:50:59.916363 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 12:50:59.918497 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 12:50:59.920614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 12:50:59.925343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 12:50:59.926304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 12:50:59.926972 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 12:50:59.928734 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 12:50:59.939405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 12:50:59.939568 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 12:50:59.941206 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 30 12:50:59.941380 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 12:50:59.942724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 12:50:59.942907 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 12:50:59.944506 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 12:50:59.944743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 12:50:59.948517 systemd[1]: Finished ensure-sysext.service. Jan 30 12:50:59.953235 systemd[1]: Reached target network.target - Network. Jan 30 12:50:59.954005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 12:50:59.955394 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 12:50:59.955475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 12:50:59.973217 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 12:50:59.974206 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 12:51:00.021484 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 12:51:00.022805 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 12:51:00.022858 systemd-timesyncd[1504]: Initial clock synchronization to Thu 2025-01-30 12:51:00.174881 UTC. Jan 30 12:51:00.022910 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 12:51:00.023845 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Jan 30 12:51:00.024916 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 12:51:00.026050 systemd-networkd[1234]: eth0: Gained IPv6LL Jan 30 12:51:00.026657 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 12:51:00.027716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 12:51:00.027755 systemd[1]: Reached target paths.target - Path Units. Jan 30 12:51:00.028486 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 12:51:00.029574 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 12:51:00.030542 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 12:51:00.031485 systemd[1]: Reached target timers.target - Timer Units. Jan 30 12:51:00.032992 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 12:51:00.035492 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 12:51:00.037600 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 12:51:00.046398 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 12:51:00.047578 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 12:51:00.049116 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 12:51:00.049974 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 12:51:00.050747 systemd[1]: Reached target basic.target - Basic System. Jan 30 12:51:00.051684 systemd[1]: System is tainted: cgroupsv1 Jan 30 12:51:00.051738 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 30 12:51:00.051764 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 12:51:00.053160 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 12:51:00.055416 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:51:00.057515 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 12:51:00.062179 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 12:51:00.064396 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 12:51:00.067097 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 12:51:00.072209 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:51:00.075234 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 12:51:00.079528 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 12:51:00.090592 jq[1514]: false Jan 30 12:51:00.091327 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 12:51:00.097070 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 12:51:00.104361 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 30 12:51:00.108774 extend-filesystems[1516]: Found loop3 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found loop4 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found loop5 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda1 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda2 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda3 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found usr Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda4 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda6 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda7 Jan 30 12:51:00.108774 extend-filesystems[1516]: Found vda9 Jan 30 12:51:00.108774 extend-filesystems[1516]: Checking size of /dev/vda9 Jan 30 12:51:00.130414 dbus-daemon[1512]: [system] SELinux support is enabled Jan 30 12:51:00.142251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1233) Jan 30 12:51:00.115359 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 12:51:00.117831 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 12:51:00.122730 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 12:51:00.130629 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 12:51:00.142794 extend-filesystems[1516]: Resized partition /dev/vda9 Jan 30 12:51:00.144896 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 12:51:00.145238 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 12:51:00.149739 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 12:51:00.150001 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 30 12:51:00.151172 jq[1539]: true Jan 30 12:51:00.164417 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 12:51:00.164695 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 12:51:00.179364 extend-filesystems[1548]: resize2fs 1.47.1 (20-May-2024) Jan 30 12:51:00.180423 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 12:51:00.185016 jq[1553]: true Jan 30 12:51:00.191432 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:51:00.191713 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:51:00.194795 update_engine[1537]: I20250130 12:51:00.192724 1537 main.cc:92] Flatcar Update Engine starting Jan 30 12:51:00.196447 (ntainerd)[1564]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 12:51:00.198785 update_engine[1537]: I20250130 12:51:00.197444 1537 update_check_scheduler.cc:74] Next update check in 8m26s Jan 30 12:51:00.200604 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 12:51:00.205380 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 12:51:00.212369 systemd-logind[1526]: New seat seat0. Jan 30 12:51:00.219268 systemd[1]: Started update-engine.service - Update Engine. Jan 30 12:51:00.220661 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 12:51:00.220773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 12:51:00.220840 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 30 12:51:00.223669 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 12:51:00.223697 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 12:51:00.225494 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 12:51:00.229282 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 12:51:00.230428 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 12:51:00.258038 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 12:51:00.279423 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 12:51:00.279423 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 12:51:00.279423 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 12:51:00.288033 extend-filesystems[1516]: Resized filesystem in /dev/vda9 Jan 30 12:51:00.289956 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 12:51:00.290292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 12:51:00.294402 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Jan 30 12:51:00.294987 locksmithd[1581]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 12:51:00.296346 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 12:51:00.298568 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jan 30 12:51:00.463957 containerd[1564]: time="2025-01-30T12:51:00.463126440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 12:51:00.506565 containerd[1564]: time="2025-01-30T12:51:00.505390240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.508167 containerd[1564]: time="2025-01-30T12:51:00.508118080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:51:00.509032 containerd[1564]: time="2025-01-30T12:51:00.508814760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 12:51:00.509032 containerd[1564]: time="2025-01-30T12:51:00.508845800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 12:51:00.509138 containerd[1564]: time="2025-01-30T12:51:00.509040800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 12:51:00.509138 containerd[1564]: time="2025-01-30T12:51:00.509061080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.509138 containerd[1564]: time="2025-01-30T12:51:00.509121160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:51:00.509138 containerd[1564]: time="2025-01-30T12:51:00.509133560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511196 containerd[1564]: time="2025-01-30T12:51:00.511152160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511245 containerd[1564]: time="2025-01-30T12:51:00.511191480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511265 containerd[1564]: time="2025-01-30T12:51:00.511242840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511265 containerd[1564]: time="2025-01-30T12:51:00.511256120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511411 containerd[1564]: time="2025-01-30T12:51:00.511381480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511646 containerd[1564]: time="2025-01-30T12:51:00.511606360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511828 containerd[1564]: time="2025-01-30T12:51:00.511800080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 12:51:00.511828 containerd[1564]: time="2025-01-30T12:51:00.511821400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 12:51:00.511937 containerd[1564]: time="2025-01-30T12:51:00.511920240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 12:51:00.511990 containerd[1564]: time="2025-01-30T12:51:00.511978560Z" level=info msg="metadata content store policy set" policy=shared Jan 30 12:51:00.516349 containerd[1564]: time="2025-01-30T12:51:00.516302360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 12:51:00.516440 containerd[1564]: time="2025-01-30T12:51:00.516365560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 12:51:00.516440 containerd[1564]: time="2025-01-30T12:51:00.516386960Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 12:51:00.516440 containerd[1564]: time="2025-01-30T12:51:00.516405240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 12:51:00.516440 containerd[1564]: time="2025-01-30T12:51:00.516423680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 12:51:00.516631 containerd[1564]: time="2025-01-30T12:51:00.516595080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517049400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517204200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517221000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517235680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517251800Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517265120Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517279160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517294440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517308800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517322680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517335520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517347720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517371360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jan 30 12:51:00.519449 containerd[1564]: time="2025-01-30T12:51:00.517386960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517400760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517414040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517430280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517444520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517461520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517477760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517491200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517505040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517517480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517528880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517541280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517561160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517587040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517599560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.519812 containerd[1564]: time="2025-01-30T12:51:00.517611040Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517742640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517762400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517774400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517786560Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517796280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517812640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517822920Z" level=info msg="NRI interface is disabled by configuration." Jan 30 12:51:00.520075 containerd[1564]: time="2025-01-30T12:51:00.517834160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 12:51:00.520215 containerd[1564]: time="2025-01-30T12:51:00.518199640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 12:51:00.520215 containerd[1564]: time="2025-01-30T12:51:00.518266640Z" level=info msg="Connect containerd service" Jan 30 12:51:00.520215 containerd[1564]: time="2025-01-30T12:51:00.518299360Z" level=info msg="using legacy CRI server" Jan 30 12:51:00.520215 containerd[1564]: time="2025-01-30T12:51:00.518306880Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 12:51:00.520215 containerd[1564]: time="2025-01-30T12:51:00.518399560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 12:51:00.520432 containerd[1564]: time="2025-01-30T12:51:00.520380040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520678720Z" level=info msg="Start subscribing containerd event" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520738600Z" level=info msg="Start recovering state" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520835200Z" level=info msg="Start event monitor" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520858560Z" level=info msg="Start snapshots syncer" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520871440Z" level=info msg="Start cni network conf syncer for default" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520884920Z" level=info msg="Start streaming server" Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520944040Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 12:51:00.522048 containerd[1564]: time="2025-01-30T12:51:00.520992880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 12:51:00.521240 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 12:51:00.522500 containerd[1564]: time="2025-01-30T12:51:00.522458760Z" level=info msg="containerd successfully booted in 0.060967s" Jan 30 12:51:00.822433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:51:00.827140 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 12:51:01.050111 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 12:51:01.072820 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 12:51:01.086449 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 12:51:01.094248 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 30 12:51:01.094526 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 12:51:01.106532 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 12:51:01.118354 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 12:51:01.137549 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 12:51:01.140210 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 12:51:01.141727 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 12:51:01.142862 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 12:51:01.144229 systemd[1]: Startup finished in 5.345s (kernel) + 3.741s (userspace) = 9.086s. Jan 30 12:51:01.461325 kubelet[1617]: E0130 12:51:01.461252 1617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 12:51:01.464169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 12:51:01.464379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 12:51:05.991822 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 12:51:06.001296 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:38820.service - OpenSSH per-connection server daemon (10.0.0.1:38820). Jan 30 12:51:06.055727 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 38820 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.057755 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.068152 systemd-logind[1526]: New session 1 of user core. Jan 30 12:51:06.068934 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 30 12:51:06.078281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 12:51:06.088271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 12:51:06.090472 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 12:51:06.098494 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 12:51:06.186639 systemd[1657]: Queued start job for default target default.target. Jan 30 12:51:06.187015 systemd[1657]: Created slice app.slice - User Application Slice. Jan 30 12:51:06.187051 systemd[1657]: Reached target paths.target - Paths. Jan 30 12:51:06.187063 systemd[1657]: Reached target timers.target - Timers. Jan 30 12:51:06.206181 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 12:51:06.212716 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 12:51:06.212781 systemd[1657]: Reached target sockets.target - Sockets. Jan 30 12:51:06.212794 systemd[1657]: Reached target basic.target - Basic System. Jan 30 12:51:06.212835 systemd[1657]: Reached target default.target - Main User Target. Jan 30 12:51:06.212861 systemd[1657]: Startup finished in 108ms. Jan 30 12:51:06.213134 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 12:51:06.214801 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 12:51:06.273312 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:38826.service - OpenSSH per-connection server daemon (10.0.0.1:38826). Jan 30 12:51:06.303170 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 38826 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.304543 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.309094 systemd-logind[1526]: New session 2 of user core. 
Jan 30 12:51:06.320368 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 12:51:06.376972 sshd[1669]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:06.391329 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:38842.service - OpenSSH per-connection server daemon (10.0.0.1:38842). Jan 30 12:51:06.391744 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:38826.service: Deactivated successfully. Jan 30 12:51:06.393729 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Jan 30 12:51:06.394282 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 12:51:06.395526 systemd-logind[1526]: Removed session 2. Jan 30 12:51:06.422749 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 38842 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.424303 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.429494 systemd-logind[1526]: New session 3 of user core. Jan 30 12:51:06.440311 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 12:51:06.489181 sshd[1674]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:06.504351 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856). Jan 30 12:51:06.504792 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:38842.service: Deactivated successfully. Jan 30 12:51:06.507486 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Jan 30 12:51:06.508173 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 12:51:06.509880 systemd-logind[1526]: Removed session 3. Jan 30 12:51:06.535416 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.536789 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.542032 systemd-logind[1526]: New session 4 of user core. 
Jan 30 12:51:06.555330 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 12:51:06.611167 sshd[1682]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:06.620347 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862). Jan 30 12:51:06.620839 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:38856.service: Deactivated successfully. Jan 30 12:51:06.622687 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Jan 30 12:51:06.623322 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 12:51:06.632219 systemd-logind[1526]: Removed session 4. Jan 30 12:51:06.660564 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.662257 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.667390 systemd-logind[1526]: New session 5 of user core. Jan 30 12:51:06.677354 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 12:51:06.740559 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 12:51:06.740969 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:51:06.761940 sudo[1697]: pam_unix(sudo:session): session closed for user root Jan 30 12:51:06.764053 sshd[1690]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:06.778344 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:38864.service - OpenSSH per-connection server daemon (10.0.0.1:38864). Jan 30 12:51:06.778758 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:38862.service: Deactivated successfully. Jan 30 12:51:06.781243 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Jan 30 12:51:06.782273 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 12:51:06.783588 systemd-logind[1526]: Removed session 5. 
Jan 30 12:51:06.811401 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 38864 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.813291 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.819811 systemd-logind[1526]: New session 6 of user core. Jan 30 12:51:06.828350 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 12:51:06.881640 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 12:51:06.881944 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:51:06.885354 sudo[1707]: pam_unix(sudo:session): session closed for user root Jan 30 12:51:06.891951 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 12:51:06.892279 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:51:06.909301 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 12:51:06.910988 auditctl[1710]: No rules Jan 30 12:51:06.911850 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 12:51:06.912116 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 12:51:06.913938 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 12:51:06.941845 augenrules[1729]: No rules Jan 30 12:51:06.943310 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 12:51:06.944331 sudo[1706]: pam_unix(sudo:session): session closed for user root Jan 30 12:51:06.946360 sshd[1699]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:06.959365 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:38874.service - OpenSSH per-connection server daemon (10.0.0.1:38874). 
Jan 30 12:51:06.959843 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:38864.service: Deactivated successfully. Jan 30 12:51:06.962450 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Jan 30 12:51:06.962846 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 12:51:06.964934 systemd-logind[1526]: Removed session 6. Jan 30 12:51:06.990521 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38874 ssh2: RSA SHA256:L/o4MUo/PVZc4DGxUVgHzL5cb5Nt9LAG1MaqWWrUsp0 Jan 30 12:51:06.992273 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 12:51:06.997879 systemd-logind[1526]: New session 7 of user core. Jan 30 12:51:07.018419 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 12:51:07.076115 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 12:51:07.076418 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 12:51:07.106451 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 12:51:07.127397 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 12:51:07.127646 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 12:51:07.703484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:51:07.712301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:51:07.729996 systemd[1]: Reloading requested from client PID 1793 ('systemctl') (unit session-7.scope)... Jan 30 12:51:07.730015 systemd[1]: Reloading... Jan 30 12:51:07.794046 zram_generator::config[1834]: No configuration found. Jan 30 12:51:07.916165 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 12:51:07.965037 systemd[1]: Reloading finished in 234 ms. 
Jan 30 12:51:08.004818 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 12:51:08.004886 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 12:51:08.005158 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:51:08.007569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 12:51:08.111313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 12:51:08.116175 (kubelet)[1889]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 12:51:08.159565 kubelet[1889]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 12:51:08.159565 kubelet[1889]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 12:51:08.159565 kubelet[1889]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 12:51:08.160693 kubelet[1889]: I0130 12:51:08.160639 1889 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 12:51:08.592578 kubelet[1889]: I0130 12:51:08.592535 1889 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 12:51:08.592578 kubelet[1889]: I0130 12:51:08.592567 1889 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 12:51:08.592829 kubelet[1889]: I0130 12:51:08.592810 1889 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 12:51:08.629594 kubelet[1889]: I0130 12:51:08.629462 1889 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 12:51:08.649332 kubelet[1889]: I0130 12:51:08.649301 1889 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 12:51:08.651941 kubelet[1889]: I0130 12:51:08.651505 1889 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 12:51:08.651941 kubelet[1889]: I0130 12:51:08.651562 1889 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.39","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 12:51:08.652131 kubelet[1889]: I0130 12:51:08.651983 1889 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 12:51:08.652131 kubelet[1889]: I0130 12:51:08.651994 1889 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 12:51:08.652322 kubelet[1889]: I0130 12:51:08.652283 1889 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:51:08.658487 kubelet[1889]: I0130 12:51:08.658459 1889 kubelet.go:400] "Attempting to sync node with 
API server" Jan 30 12:51:08.659421 kubelet[1889]: I0130 12:51:08.658588 1889 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 12:51:08.659421 kubelet[1889]: I0130 12:51:08.659120 1889 kubelet.go:312] "Adding apiserver pod source" Jan 30 12:51:08.659421 kubelet[1889]: I0130 12:51:08.659274 1889 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 12:51:08.659613 kubelet[1889]: E0130 12:51:08.659550 1889 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:08.659975 kubelet[1889]: E0130 12:51:08.659758 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:08.662916 kubelet[1889]: I0130 12:51:08.662880 1889 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 12:51:08.663471 kubelet[1889]: I0130 12:51:08.663457 1889 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 12:51:08.663651 kubelet[1889]: W0130 12:51:08.663626 1889 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 12:51:08.664985 kubelet[1889]: I0130 12:51:08.664608 1889 server.go:1264] "Started kubelet" Jan 30 12:51:08.665584 kubelet[1889]: I0130 12:51:08.665147 1889 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 12:51:08.666818 kubelet[1889]: I0130 12:51:08.666715 1889 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 12:51:08.667055 kubelet[1889]: I0130 12:51:08.667033 1889 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 12:51:08.667150 kubelet[1889]: I0130 12:51:08.667125 1889 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 12:51:08.667971 kubelet[1889]: W0130 12:51:08.667942 1889 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.39" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 12:51:08.668120 kubelet[1889]: E0130 12:51:08.668101 1889 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.39" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 12:51:08.669631 kubelet[1889]: I0130 12:51:08.669521 1889 server.go:455] "Adding debug handlers to kubelet server" Jan 30 12:51:08.673942 kubelet[1889]: W0130 12:51:08.670253 1889 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 12:51:08.673942 kubelet[1889]: E0130 12:51:08.670277 1889 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 12:51:08.673942 kubelet[1889]: 
E0130 12:51:08.670350 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:08.673942 kubelet[1889]: I0130 12:51:08.670615 1889 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 12:51:08.673942 kubelet[1889]: I0130 12:51:08.670688 1889 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 12:51:08.673942 kubelet[1889]: E0130 12:51:08.671384 1889 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.39.181f7967f99abbf9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.39,UID:10.0.0.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.39,},FirstTimestamp:2025-01-30 12:51:08.664560633 +0000 UTC m=+0.545028209,LastTimestamp:2025-01-30 12:51:08.664560633 +0000 UTC m=+0.545028209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.39,}" Jan 30 12:51:08.673942 kubelet[1889]: I0130 12:51:08.671888 1889 reconciler.go:26] "Reconciler: start to sync state" Jan 30 12:51:08.674513 kubelet[1889]: E0130 12:51:08.674190 1889 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.39\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 12:51:08.674636 kubelet[1889]: W0130 12:51:08.674600 1889 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API 
group "storage.k8s.io" at the cluster scope Jan 30 12:51:08.674636 kubelet[1889]: E0130 12:51:08.674635 1889 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 12:51:08.675135 kubelet[1889]: I0130 12:51:08.675110 1889 factory.go:221] Registration of the systemd container factory successfully Jan 30 12:51:08.675416 kubelet[1889]: I0130 12:51:08.675206 1889 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 12:51:08.676434 kubelet[1889]: E0130 12:51:08.676396 1889 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 12:51:08.676808 kubelet[1889]: I0130 12:51:08.676741 1889 factory.go:221] Registration of the containerd container factory successfully Jan 30 12:51:08.688040 kubelet[1889]: E0130 12:51:08.686223 1889 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.39.181f7967fa4f232b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.39,UID:10.0.0.39,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.39,},FirstTimestamp:2025-01-30 12:51:08.676383531 +0000 UTC m=+0.556851108,LastTimestamp:2025-01-30 12:51:08.676383531 +0000 UTC m=+0.556851108,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.39,}" Jan 30 12:51:08.693313 kubelet[1889]: I0130 12:51:08.693286 1889 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 12:51:08.693313 kubelet[1889]: I0130 12:51:08.693303 1889 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 12:51:08.693313 kubelet[1889]: I0130 12:51:08.693323 1889 state_mem.go:36] "Initialized new in-memory state store" Jan 30 12:51:08.772529 kubelet[1889]: I0130 12:51:08.772473 1889 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.39" Jan 30 12:51:08.774453 kubelet[1889]: I0130 12:51:08.774415 1889 policy_none.go:49] "None policy: Start" Jan 30 12:51:08.775884 kubelet[1889]: I0130 12:51:08.775855 1889 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 12:51:08.775884 kubelet[1889]: I0130 12:51:08.775881 1889 state_mem.go:35] "Initializing new in-memory state store" Jan 30 12:51:08.782977 kubelet[1889]: I0130 12:51:08.782934 1889 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.39" Jan 30 12:51:08.789294 kubelet[1889]: I0130 12:51:08.789174 1889 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 12:51:08.790116 kubelet[1889]: I0130 12:51:08.789455 1889 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 12:51:08.790116 kubelet[1889]: I0130 12:51:08.789571 1889 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 12:51:08.792004 kubelet[1889]: E0130 12:51:08.791952 1889 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.39\" not found" Jan 30 12:51:08.801587 kubelet[1889]: E0130 12:51:08.801528 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:08.805503 kubelet[1889]: I0130 
12:51:08.805442 1889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 12:51:08.806885 kubelet[1889]: I0130 12:51:08.806836 1889 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 12:51:08.807009 kubelet[1889]: I0130 12:51:08.806998 1889 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 12:51:08.807216 kubelet[1889]: I0130 12:51:08.807165 1889 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 12:51:08.807457 kubelet[1889]: E0130 12:51:08.807437 1889 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 30 12:51:08.901883 kubelet[1889]: E0130 12:51:08.901824 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.002135 kubelet[1889]: E0130 12:51:09.002089 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.102324 kubelet[1889]: E0130 12:51:09.102275 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.136243 sudo[1742]: pam_unix(sudo:session): session closed for user root Jan 30 12:51:09.137845 sshd[1735]: pam_unix(sshd:session): session closed for user core Jan 30 12:51:09.141306 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:38874.service: Deactivated successfully. Jan 30 12:51:09.143259 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 12:51:09.143636 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Jan 30 12:51:09.144882 systemd-logind[1526]: Removed session 7. 
Jan 30 12:51:09.202899 kubelet[1889]: E0130 12:51:09.202745 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.303003 kubelet[1889]: E0130 12:51:09.302913 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.403244 kubelet[1889]: E0130 12:51:09.403166 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.504454 kubelet[1889]: E0130 12:51:09.504299 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.595508 kubelet[1889]: I0130 12:51:09.595444 1889 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 12:51:09.595657 kubelet[1889]: W0130 12:51:09.595623 1889 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 12:51:09.604782 kubelet[1889]: E0130 12:51:09.604739 1889 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.39\" not found" Jan 30 12:51:09.660655 kubelet[1889]: E0130 12:51:09.660586 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:09.706214 kubelet[1889]: I0130 12:51:09.706152 1889 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 12:51:09.706594 containerd[1564]: time="2025-01-30T12:51:09.706535974Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 30 12:51:09.706899 kubelet[1889]: I0130 12:51:09.706757 1889 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 12:51:10.661352 kubelet[1889]: I0130 12:51:10.661286 1889 apiserver.go:52] "Watching apiserver" Jan 30 12:51:10.661352 kubelet[1889]: E0130 12:51:10.661319 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:10.668161 kubelet[1889]: I0130 12:51:10.668106 1889 topology_manager.go:215] "Topology Admit Handler" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" podNamespace="kube-system" podName="cilium-cbgr5" Jan 30 12:51:10.668338 kubelet[1889]: I0130 12:51:10.668294 1889 topology_manager.go:215] "Topology Admit Handler" podUID="293aca2b-b268-43bc-8bea-03c0c5516b15" podNamespace="kube-system" podName="kube-proxy-rs5f8" Jan 30 12:51:10.671469 kubelet[1889]: I0130 12:51:10.671373 1889 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 12:51:10.682998 kubelet[1889]: I0130 12:51:10.682952 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-cgroup\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683161 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cni-path\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683194 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/da1ce413-1b6b-4c40-9eff-d692a2a68331-clustermesh-secrets\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683211 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-hubble-tls\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683229 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dghjf\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-kube-api-access-dghjf\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683245 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-hostproc\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683513 kubelet[1889]: I0130 12:51:10.683259 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-lib-modules\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683280 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-kernel\") pod \"cilium-cbgr5\" (UID: 
\"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683301 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/293aca2b-b268-43bc-8bea-03c0c5516b15-xtables-lock\") pod \"kube-proxy-rs5f8\" (UID: \"293aca2b-b268-43bc-8bea-03c0c5516b15\") " pod="kube-system/kube-proxy-rs5f8" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683316 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/293aca2b-b268-43bc-8bea-03c0c5516b15-lib-modules\") pod \"kube-proxy-rs5f8\" (UID: \"293aca2b-b268-43bc-8bea-03c0c5516b15\") " pod="kube-system/kube-proxy-rs5f8" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683330 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-run\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683344 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-xtables-lock\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683689 kubelet[1889]: I0130 12:51:10.683373 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-etc-cni-netd\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683798 kubelet[1889]: I0130 12:51:10.683395 
1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-config-path\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683798 kubelet[1889]: I0130 12:51:10.683416 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-net\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.683798 kubelet[1889]: I0130 12:51:10.683431 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/293aca2b-b268-43bc-8bea-03c0c5516b15-kube-proxy\") pod \"kube-proxy-rs5f8\" (UID: \"293aca2b-b268-43bc-8bea-03c0c5516b15\") " pod="kube-system/kube-proxy-rs5f8" Jan 30 12:51:10.683798 kubelet[1889]: I0130 12:51:10.683447 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc7b5\" (UniqueName: \"kubernetes.io/projected/293aca2b-b268-43bc-8bea-03c0c5516b15-kube-api-access-gc7b5\") pod \"kube-proxy-rs5f8\" (UID: \"293aca2b-b268-43bc-8bea-03c0c5516b15\") " pod="kube-system/kube-proxy-rs5f8" Jan 30 12:51:10.683798 kubelet[1889]: I0130 12:51:10.683463 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-bpf-maps\") pod \"cilium-cbgr5\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") " pod="kube-system/cilium-cbgr5" Jan 30 12:51:10.971380 kubelet[1889]: E0130 12:51:10.971234 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:10.972222 containerd[1564]: time="2025-01-30T12:51:10.972166935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs5f8,Uid:293aca2b-b268-43bc-8bea-03c0c5516b15,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:10.974292 kubelet[1889]: E0130 12:51:10.974263 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:10.977411 containerd[1564]: time="2025-01-30T12:51:10.977338221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbgr5,Uid:da1ce413-1b6b-4c40-9eff-d692a2a68331,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:11.515121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount211486379.mount: Deactivated successfully. Jan 30 12:51:11.523099 containerd[1564]: time="2025-01-30T12:51:11.523038845Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:51:11.525150 containerd[1564]: time="2025-01-30T12:51:11.525098781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 12:51:11.525880 containerd[1564]: time="2025-01-30T12:51:11.525712718Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:51:11.526973 containerd[1564]: time="2025-01-30T12:51:11.526934482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 12:51:11.527301 containerd[1564]: time="2025-01-30T12:51:11.527270796Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:51:11.530753 containerd[1564]: time="2025-01-30T12:51:11.530691147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 12:51:11.531835 containerd[1564]: time="2025-01-30T12:51:11.531783506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.523693ms" Jan 30 12:51:11.532584 containerd[1564]: time="2025-01-30T12:51:11.532411111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.920992ms" Jan 30 12:51:11.661872 kubelet[1889]: E0130 12:51:11.661816 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:11.667238 containerd[1564]: time="2025-01-30T12:51:11.667098104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:11.667238 containerd[1564]: time="2025-01-30T12:51:11.667172715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:11.667452 containerd[1564]: time="2025-01-30T12:51:11.667197961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:11.667452 containerd[1564]: time="2025-01-30T12:51:11.667292592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:11.670516 containerd[1564]: time="2025-01-30T12:51:11.670408185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:11.670516 containerd[1564]: time="2025-01-30T12:51:11.670461893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:11.670516 containerd[1564]: time="2025-01-30T12:51:11.670475762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:11.670817 containerd[1564]: time="2025-01-30T12:51:11.670572081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:11.793222 containerd[1564]: time="2025-01-30T12:51:11.793099368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs5f8,Uid:293aca2b-b268-43bc-8bea-03c0c5516b15,Namespace:kube-system,Attempt:0,} returns sandbox id \"74a4b7f387f7c98c1309cb7c61b3b16a1456fcb9ddee63c876d24a256b6b487c\"" Jan 30 12:51:11.795282 kubelet[1889]: E0130 12:51:11.795255 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:11.796709 containerd[1564]: time="2025-01-30T12:51:11.796463318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 12:51:11.796787 containerd[1564]: time="2025-01-30T12:51:11.796742749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cbgr5,Uid:da1ce413-1b6b-4c40-9eff-d692a2a68331,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\"" Jan 30 12:51:11.797541 kubelet[1889]: E0130 12:51:11.797509 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:12.662169 kubelet[1889]: E0130 12:51:12.662123 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:12.806927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount96461250.mount: Deactivated successfully. 
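The repeated kubelet `dns.go:153` warnings above come from the Linux resolver's three-nameserver ceiling: kubelet truncates the node's nameserver list and logs which entries survive as the "applied nameserver line". A minimal sketch of that truncation behavior, assuming the glibc-style cap of 3; the function name and the fourth nameserver are illustrative, not kubelet's actual code:

```python
# Sketch of the truncation behind the "Nameserver limits exceeded"
# warnings. MAXNS mirrors the resolver's compile-time cap of 3
# nameservers; apply_nameserver_limit is a hypothetical helper,
# not kubelet's real implementation.
MAXNS = 3

def apply_nameserver_limit(nameservers):
    """Split a nameserver list into the entries that will be applied
    and the entries that must be omitted, as the warning describes."""
    applied = nameservers[:MAXNS]
    omitted = nameservers[MAXNS:]
    return applied, omitted

# The first three addresses match the log; 8.8.4.4 is an assumed
# fourth entry standing in for whatever was actually dropped.
applied, omitted = apply_nameserver_limit(
    ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
)
print("applied nameserver line:", " ".join(applied))
```

With any fourth entry present, the applied line collapses to exactly the three addresses the log reports.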
Jan 30 12:51:13.020504 containerd[1564]: time="2025-01-30T12:51:13.020351277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:13.024131 containerd[1564]: time="2025-01-30T12:51:13.024072961Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 12:51:13.025263 containerd[1564]: time="2025-01-30T12:51:13.025230539Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:13.029312 containerd[1564]: time="2025-01-30T12:51:13.029213621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:13.029699 containerd[1564]: time="2025-01-30T12:51:13.029662976Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.233084818s" Jan 30 12:51:13.029748 containerd[1564]: time="2025-01-30T12:51:13.029701042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 12:51:13.031373 containerd[1564]: time="2025-01-30T12:51:13.031288982Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 12:51:13.032414 containerd[1564]: time="2025-01-30T12:51:13.032229612Z" level=info msg="CreateContainer within sandbox 
\"74a4b7f387f7c98c1309cb7c61b3b16a1456fcb9ddee63c876d24a256b6b487c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 12:51:13.053493 containerd[1564]: time="2025-01-30T12:51:13.053439724Z" level=info msg="CreateContainer within sandbox \"74a4b7f387f7c98c1309cb7c61b3b16a1456fcb9ddee63c876d24a256b6b487c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07d99e6d577d11724ecc2bc5c4676a0fa645d09320aa28eae6cf2c6d980b676c\"" Jan 30 12:51:13.054548 containerd[1564]: time="2025-01-30T12:51:13.054456926Z" level=info msg="StartContainer for \"07d99e6d577d11724ecc2bc5c4676a0fa645d09320aa28eae6cf2c6d980b676c\"" Jan 30 12:51:13.108387 containerd[1564]: time="2025-01-30T12:51:13.108340381Z" level=info msg="StartContainer for \"07d99e6d577d11724ecc2bc5c4676a0fa645d09320aa28eae6cf2c6d980b676c\" returns successfully" Jan 30 12:51:13.662663 kubelet[1889]: E0130 12:51:13.662609 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:13.822549 kubelet[1889]: E0130 12:51:13.822503 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:14.663562 kubelet[1889]: E0130 12:51:14.663514 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:14.824063 kubelet[1889]: E0130 12:51:14.823965 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:15.664401 kubelet[1889]: E0130 12:51:15.664205 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:16.664666 kubelet[1889]: E0130 12:51:16.664614 1889 file_linux.go:61] "Unable to read config path" err="path 
does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:16.684656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296946119.mount: Deactivated successfully. Jan 30 12:51:17.664948 kubelet[1889]: E0130 12:51:17.664912 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:18.089123 containerd[1564]: time="2025-01-30T12:51:18.088789004Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:18.089480 containerd[1564]: time="2025-01-30T12:51:18.089420402Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 12:51:18.092528 containerd[1564]: time="2025-01-30T12:51:18.091112160Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:18.094176 containerd[1564]: time="2025-01-30T12:51:18.094138455Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.062805971s" Jan 30 12:51:18.094292 containerd[1564]: time="2025-01-30T12:51:18.094276005Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 12:51:18.096172 containerd[1564]: 
time="2025-01-30T12:51:18.096141063Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:51:18.108305 containerd[1564]: time="2025-01-30T12:51:18.108220994Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\"" Jan 30 12:51:18.109042 containerd[1564]: time="2025-01-30T12:51:18.108760933Z" level=info msg="StartContainer for \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\"" Jan 30 12:51:18.159516 containerd[1564]: time="2025-01-30T12:51:18.157418402Z" level=info msg="StartContainer for \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\" returns successfully" Jan 30 12:51:18.417952 containerd[1564]: time="2025-01-30T12:51:18.417884198Z" level=info msg="shim disconnected" id=61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b namespace=k8s.io Jan 30 12:51:18.417952 containerd[1564]: time="2025-01-30T12:51:18.417937503Z" level=warning msg="cleaning up after shim disconnected" id=61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b namespace=k8s.io Jan 30 12:51:18.417952 containerd[1564]: time="2025-01-30T12:51:18.417946240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:18.665822 kubelet[1889]: E0130 12:51:18.665771 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:18.831127 kubelet[1889]: E0130 12:51:18.830984 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:18.832974 containerd[1564]: time="2025-01-30T12:51:18.832924712Z" 
level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:51:18.847505 kubelet[1889]: I0130 12:51:18.847407 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rs5f8" podStartSLOduration=9.612669266 podStartE2EDuration="10.847391164s" podCreationTimestamp="2025-01-30 12:51:08 +0000 UTC" firstStartedPulling="2025-01-30 12:51:11.79592142 +0000 UTC m=+3.676388956" lastFinishedPulling="2025-01-30 12:51:13.030643238 +0000 UTC m=+4.911110854" observedRunningTime="2025-01-30 12:51:13.951998528 +0000 UTC m=+5.832466144" watchObservedRunningTime="2025-01-30 12:51:18.847391164 +0000 UTC m=+10.727858740" Jan 30 12:51:18.849345 containerd[1564]: time="2025-01-30T12:51:18.849258467Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\"" Jan 30 12:51:18.849741 containerd[1564]: time="2025-01-30T12:51:18.849716084Z" level=info msg="StartContainer for \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\"" Jan 30 12:51:18.901119 containerd[1564]: time="2025-01-30T12:51:18.901061745Z" level=info msg="StartContainer for \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\" returns successfully" Jan 30 12:51:18.914065 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 12:51:18.914374 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 12:51:18.914441 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:51:18.921430 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 12:51:18.934329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
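The `pod_startup_latency_tracker` entry above reports `podStartE2EDuration="10.847391164s"` for kube-proxy; that figure is simply the wall time from `podCreationTimestamp` to `watchObservedRunningTime`. A sketch reproducing the arithmetic from the timestamps in that entry (the creation timestamp carries no fractional seconds in the log, so `.000000` is assumed):

```python
# Recompute podStartE2EDuration from the kube-proxy latency entry:
# watchObservedRunningTime minus podCreationTimestamp. Timestamps
# are copied from the log; the parsing itself is ours.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"

created = datetime.strptime("2025-01-30 12:51:08.000000", FMT)
running = datetime.strptime("2025-01-30 12:51:18.847391", FMT)

e2e = (running - created).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")  # log reports 10.847391164s
```

The smaller `podStartSLOduration` excludes the image-pull window (`lastFinishedPulling` minus `firstStartedPulling`), which is why it trails the end-to-end figure by roughly the 1.23 s kube-proxy pull reported earlier.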
Jan 30 12:51:18.937340 containerd[1564]: time="2025-01-30T12:51:18.937137779Z" level=info msg="shim disconnected" id=13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3 namespace=k8s.io Jan 30 12:51:18.937340 containerd[1564]: time="2025-01-30T12:51:18.937201504Z" level=warning msg="cleaning up after shim disconnected" id=13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3 namespace=k8s.io Jan 30 12:51:18.937340 containerd[1564]: time="2025-01-30T12:51:18.937210201Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:19.104669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b-rootfs.mount: Deactivated successfully. Jan 30 12:51:19.666507 kubelet[1889]: E0130 12:51:19.666461 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:19.837107 kubelet[1889]: E0130 12:51:19.837077 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:19.838997 containerd[1564]: time="2025-01-30T12:51:19.838892440Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 12:51:19.857574 containerd[1564]: time="2025-01-30T12:51:19.857529630Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\"" Jan 30 12:51:19.858088 containerd[1564]: time="2025-01-30T12:51:19.857991784Z" level=info msg="StartContainer for \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\"" Jan 30 12:51:19.909000 containerd[1564]: 
time="2025-01-30T12:51:19.908884581Z" level=info msg="StartContainer for \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\" returns successfully" Jan 30 12:51:19.992083 containerd[1564]: time="2025-01-30T12:51:19.991923398Z" level=info msg="shim disconnected" id=2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f namespace=k8s.io Jan 30 12:51:19.992083 containerd[1564]: time="2025-01-30T12:51:19.991988470Z" level=warning msg="cleaning up after shim disconnected" id=2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f namespace=k8s.io Jan 30 12:51:19.992083 containerd[1564]: time="2025-01-30T12:51:19.991997926Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:20.104586 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f-rootfs.mount: Deactivated successfully. Jan 30 12:51:20.667336 kubelet[1889]: E0130 12:51:20.667287 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:20.841944 kubelet[1889]: E0130 12:51:20.841750 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:20.844005 containerd[1564]: time="2025-01-30T12:51:20.843954482Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 12:51:20.892023 containerd[1564]: time="2025-01-30T12:51:20.891924266Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\"" Jan 30 12:51:20.893173 containerd[1564]: 
time="2025-01-30T12:51:20.892943076Z" level=info msg="StartContainer for \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\"" Jan 30 12:51:20.945453 containerd[1564]: time="2025-01-30T12:51:20.945238277Z" level=info msg="StartContainer for \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\" returns successfully" Jan 30 12:51:20.959610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb-rootfs.mount: Deactivated successfully. Jan 30 12:51:20.971815 containerd[1564]: time="2025-01-30T12:51:20.971584617Z" level=info msg="shim disconnected" id=87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb namespace=k8s.io Jan 30 12:51:20.971815 containerd[1564]: time="2025-01-30T12:51:20.971651197Z" level=warning msg="cleaning up after shim disconnected" id=87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb namespace=k8s.io Jan 30 12:51:20.971815 containerd[1564]: time="2025-01-30T12:51:20.971659850Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:21.667786 kubelet[1889]: E0130 12:51:21.667738 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:21.846639 kubelet[1889]: E0130 12:51:21.846546 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:21.849132 containerd[1564]: time="2025-01-30T12:51:21.848967376Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 12:51:21.873430 containerd[1564]: time="2025-01-30T12:51:21.873364852Z" level=info msg="CreateContainer within sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" for 
&ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\"" Jan 30 12:51:21.873986 containerd[1564]: time="2025-01-30T12:51:21.873892907Z" level=info msg="StartContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\"" Jan 30 12:51:21.936080 containerd[1564]: time="2025-01-30T12:51:21.935868308Z" level=info msg="StartContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" returns successfully" Jan 30 12:51:22.059773 kubelet[1889]: I0130 12:51:22.059157 1889 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 12:51:22.525065 kernel: Initializing XFRM netlink socket Jan 30 12:51:22.668877 kubelet[1889]: E0130 12:51:22.668829 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:22.850995 kubelet[1889]: E0130 12:51:22.850849 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:22.875180 kubelet[1889]: I0130 12:51:22.875126 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cbgr5" podStartSLOduration=8.578270417 podStartE2EDuration="14.875108189s" podCreationTimestamp="2025-01-30 12:51:08 +0000 UTC" firstStartedPulling="2025-01-30 12:51:11.79809456 +0000 UTC m=+3.678562096" lastFinishedPulling="2025-01-30 12:51:18.094932332 +0000 UTC m=+9.975399868" observedRunningTime="2025-01-30 12:51:22.875036787 +0000 UTC m=+14.755504363" watchObservedRunningTime="2025-01-30 12:51:22.875108189 +0000 UTC m=+14.755575725" Jan 30 12:51:23.669311 kubelet[1889]: E0130 12:51:23.669270 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:23.831843 systemd-networkd[1234]: cilium_host: 
Link UP Jan 30 12:51:23.831964 systemd-networkd[1234]: cilium_net: Link UP Jan 30 12:51:23.831968 systemd-networkd[1234]: cilium_net: Gained carrier Jan 30 12:51:23.832127 systemd-networkd[1234]: cilium_host: Gained carrier Jan 30 12:51:23.833264 systemd-networkd[1234]: cilium_host: Gained IPv6LL Jan 30 12:51:23.852754 kubelet[1889]: E0130 12:51:23.852723 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:23.926887 systemd-networkd[1234]: cilium_vxlan: Link UP Jan 30 12:51:23.926893 systemd-networkd[1234]: cilium_vxlan: Gained carrier Jan 30 12:51:24.281047 kernel: NET: Registered PF_ALG protocol family Jan 30 12:51:24.669477 kubelet[1889]: E0130 12:51:24.669397 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:24.794335 systemd-networkd[1234]: cilium_net: Gained IPv6LL Jan 30 12:51:24.854032 kubelet[1889]: E0130 12:51:24.853974 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:24.897593 systemd-networkd[1234]: lxc_health: Link UP Jan 30 12:51:24.912463 systemd-networkd[1234]: lxc_health: Gained carrier Jan 30 12:51:25.291114 kubelet[1889]: I0130 12:51:25.288397 1889 topology_manager.go:215] "Topology Admit Handler" podUID="82427034-3189-4750-9e79-25c0c88988bb" podNamespace="default" podName="nginx-deployment-85f456d6dd-g4fb7" Jan 30 12:51:25.473613 kubelet[1889]: I0130 12:51:25.473579 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k49h\" (UniqueName: \"kubernetes.io/projected/82427034-3189-4750-9e79-25c0c88988bb-kube-api-access-2k49h\") pod \"nginx-deployment-85f456d6dd-g4fb7\" (UID: \"82427034-3189-4750-9e79-25c0c88988bb\") " 
pod="default/nginx-deployment-85f456d6dd-g4fb7" Jan 30 12:51:25.591826 containerd[1564]: time="2025-01-30T12:51:25.591735565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-g4fb7,Uid:82427034-3189-4750-9e79-25c0c88988bb,Namespace:default,Attempt:0,}" Jan 30 12:51:25.640331 systemd-networkd[1234]: lxcb3e9de496c7a: Link UP Jan 30 12:51:25.649043 kernel: eth0: renamed from tmpb0e6e Jan 30 12:51:25.660604 systemd-networkd[1234]: lxcb3e9de496c7a: Gained carrier Jan 30 12:51:25.670592 kubelet[1889]: E0130 12:51:25.670564 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:25.691420 systemd-networkd[1234]: cilium_vxlan: Gained IPv6LL Jan 30 12:51:25.855801 kubelet[1889]: E0130 12:51:25.855539 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:26.457271 systemd-networkd[1234]: lxc_health: Gained IPv6LL Jan 30 12:51:26.672117 kubelet[1889]: E0130 12:51:26.672066 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:27.545231 systemd-networkd[1234]: lxcb3e9de496c7a: Gained IPv6LL Jan 30 12:51:27.672706 kubelet[1889]: E0130 12:51:27.672652 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:28.659704 kubelet[1889]: E0130 12:51:28.659653 1889 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:28.673060 kubelet[1889]: E0130 12:51:28.672985 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:29.053774 kubelet[1889]: I0130 12:51:29.052280 1889 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 30 12:51:29.053774 kubelet[1889]: E0130 12:51:29.053375 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:29.293252 containerd[1564]: time="2025-01-30T12:51:29.293173396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:29.293252 containerd[1564]: time="2025-01-30T12:51:29.293222458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:29.293725 containerd[1564]: time="2025-01-30T12:51:29.293233383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:29.293725 containerd[1564]: time="2025-01-30T12:51:29.293313419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:29.306502 systemd[1]: run-containerd-runc-k8s.io-b0e6ec4e52bf4bd2bf3f78901e5c91b359fa02959483e67e45aabf803ac1ba38-runc.Nu0uZF.mount: Deactivated successfully. 
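The containerd entries throughout this log are logfmt-style records (`time=... level=... msg="..."`). A naive splitter is enough to pull fields out of them for ad-hoc analysis; this is a sketch for reading lines like the ones above, not a complete logfmt parser (it ignores escapes beyond backslash pairs):

```python
# Minimal logfmt-style field extraction for containerd journal
# lines. Quoted values (like msg) keep their internal spaces;
# bare values (like level) are taken up to the next whitespace.
import re

def parse_logfmt(line):
    fields = {}
    for key, quoted, bare in re.findall(
        r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))', line
    ):
        fields[key] = quoted if quoted else bare
    return fields

rec = parse_logfmt(
    'time="2025-01-30T12:51:29.331617938Z" level=info '
    'msg="RunPodSandbox returns sandbox id"'
)
print(rec["level"], rec["time"])
```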
Jan 30 12:51:29.315629 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 12:51:29.331660 containerd[1564]: time="2025-01-30T12:51:29.331617938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-g4fb7,Uid:82427034-3189-4750-9e79-25c0c88988bb,Namespace:default,Attempt:0,} returns sandbox id \"b0e6ec4e52bf4bd2bf3f78901e5c91b359fa02959483e67e45aabf803ac1ba38\"" Jan 30 12:51:29.333649 containerd[1564]: time="2025-01-30T12:51:29.333466414Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 12:51:29.673989 kubelet[1889]: E0130 12:51:29.673937 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:29.865444 kubelet[1889]: E0130 12:51:29.865399 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:30.674942 kubelet[1889]: E0130 12:51:30.674897 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:30.933063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281589537.mount: Deactivated successfully. 
Jan 30 12:51:31.675153 kubelet[1889]: E0130 12:51:31.675114 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:31.692887 containerd[1564]: time="2025-01-30T12:51:31.692413896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:31.696578 containerd[1564]: time="2025-01-30T12:51:31.696524559Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 30 12:51:31.697783 containerd[1564]: time="2025-01-30T12:51:31.697729897Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:31.700670 containerd[1564]: time="2025-01-30T12:51:31.700630501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 12:51:31.701926 containerd[1564]: time="2025-01-30T12:51:31.701788342Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.368290034s" Jan 30 12:51:31.701926 containerd[1564]: time="2025-01-30T12:51:31.701820353Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 12:51:31.703802 containerd[1564]: time="2025-01-30T12:51:31.703766507Z" level=info msg="CreateContainer within sandbox \"b0e6ec4e52bf4bd2bf3f78901e5c91b359fa02959483e67e45aabf803ac1ba38\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 30 12:51:31.714189 containerd[1564]: time="2025-01-30T12:51:31.714142859Z" level=info msg="CreateContainer within sandbox \"b0e6ec4e52bf4bd2bf3f78901e5c91b359fa02959483e67e45aabf803ac1ba38\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9f76474205b1477d593ff4c14f9906213507d6f2dad46ab721aaa27d8f91e996\""
Jan 30 12:51:31.714782 containerd[1564]: time="2025-01-30T12:51:31.714756071Z" level=info msg="StartContainer for \"9f76474205b1477d593ff4c14f9906213507d6f2dad46ab721aaa27d8f91e996\""
Jan 30 12:51:31.756969 containerd[1564]: time="2025-01-30T12:51:31.756924230Z" level=info msg="StartContainer for \"9f76474205b1477d593ff4c14f9906213507d6f2dad46ab721aaa27d8f91e996\" returns successfully"
Jan 30 12:51:31.878623 kubelet[1889]: I0130 12:51:31.878559 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-g4fb7" podStartSLOduration=4.508834276 podStartE2EDuration="6.878542455s" podCreationTimestamp="2025-01-30 12:51:25 +0000 UTC" firstStartedPulling="2025-01-30 12:51:29.332981875 +0000 UTC m=+21.213449451" lastFinishedPulling="2025-01-30 12:51:31.702690054 +0000 UTC m=+23.583157630" observedRunningTime="2025-01-30 12:51:31.878400086 +0000 UTC m=+23.758867662" watchObservedRunningTime="2025-01-30 12:51:31.878542455 +0000 UTC m=+23.759010031"
Jan 30 12:51:32.675343 kubelet[1889]: E0130 12:51:32.675291 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:33.676466 kubelet[1889]: E0130 12:51:33.676414 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:34.677335 kubelet[1889]: E0130 12:51:34.677288 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:35.677833 kubelet[1889]: E0130 12:51:35.677775 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:36.678445 kubelet[1889]: E0130 12:51:36.678384 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:37.626474 kubelet[1889]: I0130 12:51:37.626415 1889 topology_manager.go:215] "Topology Admit Handler" podUID="d49c1b00-c629-4863-a234-fe20341f31dd" podNamespace="default" podName="nfs-server-provisioner-0"
Jan 30 12:51:37.641484 kubelet[1889]: I0130 12:51:37.641183 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6tgx\" (UniqueName: \"kubernetes.io/projected/d49c1b00-c629-4863-a234-fe20341f31dd-kube-api-access-n6tgx\") pod \"nfs-server-provisioner-0\" (UID: \"d49c1b00-c629-4863-a234-fe20341f31dd\") " pod="default/nfs-server-provisioner-0"
Jan 30 12:51:37.641484 kubelet[1889]: I0130 12:51:37.641233 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d49c1b00-c629-4863-a234-fe20341f31dd-data\") pod \"nfs-server-provisioner-0\" (UID: \"d49c1b00-c629-4863-a234-fe20341f31dd\") " pod="default/nfs-server-provisioner-0"
Jan 30 12:51:37.679143 kubelet[1889]: E0130 12:51:37.679093 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:37.931396 containerd[1564]: time="2025-01-30T12:51:37.931276483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d49c1b00-c629-4863-a234-fe20341f31dd,Namespace:default,Attempt:0,}"
Jan 30 12:51:38.171518 systemd-networkd[1234]: lxcf41a784e1d7f: Link UP
Jan 30 12:51:38.184046 kernel: eth0: renamed from tmp2dce3
Jan 30 12:51:38.196086 systemd-networkd[1234]: lxcf41a784e1d7f: Gained carrier
Jan 30 12:51:38.407411 containerd[1564]: time="2025-01-30T12:51:38.407277835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:51:38.407411 containerd[1564]: time="2025-01-30T12:51:38.407359088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:51:38.407728 containerd[1564]: time="2025-01-30T12:51:38.407380211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:51:38.407728 containerd[1564]: time="2025-01-30T12:51:38.407540116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:51:38.439515 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 12:51:38.459252 containerd[1564]: time="2025-01-30T12:51:38.459139897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d49c1b00-c629-4863-a234-fe20341f31dd,Namespace:default,Attempt:0,} returns sandbox id \"2dce31b9072c846482f1faa3f437cfb7eb116b7b6a0f6e573dbe1e0be17ee293\""
Jan 30 12:51:38.460900 containerd[1564]: time="2025-01-30T12:51:38.460861810Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 30 12:51:38.679255 kubelet[1889]: E0130 12:51:38.679192 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:39.679764 kubelet[1889]: E0130 12:51:39.679722 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:39.769301 systemd-networkd[1234]: lxcf41a784e1d7f: Gained IPv6LL
Jan 30 12:51:40.160717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount676024613.mount: Deactivated successfully.
Jan 30 12:51:40.680497 kubelet[1889]: E0130 12:51:40.680441 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:41.669554 containerd[1564]: time="2025-01-30T12:51:41.669493427Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Jan 30 12:51:41.681162 kubelet[1889]: E0130 12:51:41.681107 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:41.703038 containerd[1564]: time="2025-01-30T12:51:41.702024331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:51:41.707534 containerd[1564]: time="2025-01-30T12:51:41.707482146Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:51:41.707906 containerd[1564]: time="2025-01-30T12:51:41.707511270Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.246589571s"
Jan 30 12:51:41.707906 containerd[1564]: time="2025-01-30T12:51:41.707823552Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 30 12:51:41.709139 containerd[1564]: time="2025-01-30T12:51:41.708593896Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:51:41.713230 containerd[1564]: time="2025-01-30T12:51:41.713176634Z" level=info msg="CreateContainer within sandbox \"2dce31b9072c846482f1faa3f437cfb7eb116b7b6a0f6e573dbe1e0be17ee293\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 30 12:51:41.736711 containerd[1564]: time="2025-01-30T12:51:41.736657918Z" level=info msg="CreateContainer within sandbox \"2dce31b9072c846482f1faa3f437cfb7eb116b7b6a0f6e573dbe1e0be17ee293\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4eac0ac9559ddbcb359b56ce74fa48e9937e651bba8d5d7ceb8c104d7754a031\""
Jan 30 12:51:41.737280 containerd[1564]: time="2025-01-30T12:51:41.737255958Z" level=info msg="StartContainer for \"4eac0ac9559ddbcb359b56ce74fa48e9937e651bba8d5d7ceb8c104d7754a031\""
Jan 30 12:51:41.859061 containerd[1564]: time="2025-01-30T12:51:41.858988921Z" level=info msg="StartContainer for \"4eac0ac9559ddbcb359b56ce74fa48e9937e651bba8d5d7ceb8c104d7754a031\" returns successfully"
Jan 30 12:51:41.922799 kubelet[1889]: I0130 12:51:41.922561 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.673138448 podStartE2EDuration="4.922544845s" podCreationTimestamp="2025-01-30 12:51:37 +0000 UTC" firstStartedPulling="2025-01-30 12:51:38.460634014 +0000 UTC m=+30.341101590" lastFinishedPulling="2025-01-30 12:51:41.710040411 +0000 UTC m=+33.590507987" observedRunningTime="2025-01-30 12:51:41.921603798 +0000 UTC m=+33.802071374" watchObservedRunningTime="2025-01-30 12:51:41.922544845 +0000 UTC m=+33.803012421"
Jan 30 12:51:42.681919 kubelet[1889]: E0130 12:51:42.681863 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:43.685596 kubelet[1889]: E0130 12:51:43.685542 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:44.686676 kubelet[1889]: E0130 12:51:44.686622 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:45.080218 update_engine[1537]: I20250130 12:51:45.080047 1537 update_attempter.cc:509] Updating boot flags...
Jan 30 12:51:45.100071 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3267)
Jan 30 12:51:45.162260 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3267)
Jan 30 12:51:45.687261 kubelet[1889]: E0130 12:51:45.687194 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:46.687651 kubelet[1889]: E0130 12:51:46.687596 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:47.687852 kubelet[1889]: E0130 12:51:47.687792 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:48.659935 kubelet[1889]: E0130 12:51:48.659887 1889 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:48.688308 kubelet[1889]: E0130 12:51:48.688259 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:49.689103 kubelet[1889]: E0130 12:51:49.689064 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:50.689600 kubelet[1889]: E0130 12:51:50.689555 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:51.320535 kubelet[1889]: I0130 12:51:51.320118 1889 topology_manager.go:215] "Topology Admit Handler" podUID="4c15fcea-61b1-4bdb-811f-155bec9d94f6" podNamespace="default" podName="test-pod-1"
Jan 30 12:51:51.518226 kubelet[1889]: I0130 12:51:51.518147 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-31fe67cd-c61b-403f-8ea6-645b2611b439\" (UniqueName: \"kubernetes.io/nfs/4c15fcea-61b1-4bdb-811f-155bec9d94f6-pvc-31fe67cd-c61b-403f-8ea6-645b2611b439\") pod \"test-pod-1\" (UID: \"4c15fcea-61b1-4bdb-811f-155bec9d94f6\") " pod="default/test-pod-1"
Jan 30 12:51:51.518226 kubelet[1889]: I0130 12:51:51.518198 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zfwt\" (UniqueName: \"kubernetes.io/projected/4c15fcea-61b1-4bdb-811f-155bec9d94f6-kube-api-access-7zfwt\") pod \"test-pod-1\" (UID: \"4c15fcea-61b1-4bdb-811f-155bec9d94f6\") " pod="default/test-pod-1"
Jan 30 12:51:51.641050 kernel: FS-Cache: Loaded
Jan 30 12:51:51.667465 kernel: RPC: Registered named UNIX socket transport module.
Jan 30 12:51:51.667580 kernel: RPC: Registered udp transport module.
Jan 30 12:51:51.667597 kernel: RPC: Registered tcp transport module.
Jan 30 12:51:51.667614 kernel: RPC: Registered tcp-with-tls transport module.
Jan 30 12:51:51.667628 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 30 12:51:51.689764 kubelet[1889]: E0130 12:51:51.689679 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:51.847309 kernel: NFS: Registering the id_resolver key type
Jan 30 12:51:51.847455 kernel: Key type id_resolver registered
Jan 30 12:51:51.847475 kernel: Key type id_legacy registered
Jan 30 12:51:51.877280 nfsidmap[3293]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 30 12:51:51.881535 nfsidmap[3296]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 30 12:51:51.931579 containerd[1564]: time="2025-01-30T12:51:51.931464955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c15fcea-61b1-4bdb-811f-155bec9d94f6,Namespace:default,Attempt:0,}"
Jan 30 12:51:51.957086 systemd-networkd[1234]: lxcb91c102ac53a: Link UP
Jan 30 12:51:51.973090 kernel: eth0: renamed from tmpf192d
Jan 30 12:51:51.980724 systemd-networkd[1234]: lxcb91c102ac53a: Gained carrier
Jan 30 12:51:52.135651 containerd[1564]: time="2025-01-30T12:51:52.135413421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 12:51:52.135651 containerd[1564]: time="2025-01-30T12:51:52.135479667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 12:51:52.135651 containerd[1564]: time="2025-01-30T12:51:52.135502948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:51:52.135651 containerd[1564]: time="2025-01-30T12:51:52.135598716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 12:51:52.160209 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 12:51:52.187849 containerd[1564]: time="2025-01-30T12:51:52.187715667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4c15fcea-61b1-4bdb-811f-155bec9d94f6,Namespace:default,Attempt:0,} returns sandbox id \"f192dc1de407ba6fbf258730b0a2e641392b3a62b57f0e90a6b6b99b6455c744\""
Jan 30 12:51:52.189626 containerd[1564]: time="2025-01-30T12:51:52.189418400Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 30 12:51:52.413956 containerd[1564]: time="2025-01-30T12:51:52.413907657Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:51:52.415139 containerd[1564]: time="2025-01-30T12:51:52.415084389Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 30 12:51:52.418131 containerd[1564]: time="2025-01-30T12:51:52.418097144Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 228.645622ms"
Jan 30 12:51:52.418131 containerd[1564]: time="2025-01-30T12:51:52.418133707Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 30 12:51:52.419964 containerd[1564]: time="2025-01-30T12:51:52.419938328Z" level=info msg="CreateContainer within sandbox \"f192dc1de407ba6fbf258730b0a2e641392b3a62b57f0e90a6b6b99b6455c744\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 30 12:51:52.439431 containerd[1564]: time="2025-01-30T12:51:52.439295280Z" level=info msg="CreateContainer within sandbox \"f192dc1de407ba6fbf258730b0a2e641392b3a62b57f0e90a6b6b99b6455c744\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"5981ee6c553ce217f35e59fe9302b2311dd70968c8218856d7f7848ba3997141\""
Jan 30 12:51:52.440221 containerd[1564]: time="2025-01-30T12:51:52.440172469Z" level=info msg="StartContainer for \"5981ee6c553ce217f35e59fe9302b2311dd70968c8218856d7f7848ba3997141\""
Jan 30 12:51:52.492901 containerd[1564]: time="2025-01-30T12:51:52.492853464Z" level=info msg="StartContainer for \"5981ee6c553ce217f35e59fe9302b2311dd70968c8218856d7f7848ba3997141\" returns successfully"
Jan 30 12:51:52.690161 kubelet[1889]: E0130 12:51:52.690030 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:52.933416 kubelet[1889]: I0130 12:51:52.933295 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.703665733 podStartE2EDuration="15.93327651s" podCreationTimestamp="2025-01-30 12:51:37 +0000 UTC" firstStartedPulling="2025-01-30 12:51:52.189138258 +0000 UTC m=+44.069605834" lastFinishedPulling="2025-01-30 12:51:52.418749035 +0000 UTC m=+44.299216611" observedRunningTime="2025-01-30 12:51:52.93251173 +0000 UTC m=+44.812979266" watchObservedRunningTime="2025-01-30 12:51:52.93327651 +0000 UTC m=+44.813744086"
Jan 30 12:51:53.690976 kubelet[1889]: E0130 12:51:53.690916 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:53.721223 systemd-networkd[1234]: lxcb91c102ac53a: Gained IPv6LL
Jan 30 12:51:54.692039 kubelet[1889]: E0130 12:51:54.691979 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:55.692956 kubelet[1889]: E0130 12:51:55.692897 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:55.840728 containerd[1564]: time="2025-01-30T12:51:55.840671785Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 12:51:55.847485 containerd[1564]: time="2025-01-30T12:51:55.846557428Z" level=info msg="StopContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" with timeout 2 (s)"
Jan 30 12:51:55.847902 containerd[1564]: time="2025-01-30T12:51:55.847869957Z" level=info msg="Stop container \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" with signal terminated"
Jan 30 12:51:55.853985 systemd-networkd[1234]: lxc_health: Link DOWN
Jan 30 12:51:55.853990 systemd-networkd[1234]: lxc_health: Lost carrier
Jan 30 12:51:55.901339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8-rootfs.mount: Deactivated successfully.
Jan 30 12:51:55.924163 containerd[1564]: time="2025-01-30T12:51:55.924082776Z" level=info msg="shim disconnected" id=3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8 namespace=k8s.io
Jan 30 12:51:55.924163 containerd[1564]: time="2025-01-30T12:51:55.924141060Z" level=warning msg="cleaning up after shim disconnected" id=3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8 namespace=k8s.io
Jan 30 12:51:55.924163 containerd[1564]: time="2025-01-30T12:51:55.924151981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:51:55.945584 containerd[1564]: time="2025-01-30T12:51:55.945451040Z" level=info msg="StopContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" returns successfully"
Jan 30 12:51:55.946236 containerd[1564]: time="2025-01-30T12:51:55.946192130Z" level=info msg="StopPodSandbox for \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\""
Jan 30 12:51:55.946294 containerd[1564]: time="2025-01-30T12:51:55.946239294Z" level=info msg="Container to stop \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:51:55.946294 containerd[1564]: time="2025-01-30T12:51:55.946253895Z" level=info msg="Container to stop \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:51:55.946294 containerd[1564]: time="2025-01-30T12:51:55.946264655Z" level=info msg="Container to stop \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:51:55.946294 containerd[1564]: time="2025-01-30T12:51:55.946274856Z" level=info msg="Container to stop \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:51:55.946294 containerd[1564]: time="2025-01-30T12:51:55.946285257Z" level=info msg="Container to stop \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jan 30 12:51:55.948423 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12-shm.mount: Deactivated successfully.
Jan 30 12:51:55.974894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12-rootfs.mount: Deactivated successfully.
Jan 30 12:51:55.989758 containerd[1564]: time="2025-01-30T12:51:55.989716911Z" level=info msg="shim disconnected" id=7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12 namespace=k8s.io
Jan 30 12:51:55.989758 containerd[1564]: time="2025-01-30T12:51:55.989751553Z" level=warning msg="cleaning up after shim disconnected" id=7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12 namespace=k8s.io
Jan 30 12:51:55.989758 containerd[1564]: time="2025-01-30T12:51:55.989761114Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:51:56.000719 containerd[1564]: time="2025-01-30T12:51:56.000666021Z" level=info msg="TearDown network for sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" successfully"
Jan 30 12:51:56.000719 containerd[1564]: time="2025-01-30T12:51:56.000705543Z" level=info msg="StopPodSandbox for \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" returns successfully"
Jan 30 12:51:56.149213 kubelet[1889]: I0130 12:51:56.149165 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-run\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149213 kubelet[1889]: I0130 12:51:56.149209 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-xtables-lock\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149213 kubelet[1889]: I0130 12:51:56.149227 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-etc-cni-netd\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149242 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-hostproc\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149258 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-lib-modules\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149275 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-kernel\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149291 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-net\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149314 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dghjf\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-kube-api-access-dghjf\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149431 kubelet[1889]: I0130 12:51:56.149332 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da1ce413-1b6b-4c40-9eff-d692a2a68331-clustermesh-secrets\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149345 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cni-path\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149361 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-hubble-tls\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149379 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-config-path\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149393 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-bpf-maps\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149407 1889 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-cgroup\") pod \"da1ce413-1b6b-4c40-9eff-d692a2a68331\" (UID: \"da1ce413-1b6b-4c40-9eff-d692a2a68331\") "
Jan 30 12:51:56.149603 kubelet[1889]: I0130 12:51:56.149504 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149787 kubelet[1889]: I0130 12:51:56.149541 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149787 kubelet[1889]: I0130 12:51:56.149571 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149787 kubelet[1889]: I0130 12:51:56.149591 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149787 kubelet[1889]: I0130 12:51:56.149606 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-hostproc" (OuterVolumeSpecName: "hostproc") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149787 kubelet[1889]: I0130 12:51:56.149624 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149895 kubelet[1889]: I0130 12:51:56.149639 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.149895 kubelet[1889]: I0130 12:51:56.149657 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.151838 kubelet[1889]: I0130 12:51:56.149995 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.151957 kubelet[1889]: I0130 12:51:56.151856 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cni-path" (OuterVolumeSpecName: "cni-path") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 12:51:56.151957 kubelet[1889]: I0130 12:51:56.151920 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 12:51:56.152006 kubelet[1889]: I0130 12:51:56.151970 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-kube-api-access-dghjf" (OuterVolumeSpecName: "kube-api-access-dghjf") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "kube-api-access-dghjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 12:51:56.152372 kubelet[1889]: I0130 12:51:56.152339 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da1ce413-1b6b-4c40-9eff-d692a2a68331-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 12:51:56.152683 kubelet[1889]: I0130 12:51:56.152643 1889 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "da1ce413-1b6b-4c40-9eff-d692a2a68331" (UID: "da1ce413-1b6b-4c40-9eff-d692a2a68331"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250042 1889 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-dghjf\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-kube-api-access-dghjf\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250078 1889 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-lib-modules\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250087 1889 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-kernel\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250096 1889 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-host-proc-sys-net\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250105 1889 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da1ce413-1b6b-4c40-9eff-d692a2a68331-clustermesh-secrets\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250112 1889 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-config-path\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250119 1889 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-bpf-maps\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250706 kubelet[1889]: I0130 12:51:56.250126 1889 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-cgroup\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250134 1889 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cni-path\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250144 1889 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da1ce413-1b6b-4c40-9eff-d692a2a68331-hubble-tls\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250151 1889 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-etc-cni-netd\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250158 1889 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-hostproc\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250165 1889 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-cilium-run\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.250966 kubelet[1889]: I0130 12:51:56.250172 1889 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da1ce413-1b6b-4c40-9eff-d692a2a68331-xtables-lock\") on node \"10.0.0.39\" DevicePath \"\""
Jan 30 12:51:56.693462 kubelet[1889]: E0130 12:51:56.693390 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:51:56.820583 systemd[1]: var-lib-kubelet-pods-da1ce413\x2d1b6b\x2d4c40\x2d9eff\x2dd692a2a68331-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddghjf.mount: Deactivated successfully.
Jan 30 12:51:56.820739 systemd[1]: var-lib-kubelet-pods-da1ce413\x2d1b6b\x2d4c40\x2d9eff\x2dd692a2a68331-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jan 30 12:51:56.820822 systemd[1]: var-lib-kubelet-pods-da1ce413\x2d1b6b\x2d4c40\x2d9eff\x2dd692a2a68331-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jan 30 12:51:56.937036 kubelet[1889]: I0130 12:51:56.936981 1889 scope.go:117] "RemoveContainer" containerID="3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8"
Jan 30 12:51:56.938767 containerd[1564]: time="2025-01-30T12:51:56.938449873Z" level=info msg="RemoveContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\""
Jan 30 12:51:56.950694 containerd[1564]: time="2025-01-30T12:51:56.950587830Z" level=info msg="RemoveContainer for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" returns successfully"
Jan 30 12:51:56.951294 kubelet[1889]: I0130 12:51:56.951272 1889 scope.go:117] "RemoveContainer" containerID="87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb"
Jan 30 12:51:56.952357 containerd[1564]: time="2025-01-30T12:51:56.952332265Z" level=info msg="RemoveContainer for \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\""
Jan 30 12:51:56.956783 containerd[1564]: time="2025-01-30T12:51:56.956699471Z" level=info msg="RemoveContainer for \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\" returns successfully"
Jan 30 12:51:56.957216 kubelet[1889]: I0130 12:51:56.957037 1889 scope.go:117] "RemoveContainer" containerID="2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f"
Jan 30 12:51:56.958771 containerd[1564]:
time="2025-01-30T12:51:56.958437865Z" level=info msg="RemoveContainer for \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\"" Jan 30 12:51:56.961244 containerd[1564]: time="2025-01-30T12:51:56.961210647Z" level=info msg="RemoveContainer for \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\" returns successfully" Jan 30 12:51:56.961848 kubelet[1889]: I0130 12:51:56.961818 1889 scope.go:117] "RemoveContainer" containerID="13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3" Jan 30 12:51:56.963046 containerd[1564]: time="2025-01-30T12:51:56.962788511Z" level=info msg="RemoveContainer for \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\"" Jan 30 12:51:56.966852 containerd[1564]: time="2025-01-30T12:51:56.966809015Z" level=info msg="RemoveContainer for \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\" returns successfully" Jan 30 12:51:56.967327 kubelet[1889]: I0130 12:51:56.967162 1889 scope.go:117] "RemoveContainer" containerID="61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b" Jan 30 12:51:56.968385 containerd[1564]: time="2025-01-30T12:51:56.968320394Z" level=info msg="RemoveContainer for \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\"" Jan 30 12:51:56.971005 containerd[1564]: time="2025-01-30T12:51:56.970961408Z" level=info msg="RemoveContainer for \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\" returns successfully" Jan 30 12:51:56.971213 kubelet[1889]: I0130 12:51:56.971189 1889 scope.go:117] "RemoveContainer" containerID="3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8" Jan 30 12:51:56.971458 containerd[1564]: time="2025-01-30T12:51:56.971413117Z" level=error msg="ContainerStatus for \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\": not found" Jan 30 12:51:56.971658 kubelet[1889]: E0130 12:51:56.971633 1889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\": not found" containerID="3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8" Jan 30 12:51:56.971741 kubelet[1889]: I0130 12:51:56.971666 1889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8"} err="failed to get container status \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"3534406de37ede3796f4777f9d2534bd4772f4deb6f638b238312c63efc0f9a8\": not found" Jan 30 12:51:56.971782 kubelet[1889]: I0130 12:51:56.971742 1889 scope.go:117] "RemoveContainer" containerID="87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb" Jan 30 12:51:56.971958 containerd[1564]: time="2025-01-30T12:51:56.971922431Z" level=error msg="ContainerStatus for \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\": not found" Jan 30 12:51:56.972144 kubelet[1889]: E0130 12:51:56.972080 1889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\": not found" containerID="87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb" Jan 30 12:51:56.972187 kubelet[1889]: I0130 12:51:56.972130 1889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb"} err="failed to get container status \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\": rpc error: code = NotFound desc = an error occurred when try to find container \"87ef1bdd1690494b5e7c22ec5a589c4f5eca32a73235d8c47b2773c484039ceb\": not found" Jan 30 12:51:56.972187 kubelet[1889]: I0130 12:51:56.972160 1889 scope.go:117] "RemoveContainer" containerID="2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f" Jan 30 12:51:56.972440 kubelet[1889]: E0130 12:51:56.972403 1889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\": not found" containerID="2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f" Jan 30 12:51:56.972440 kubelet[1889]: I0130 12:51:56.972417 1889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f"} err="failed to get container status \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\": not found" Jan 30 12:51:56.972440 kubelet[1889]: I0130 12:51:56.972427 1889 scope.go:117] "RemoveContainer" containerID="13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3" Jan 30 12:51:56.972519 containerd[1564]: time="2025-01-30T12:51:56.972322257Z" level=error msg="ContainerStatus for \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2733b0ae2483392c7754ce847a9a9d688dacc2701f13be1b7f156d5a1c723a4f\": not found" Jan 30 12:51:56.972780 
containerd[1564]: time="2025-01-30T12:51:56.972721603Z" level=error msg="ContainerStatus for \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\": not found" Jan 30 12:51:56.972873 kubelet[1889]: E0130 12:51:56.972824 1889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\": not found" containerID="13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3" Jan 30 12:51:56.972873 kubelet[1889]: I0130 12:51:56.972842 1889 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3"} err="failed to get container status \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"13edadc044c60bf860f72622f97b60fe33456b1fb9e3012e53e0a6b96bfb46a3\": not found" Jan 30 12:51:56.972873 kubelet[1889]: I0130 12:51:56.972856 1889 scope.go:117] "RemoveContainer" containerID="61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b" Jan 30 12:51:56.973276 kubelet[1889]: E0130 12:51:56.973151 1889 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\": not found" containerID="61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b" Jan 30 12:51:56.973276 kubelet[1889]: I0130 12:51:56.973175 1889 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b"} err="failed to get container status \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\": rpc error: code = NotFound desc = an error occurred when try to find container \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\": not found" Jan 30 12:51:56.973329 containerd[1564]: time="2025-01-30T12:51:56.973002062Z" level=error msg="ContainerStatus for \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61112361e2355b8da067c8482570fcf8df4a1797bfad90fade65da836b4ece6b\": not found" Jan 30 12:51:57.694355 kubelet[1889]: E0130 12:51:57.694298 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:58.694990 kubelet[1889]: E0130 12:51:58.694946 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:58.802470 kubelet[1889]: E0130 12:51:58.802422 1889 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 12:51:58.820444 kubelet[1889]: I0130 12:51:58.819811 1889 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" path="/var/lib/kubelet/pods/da1ce413-1b6b-4c40-9eff-d692a2a68331/volumes" Jan 30 12:51:58.853855 kubelet[1889]: I0130 12:51:58.853813 1889 topology_manager.go:215] "Topology Admit Handler" podUID="26132057-882e-46b5-9ba5-c76069612db3" podNamespace="kube-system" podName="cilium-operator-599987898-w446q" Jan 30 12:51:58.853855 kubelet[1889]: E0130 12:51:58.853866 1889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" 
containerName="cilium-agent" Jan 30 12:51:58.854129 kubelet[1889]: E0130 12:51:58.853876 1889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" containerName="mount-cgroup" Jan 30 12:51:58.854129 kubelet[1889]: E0130 12:51:58.853883 1889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" containerName="mount-bpf-fs" Jan 30 12:51:58.854129 kubelet[1889]: E0130 12:51:58.853889 1889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" containerName="clean-cilium-state" Jan 30 12:51:58.854129 kubelet[1889]: E0130 12:51:58.853896 1889 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" containerName="apply-sysctl-overwrites" Jan 30 12:51:58.854129 kubelet[1889]: I0130 12:51:58.853914 1889 memory_manager.go:354] "RemoveStaleState removing state" podUID="da1ce413-1b6b-4c40-9eff-d692a2a68331" containerName="cilium-agent" Jan 30 12:51:58.856422 kubelet[1889]: I0130 12:51:58.856385 1889 topology_manager.go:215] "Topology Admit Handler" podUID="fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9" podNamespace="kube-system" podName="cilium-fcn9p" Jan 30 12:51:58.965860 kubelet[1889]: I0130 12:51:58.965298 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-cilium-config-path\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.965860 kubelet[1889]: I0130 12:51:58.965344 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-host-proc-sys-kernel\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " 
pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.965860 kubelet[1889]: I0130 12:51:58.965363 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6npv\" (UniqueName: \"kubernetes.io/projected/26132057-882e-46b5-9ba5-c76069612db3-kube-api-access-h6npv\") pod \"cilium-operator-599987898-w446q\" (UID: \"26132057-882e-46b5-9ba5-c76069612db3\") " pod="kube-system/cilium-operator-599987898-w446q" Jan 30 12:51:58.965860 kubelet[1889]: I0130 12:51:58.965406 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-lib-modules\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.965860 kubelet[1889]: I0130 12:51:58.965421 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-xtables-lock\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 12:51:58.965436 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-host-proc-sys-net\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 12:51:58.965453 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-bpf-maps\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 
12:51:58.965468 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-cni-path\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 12:51:58.965484 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-clustermesh-secrets\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 12:51:58.965499 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-hubble-tls\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966095 kubelet[1889]: I0130 12:51:58.965519 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgbl\" (UniqueName: \"kubernetes.io/projected/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-kube-api-access-csgbl\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966221 kubelet[1889]: I0130 12:51:58.965535 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-cilium-cgroup\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966221 kubelet[1889]: I0130 12:51:58.965549 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-etc-cni-netd\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966221 kubelet[1889]: I0130 12:51:58.965563 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-cilium-ipsec-secrets\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966221 kubelet[1889]: I0130 12:51:58.965579 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26132057-882e-46b5-9ba5-c76069612db3-cilium-config-path\") pod \"cilium-operator-599987898-w446q\" (UID: \"26132057-882e-46b5-9ba5-c76069612db3\") " pod="kube-system/cilium-operator-599987898-w446q" Jan 30 12:51:58.966221 kubelet[1889]: I0130 12:51:58.965602 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-cilium-run\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:58.966326 kubelet[1889]: I0130 12:51:58.965737 1889 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9-hostproc\") pod \"cilium-fcn9p\" (UID: \"fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9\") " pod="kube-system/cilium-fcn9p" Jan 30 12:51:59.157364 kubelet[1889]: E0130 12:51:59.157092 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 
12:51:59.157640 containerd[1564]: time="2025-01-30T12:51:59.157596980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w446q,Uid:26132057-882e-46b5-9ba5-c76069612db3,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:59.161091 kubelet[1889]: E0130 12:51:59.160943 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.161435 containerd[1564]: time="2025-01-30T12:51:59.161387721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcn9p,Uid:fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9,Namespace:kube-system,Attempt:0,}" Jan 30 12:51:59.280256 containerd[1564]: time="2025-01-30T12:51:59.279912340Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:59.280256 containerd[1564]: time="2025-01-30T12:51:59.279980944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:59.282416 containerd[1564]: time="2025-01-30T12:51:59.282278198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.282640 containerd[1564]: time="2025-01-30T12:51:59.282587216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.284638 containerd[1564]: time="2025-01-30T12:51:59.284309996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 12:51:59.284638 containerd[1564]: time="2025-01-30T12:51:59.284355239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 12:51:59.284638 containerd[1564]: time="2025-01-30T12:51:59.284365879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.284638 containerd[1564]: time="2025-01-30T12:51:59.284503487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 12:51:59.325387 containerd[1564]: time="2025-01-30T12:51:59.325327544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fcn9p,Uid:fa035dd3-2859-4ae1-aaf8-2b8bfd97cab9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\"" Jan 30 12:51:59.326433 kubelet[1889]: E0130 12:51:59.326410 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.328543 containerd[1564]: time="2025-01-30T12:51:59.328457326Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 12:51:59.333437 containerd[1564]: time="2025-01-30T12:51:59.333398333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-w446q,Uid:26132057-882e-46b5-9ba5-c76069612db3,Namespace:kube-system,Attempt:0,} returns sandbox id \"c86c2c719e66f1a42a585aeede4e255caf481d375f9e03ccf40755168b7049da\"" Jan 30 12:51:59.334087 kubelet[1889]: E0130 12:51:59.334025 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.335838 containerd[1564]: time="2025-01-30T12:51:59.335355087Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 12:51:59.341788 containerd[1564]: time="2025-01-30T12:51:59.341746259Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"81aa05f87ac7b3976d87348fa9b9432e6df6f6b9e965affa17ed77e3f925c7e0\"" Jan 30 12:51:59.342492 containerd[1564]: time="2025-01-30T12:51:59.342464661Z" level=info msg="StartContainer for \"81aa05f87ac7b3976d87348fa9b9432e6df6f6b9e965affa17ed77e3f925c7e0\"" Jan 30 12:51:59.388915 containerd[1564]: time="2025-01-30T12:51:59.388875283Z" level=info msg="StartContainer for \"81aa05f87ac7b3976d87348fa9b9432e6df6f6b9e965affa17ed77e3f925c7e0\" returns successfully" Jan 30 12:51:59.505083 containerd[1564]: time="2025-01-30T12:51:59.505025643Z" level=info msg="shim disconnected" id=81aa05f87ac7b3976d87348fa9b9432e6df6f6b9e965affa17ed77e3f925c7e0 namespace=k8s.io Jan 30 12:51:59.505083 containerd[1564]: time="2025-01-30T12:51:59.505078486Z" level=warning msg="cleaning up after shim disconnected" id=81aa05f87ac7b3976d87348fa9b9432e6df6f6b9e965affa17ed77e3f925c7e0 namespace=k8s.io Jan 30 12:51:59.505083 containerd[1564]: time="2025-01-30T12:51:59.505087407Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:51:59.695261 kubelet[1889]: E0130 12:51:59.695194 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 12:51:59.803499 kubelet[1889]: I0130 12:51:59.803442 1889 setters.go:580] "Node became not ready" node="10.0.0.39" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T12:51:59Z","lastTransitionTime":"2025-01-30T12:51:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized"} Jan 30 12:51:59.947693 kubelet[1889]: E0130 12:51:59.947398 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 12:51:59.949748 containerd[1564]: time="2025-01-30T12:51:59.949711048Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 12:51:59.958570 containerd[1564]: time="2025-01-30T12:51:59.958447716Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd2db17d51020e429461174e0c4c39993d7fcbc37a2c9f7b97eab5eacd8a9931\"" Jan 30 12:51:59.959067 containerd[1564]: time="2025-01-30T12:51:59.958931024Z" level=info msg="StartContainer for \"fd2db17d51020e429461174e0c4c39993d7fcbc37a2c9f7b97eab5eacd8a9931\"" Jan 30 12:52:00.002958 containerd[1564]: time="2025-01-30T12:52:00.002882539Z" level=info msg="StartContainer for \"fd2db17d51020e429461174e0c4c39993d7fcbc37a2c9f7b97eab5eacd8a9931\" returns successfully" Jan 30 12:52:00.046972 containerd[1564]: time="2025-01-30T12:52:00.046854282Z" level=info msg="shim disconnected" id=fd2db17d51020e429461174e0c4c39993d7fcbc37a2c9f7b97eab5eacd8a9931 namespace=k8s.io Jan 30 12:52:00.046972 containerd[1564]: time="2025-01-30T12:52:00.046971249Z" level=warning msg="cleaning up after shim disconnected" id=fd2db17d51020e429461174e0c4c39993d7fcbc37a2c9f7b97eab5eacd8a9931 namespace=k8s.io Jan 30 12:52:00.046972 containerd[1564]: time="2025-01-30T12:52:00.046981809Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 12:52:00.278619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount743880198.mount: Deactivated successfully. 
Jan 30 12:52:00.695615 kubelet[1889]: E0130 12:52:00.695561 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:00.954820 kubelet[1889]: E0130 12:52:00.954687 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:00.958074 containerd[1564]: time="2025-01-30T12:52:00.957931769Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jan 30 12:52:00.991488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161061438.mount: Deactivated successfully.
Jan 30 12:52:00.996674 containerd[1564]: time="2025-01-30T12:52:00.996628697Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff\""
Jan 30 12:52:00.997420 containerd[1564]: time="2025-01-30T12:52:00.997396500Z" level=info msg="StartContainer for \"bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff\""
Jan 30 12:52:01.058506 containerd[1564]: time="2025-01-30T12:52:01.058458724Z" level=info msg="StartContainer for \"bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff\" returns successfully"
Jan 30 12:52:01.079273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff-rootfs.mount: Deactivated successfully.
Jan 30 12:52:01.105842 containerd[1564]: time="2025-01-30T12:52:01.105764678Z" level=info msg="shim disconnected" id=bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff namespace=k8s.io
Jan 30 12:52:01.105842 containerd[1564]: time="2025-01-30T12:52:01.105836802Z" level=warning msg="cleaning up after shim disconnected" id=bf76ef7fd85086a05210858a371809d690ca3d57a6522d38ce02e977699c70ff namespace=k8s.io
Jan 30 12:52:01.105842 containerd[1564]: time="2025-01-30T12:52:01.105847122Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:52:01.257851 containerd[1564]: time="2025-01-30T12:52:01.257738162Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:52:01.258607 containerd[1564]: time="2025-01-30T12:52:01.258553486Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jan 30 12:52:01.259480 containerd[1564]: time="2025-01-30T12:52:01.259251044Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 12:52:01.260754 containerd[1564]: time="2025-01-30T12:52:01.260720843Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.925300073s"
Jan 30 12:52:01.260821 containerd[1564]: time="2025-01-30T12:52:01.260757125Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jan 30 12:52:01.262951 containerd[1564]: time="2025-01-30T12:52:01.262923202Z" level=info msg="CreateContainer within sandbox \"c86c2c719e66f1a42a585aeede4e255caf481d375f9e03ccf40755168b7049da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jan 30 12:52:01.271651 containerd[1564]: time="2025-01-30T12:52:01.271590310Z" level=info msg="CreateContainer within sandbox \"c86c2c719e66f1a42a585aeede4e255caf481d375f9e03ccf40755168b7049da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a5eedc07a95466d345a93e6be3aa018d2f0366e0ad74cd49d2207b7c0259dcf0\""
Jan 30 12:52:01.273171 containerd[1564]: time="2025-01-30T12:52:01.272332550Z" level=info msg="StartContainer for \"a5eedc07a95466d345a93e6be3aa018d2f0366e0ad74cd49d2207b7c0259dcf0\""
Jan 30 12:52:01.314454 containerd[1564]: time="2025-01-30T12:52:01.314389141Z" level=info msg="StartContainer for \"a5eedc07a95466d345a93e6be3aa018d2f0366e0ad74cd49d2207b7c0259dcf0\" returns successfully"
Jan 30 12:52:01.696294 kubelet[1889]: E0130 12:52:01.696238 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:01.960139 kubelet[1889]: E0130 12:52:01.959197 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:01.961577 kubelet[1889]: E0130 12:52:01.961494 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:01.961798 containerd[1564]: time="2025-01-30T12:52:01.961763730Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jan 30 12:52:01.973639 containerd[1564]: time="2025-01-30T12:52:01.973591368Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"96940cae0da7c07e830bde1a54a43c5e23e25fd34fce6b10fb2ca6dd1400105d\""
Jan 30 12:52:01.974478 containerd[1564]: time="2025-01-30T12:52:01.974452895Z" level=info msg="StartContainer for \"96940cae0da7c07e830bde1a54a43c5e23e25fd34fce6b10fb2ca6dd1400105d\""
Jan 30 12:52:02.000813 kubelet[1889]: I0130 12:52:02.000736 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-w446q" podStartSLOduration=2.074300098 podStartE2EDuration="4.000718553s" podCreationTimestamp="2025-01-30 12:51:58 +0000 UTC" firstStartedPulling="2025-01-30 12:51:59.335050269 +0000 UTC m=+51.215517845" lastFinishedPulling="2025-01-30 12:52:01.261468724 +0000 UTC m=+53.141936300" observedRunningTime="2025-01-30 12:52:02.000470339 +0000 UTC m=+53.880937915" watchObservedRunningTime="2025-01-30 12:52:02.000718553 +0000 UTC m=+53.881186129"
Jan 30 12:52:02.019004 containerd[1564]: time="2025-01-30T12:52:02.018946582Z" level=info msg="StartContainer for \"96940cae0da7c07e830bde1a54a43c5e23e25fd34fce6b10fb2ca6dd1400105d\" returns successfully"
Jan 30 12:52:02.034747 containerd[1564]: time="2025-01-30T12:52:02.034676521Z" level=info msg="shim disconnected" id=96940cae0da7c07e830bde1a54a43c5e23e25fd34fce6b10fb2ca6dd1400105d namespace=k8s.io
Jan 30 12:52:02.034747 containerd[1564]: time="2025-01-30T12:52:02.034730164Z" level=warning msg="cleaning up after shim disconnected" id=96940cae0da7c07e830bde1a54a43c5e23e25fd34fce6b10fb2ca6dd1400105d namespace=k8s.io
Jan 30 12:52:02.034747 containerd[1564]: time="2025-01-30T12:52:02.034738685Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 12:52:02.697396 kubelet[1889]: E0130 12:52:02.697342 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:02.966146 kubelet[1889]: E0130 12:52:02.965263 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:02.966146 kubelet[1889]: E0130 12:52:02.965995 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:02.968387 containerd[1564]: time="2025-01-30T12:52:02.968343138Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jan 30 12:52:02.983221 containerd[1564]: time="2025-01-30T12:52:02.983176110Z" level=info msg="CreateContainer within sandbox \"ab6634688cfc1883bbca126c37395349508151269677d563d46a0601d6dcbcc0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"574881dc76adb22513e356076d875688417af2cb6b858cf1ee1440c5e7a42517\""
Jan 30 12:52:02.983672 containerd[1564]: time="2025-01-30T12:52:02.983644335Z" level=info msg="StartContainer for \"574881dc76adb22513e356076d875688417af2cb6b858cf1ee1440c5e7a42517\""
Jan 30 12:52:03.038371 containerd[1564]: time="2025-01-30T12:52:03.038293953Z" level=info msg="StartContainer for \"574881dc76adb22513e356076d875688417af2cb6b858cf1ee1440c5e7a42517\" returns successfully"
Jan 30 12:52:03.071537 systemd[1]: run-containerd-runc-k8s.io-574881dc76adb22513e356076d875688417af2cb6b858cf1ee1440c5e7a42517-runc.hWcTZu.mount: Deactivated successfully.
Jan 30 12:52:03.334044 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jan 30 12:52:03.698369 kubelet[1889]: E0130 12:52:03.698309 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:03.971040 kubelet[1889]: E0130 12:52:03.970908 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:03.987157 kubelet[1889]: I0130 12:52:03.987078 1889 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fcn9p" podStartSLOduration=5.987061333 podStartE2EDuration="5.987061333s" podCreationTimestamp="2025-01-30 12:51:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 12:52:03.986735396 +0000 UTC m=+55.867202972" watchObservedRunningTime="2025-01-30 12:52:03.987061333 +0000 UTC m=+55.867528909"
Jan 30 12:52:04.699335 kubelet[1889]: E0130 12:52:04.699272 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:05.162147 kubelet[1889]: E0130 12:52:05.162050 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:05.699426 kubelet[1889]: E0130 12:52:05.699378 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:06.321409 systemd-networkd[1234]: lxc_health: Link UP
Jan 30 12:52:06.336252 systemd-networkd[1234]: lxc_health: Gained carrier
Jan 30 12:52:06.700307 kubelet[1889]: E0130 12:52:06.700267 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:07.163217 kubelet[1889]: E0130 12:52:07.163169 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:07.562547 systemd[1]: run-containerd-runc-k8s.io-574881dc76adb22513e356076d875688417af2cb6b858cf1ee1440c5e7a42517-runc.8oK8vZ.mount: Deactivated successfully.
Jan 30 12:52:07.701070 kubelet[1889]: E0130 12:52:07.701026 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:07.929241 systemd-networkd[1234]: lxc_health: Gained IPv6LL
Jan 30 12:52:07.980000 kubelet[1889]: E0130 12:52:07.979953 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:08.659480 kubelet[1889]: E0130 12:52:08.659429 1889 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:08.678800 containerd[1564]: time="2025-01-30T12:52:08.678738879Z" level=info msg="StopPodSandbox for \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\""
Jan 30 12:52:08.679390 containerd[1564]: time="2025-01-30T12:52:08.678860124Z" level=info msg="TearDown network for sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" successfully"
Jan 30 12:52:08.679390 containerd[1564]: time="2025-01-30T12:52:08.678873765Z" level=info msg="StopPodSandbox for \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" returns successfully"
Jan 30 12:52:08.680372 containerd[1564]: time="2025-01-30T12:52:08.679734881Z" level=info msg="RemovePodSandbox for \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\""
Jan 30 12:52:08.680372 containerd[1564]: time="2025-01-30T12:52:08.679767563Z" level=info msg="Forcibly stopping sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\""
Jan 30 12:52:08.680372 containerd[1564]: time="2025-01-30T12:52:08.679839006Z" level=info msg="TearDown network for sandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" successfully"
Jan 30 12:52:08.701951 kubelet[1889]: E0130 12:52:08.701904 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:08.706044 containerd[1564]: time="2025-01-30T12:52:08.705628551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 30 12:52:08.706044 containerd[1564]: time="2025-01-30T12:52:08.705715315Z" level=info msg="RemovePodSandbox \"7cdd09b2a75b70bf755151b226f6b7f6a1f6147b14d35b4bb8503f3d1ddb7b12\" returns successfully"
Jan 30 12:52:08.981460 kubelet[1889]: E0130 12:52:08.981348 1889 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 12:52:09.702722 kubelet[1889]: E0130 12:52:09.702645 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:10.708653 kubelet[1889]: E0130 12:52:10.703150 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:11.704361 kubelet[1889]: E0130 12:52:11.704311 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 30 12:52:12.707288 kubelet[1889]: E0130 12:52:12.707233 1889 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"