Dec 13 01:29:54.911343 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:29:54.911422 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:29:54.911435 kernel: KASLR enabled Dec 13 01:29:54.911440 kernel: efi: EFI v2.7 by EDK II Dec 13 01:29:54.911446 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Dec 13 01:29:54.911452 kernel: random: crng init done Dec 13 01:29:54.911459 kernel: ACPI: Early table checksum verification disabled Dec 13 01:29:54.911464 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Dec 13 01:29:54.911471 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Dec 13 01:29:54.911478 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911484 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911490 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911496 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911502 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911509 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911517 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911524 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911530 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:29:54.911536 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Dec 13 01:29:54.911543 kernel: NUMA: Failed to initialise from firmware Dec 13 01:29:54.911550 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:29:54.911556 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Dec 13 01:29:54.911562 kernel: Zone ranges: Dec 13 01:29:54.911578 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:29:54.911588 kernel: DMA32 empty Dec 13 01:29:54.911596 kernel: Normal empty Dec 13 01:29:54.911603 kernel: Movable zone start for each node Dec 13 01:29:54.911609 kernel: Early memory node ranges Dec 13 01:29:54.911616 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Dec 13 01:29:54.911622 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Dec 13 01:29:54.911629 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Dec 13 01:29:54.911635 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Dec 13 01:29:54.911642 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Dec 13 01:29:54.911648 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Dec 13 01:29:54.911654 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Dec 13 01:29:54.911661 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Dec 13 01:29:54.911667 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Dec 13 01:29:54.911675 kernel: psci: probing for conduit method from ACPI. Dec 13 01:29:54.911682 kernel: psci: PSCIv1.1 detected in firmware. 
Dec 13 01:29:54.911688 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:29:54.911698 kernel: psci: Trusted OS migration not required Dec 13 01:29:54.911705 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:29:54.911712 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 13 01:29:54.911720 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:29:54.911727 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:29:54.911734 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 13 01:29:54.911741 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:29:54.911748 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:29:54.911755 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:29:54.911762 kernel: CPU features: detected: Spectre-v4 Dec 13 01:29:54.911769 kernel: CPU features: detected: Spectre-BHB Dec 13 01:29:54.911776 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:29:54.911783 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:29:54.911791 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:29:54.911797 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:29:54.911804 kernel: alternatives: applying boot alternatives Dec 13 01:29:54.911813 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:29:54.911820 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:29:54.911827 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:29:54.911834 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:29:54.911841 kernel: Fallback order for Node 0: 0 Dec 13 01:29:54.911847 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Dec 13 01:29:54.911854 kernel: Policy zone: DMA Dec 13 01:29:54.911860 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:29:54.911868 kernel: software IO TLB: area num 4. Dec 13 01:29:54.911875 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Dec 13 01:29:54.911882 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved) Dec 13 01:29:54.911889 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:29:54.911896 kernel: trace event string verifier disabled Dec 13 01:29:54.911903 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:29:54.911910 kernel: rcu: RCU event tracing is enabled. Dec 13 01:29:54.911917 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:29:54.911924 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:29:54.911931 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:29:54.911938 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:29:54.911944 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:29:54.911952 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:29:54.911959 kernel: GICv3: 256 SPIs implemented Dec 13 01:29:54.911966 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:29:54.911972 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:29:54.911979 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:29:54.911986 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 13 01:29:54.911992 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 13 01:29:54.911999 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:29:54.912006 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:29:54.912013 kernel: GICv3: using LPI property table @0x00000000400f0000 Dec 13 01:29:54.912025 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Dec 13 01:29:54.912033 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:29:54.912040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:29:54.912046 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:29:54.912053 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:29:54.912060 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:29:54.912067 kernel: arm-pv: using stolen time PV Dec 13 01:29:54.912073 kernel: Console: colour dummy device 80x25 Dec 13 01:29:54.912080 kernel: ACPI: Core revision 20230628 Dec 13 01:29:54.912088 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:29:54.912094 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:29:54.912102 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:29:54.912109 kernel: landlock: Up and running. Dec 13 01:29:54.912116 kernel: SELinux: Initializing. Dec 13 01:29:54.912123 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:29:54.912130 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:29:54.912137 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:29:54.912144 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 13 01:29:54.912151 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:29:54.912158 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:29:54.912166 kernel: Platform MSI: ITS@0x8080000 domain created Dec 13 01:29:54.912173 kernel: PCI/MSI: ITS@0x8080000 domain created Dec 13 01:29:54.912180 kernel: Remapping and enabling EFI services. Dec 13 01:29:54.912187 kernel: smp: Bringing up secondary CPUs ... 
Dec 13 01:29:54.912194 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:29:54.912201 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 13 01:29:54.912208 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Dec 13 01:29:54.912215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:29:54.912221 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:29:54.912228 kernel: Detected PIPT I-cache on CPU2 Dec 13 01:29:54.912241 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 13 01:29:54.912248 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Dec 13 01:29:54.912259 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:29:54.912268 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 13 01:29:54.912275 kernel: Detected PIPT I-cache on CPU3 Dec 13 01:29:54.912282 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 13 01:29:54.912289 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Dec 13 01:29:54.912297 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:29:54.912304 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 13 01:29:54.912315 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:29:54.912322 kernel: SMP: Total of 4 processors activated. Dec 13 01:29:54.912329 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:29:54.912337 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:29:54.912344 kernel: CPU features: detected: Common not Private translations Dec 13 01:29:54.912351 kernel: CPU features: detected: CRC32 instructions Dec 13 01:29:54.912358 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 13 01:29:54.912376 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:29:54.912385 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:29:54.912397 kernel: CPU features: detected: Privileged Access Never Dec 13 01:29:54.912404 kernel: CPU features: detected: RAS Extension Support Dec 13 01:29:54.912412 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 13 01:29:54.912419 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:29:54.912426 kernel: alternatives: applying system-wide alternatives Dec 13 01:29:54.912433 kernel: devtmpfs: initialized Dec 13 01:29:54.912441 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:29:54.912448 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:29:54.912456 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:29:54.912464 kernel: SMBIOS 3.0.0 present. 
Dec 13 01:29:54.912471 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Dec 13 01:29:54.912478 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:29:54.912486 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:29:54.912493 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:29:54.912500 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:29:54.912508 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:29:54.912515 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Dec 13 01:29:54.912523 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:29:54.912531 kernel: cpuidle: using governor menu
Dec 13 01:29:54.912538 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:29:54.912545 kernel: ASID allocator initialised with 32768 entries
Dec 13 01:29:54.912552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:29:54.912559 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:29:54.912567 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Dec 13 01:29:54.912574 kernel: Modules: 0 pages in range for non-PLT usage
Dec 13 01:29:54.912581 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:29:54.912590 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:29:54.912597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:29:54.912604 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:29:54.912612 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:29:54.912619 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:29:54.912626 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:29:54.912633 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:29:54.912640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:29:54.912648 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:29:54.912656 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:29:54.912663 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:29:54.912670 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:29:54.912677 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:29:54.912685 kernel: ACPI: Interpreter enabled
Dec 13 01:29:54.912692 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:29:54.912699 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:29:54.912706 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 01:29:54.912713 kernel: printk: console [ttyAMA0] enabled
Dec 13 01:29:54.912722 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:29:54.912842 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:29:54.912916 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:29:54.912983 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:29:54.913050 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 01:29:54.913124 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 01:29:54.913134 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 01:29:54.913144 kernel: PCI host bridge to bus 0000:00
Dec 13 01:29:54.913215 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 01:29:54.913274 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:29:54.913340 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 01:29:54.913419 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:29:54.913502 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 01:29:54.913581 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:29:54.913654 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 01:29:54.913722 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 01:29:54.913788 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:29:54.913855 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 01:29:54.913921 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 01:29:54.913987 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 01:29:54.914046 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 01:29:54.914117 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:29:54.914181 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 01:29:54.914191 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:29:54.914198 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:29:54.914206 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:29:54.914213 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:29:54.914220 kernel: iommu: Default domain type: Translated
Dec 13 01:29:54.914228 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:29:54.914237 kernel: efivars: Registered efivars operations
Dec 13 01:29:54.914244 kernel: vgaarb: loaded
Dec 13 01:29:54.914251 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:29:54.914259 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:29:54.914266 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:29:54.914273 kernel: pnp: PnP ACPI init
Dec 13 01:29:54.914351 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 01:29:54.914393 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:29:54.914406 kernel: NET: Registered PF_INET protocol family
Dec 13 01:29:54.914413 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:29:54.914421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:29:54.914428 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:29:54.914439 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:29:54.914446 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:29:54.914454 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:29:54.914461 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:29:54.914468 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:29:54.914478 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:29:54.914485 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:29:54.914492 kernel: kvm [1]: HYP mode not available
Dec 13 01:29:54.914499 kernel: Initialise system trusted keyrings
Dec 13 01:29:54.914506 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:29:54.914514 kernel: Key type asymmetric registered
Dec 13 01:29:54.914521 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:29:54.914528 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:29:54.914535 kernel: io scheduler mq-deadline registered
Dec 13 01:29:54.914544 kernel: io scheduler kyber registered
Dec 13 01:29:54.914551 kernel: io scheduler bfq registered
Dec 13 01:29:54.914558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:29:54.914565 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:29:54.914573 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:29:54.914652 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 01:29:54.914662 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:29:54.914669 kernel: thunder_xcv, ver 1.0
Dec 13 01:29:54.914677 kernel: thunder_bgx, ver 1.0
Dec 13 01:29:54.914686 kernel: nicpf, ver 1.0
Dec 13 01:29:54.914693 kernel: nicvf, ver 1.0
Dec 13 01:29:54.914780 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:29:54.914846 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:29:54 UTC (1734053394)
Dec 13 01:29:54.914856 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:29:54.914864 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 01:29:54.914871 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:29:54.914878 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:29:54.914888 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:29:54.914895 kernel: Segment Routing with IPv6
Dec 13 01:29:54.914902 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:29:54.914909 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:29:54.914917 kernel: Key type dns_resolver registered
Dec 13 01:29:54.914924 kernel: registered taskstats version 1
Dec 13 01:29:54.914931 kernel: Loading compiled-in X.509 certificates
Dec 13 01:29:54.914938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:29:54.914946 kernel: Key type .fscrypt registered
Dec 13 01:29:54.914954 kernel: Key type fscrypt-provisioning registered
Dec 13 01:29:54.914961 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:29:54.914969 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:29:54.914976 kernel: ima: No architecture policies found Dec 13 01:29:54.914983 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:29:54.914990 kernel: clk: Disabling unused clocks Dec 13 01:29:54.914997 kernel: Freeing unused kernel memory: 39360K Dec 13 01:29:54.915005 kernel: Run /init as init process Dec 13 01:29:54.915012 kernel: with arguments: Dec 13 01:29:54.915020 kernel: /init Dec 13 01:29:54.915027 kernel: with environment: Dec 13 01:29:54.915038 kernel: HOME=/ Dec 13 01:29:54.915045 kernel: TERM=linux Dec 13 01:29:54.915052 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:29:54.915061 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:54.915070 systemd[1]: Detected virtualization kvm. Dec 13 01:29:54.915078 systemd[1]: Detected architecture arm64. Dec 13 01:29:54.915087 systemd[1]: Running in initrd. Dec 13 01:29:54.915095 systemd[1]: No hostname configured, using default hostname. Dec 13 01:29:54.915102 systemd[1]: Hostname set to . Dec 13 01:29:54.915110 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:54.915118 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:29:54.915126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:54.915133 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:54.915142 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:29:54.915154 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:54.915161 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:29:54.915169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:29:54.915178 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:29:54.915186 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:29:54.915194 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:54.915202 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:54.915211 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:29:54.915221 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:54.915229 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:54.915236 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:29:54.915244 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:54.915252 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:54.915260 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:29:54.915268 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:29:54.915277 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Dec 13 01:29:54.915285 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:54.915293 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:54.915300 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:29:54.915308 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:29:54.915316 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:54.915323 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:29:54.915331 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:29:54.915339 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:54.915348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:54.915356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:54.915449 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:54.915460 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:54.915468 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:29:54.915476 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:29:54.915515 systemd-journald[237]: Collecting audit messages is disabled. Dec 13 01:29:54.915536 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:29:54.915547 systemd-journald[237]: Journal started Dec 13 01:29:54.915565 systemd-journald[237]: Runtime Journal (/run/log/journal/5a0391942c8b453ab01cdc3457c81dfe) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:29:54.906820 systemd-modules-load[238]: Inserted module 'overlay' Dec 13 01:29:54.919604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:29:54.919622 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:54.920135 systemd-modules-load[238]: Inserted module 'br_netfilter' Dec 13 01:29:54.921047 kernel: Bridge firewalling registered Dec 13 01:29:54.922387 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:54.925452 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:54.937626 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:54.939291 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:54.941281 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:54.943887 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:29:54.950026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:54.951186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:54.954858 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:54.964561 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:29:54.966955 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:54.970064 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Dec 13 01:29:54.982145 dracut-cmdline[280]: dracut-dracut-053 Dec 13 01:29:54.984461 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:29:54.988988 systemd-resolved[275]: Positive Trust Anchors: Dec 13 01:29:54.989004 systemd-resolved[275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:29:54.989035 systemd-resolved[275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:29:54.993689 systemd-resolved[275]: Defaulting to hostname 'linux'. Dec 13 01:29:54.997014 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:29:54.998131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:29:55.049395 kernel: SCSI subsystem initialized Dec 13 01:29:55.054387 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:29:55.064387 kernel: iscsi: registered transport (tcp) Dec 13 01:29:55.077435 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:29:55.077492 kernel: QLogic iSCSI HBA Driver Dec 13 01:29:55.117092 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:55.124511 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:29:55.142396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:29:55.142426 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:29:55.142445 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:29:55.188407 kernel: raid6: neonx8 gen() 15695 MB/s Dec 13 01:29:55.205407 kernel: raid6: neonx4 gen() 15600 MB/s Dec 13 01:29:55.222395 kernel: raid6: neonx2 gen() 13179 MB/s Dec 13 01:29:55.239401 kernel: raid6: neonx1 gen() 10438 MB/s Dec 13 01:29:55.256392 kernel: raid6: int64x8 gen() 6940 MB/s Dec 13 01:29:55.273391 kernel: raid6: int64x4 gen() 7344 MB/s Dec 13 01:29:55.290393 kernel: raid6: int64x2 gen() 6120 MB/s Dec 13 01:29:55.307486 kernel: raid6: int64x1 gen() 5053 MB/s Dec 13 01:29:55.307500 kernel: raid6: using algorithm neonx8 gen() 15695 MB/s Dec 13 01:29:55.325448 kernel: raid6: .... xor() 11914 MB/s, rmw enabled Dec 13 01:29:55.325463 kernel: raid6: using neon recovery algorithm Dec 13 01:29:55.330774 kernel: xor: measuring software checksum speed Dec 13 01:29:55.330792 kernel: 8regs : 19716 MB/sec Dec 13 01:29:55.331456 kernel: 32regs : 19299 MB/sec Dec 13 01:29:55.332680 kernel: arm64_neon : 26458 MB/sec Dec 13 01:29:55.332697 kernel: xor: using function: arm64_neon (26458 MB/sec) Dec 13 01:29:55.382415 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:29:55.393086 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:29:55.411558 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:55.424606 systemd-udevd[461]: Using default interface naming scheme 'v255'. Dec 13 01:29:55.427714 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:55.430721 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:29:55.445101 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Dec 13 01:29:55.471538 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:55.487557 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:55.525120 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:55.535544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:29:55.545277 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:55.546895 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:55.548498 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:55.549553 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:55.556559 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:29:55.566204 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:55.572390 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Dec 13 01:29:55.588840 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:29:55.588940 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 01:29:55.588952 kernel: GPT:9289727 != 19775487 Dec 13 01:29:55.588966 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:29:55.588976 kernel: GPT:9289727 != 19775487 Dec 13 01:29:55.588985 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:29:55.588994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:29:55.577065 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:55.577149 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:55.582810 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:55.586137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:55.586194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:55.588692 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:55.602509 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:29:55.606386 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/vda3 scanned by (udev-worker) (520) Dec 13 01:29:55.608393 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (523) Dec 13 01:29:55.615038 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 13 01:29:55.616506 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:55.625846 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Dec 13 01:29:55.633239 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 13 01:29:55.634504 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 13 01:29:55.640638 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:29:55.660512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:29:55.662249 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:29:55.667732 disk-uuid[550]: Primary Header is updated. Dec 13 01:29:55.667732 disk-uuid[550]: Secondary Entries is updated. Dec 13 01:29:55.667732 disk-uuid[550]: Secondary Header is updated. Dec 13 01:29:55.670776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:29:55.686077 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:56.684399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:29:56.684582 disk-uuid[552]: The operation has completed successfully. Dec 13 01:29:56.702021 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:29:56.702109 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:29:56.729498 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:29:56.733399 sh[574]: Success Dec 13 01:29:56.749416 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:29:56.774667 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:29:56.783610 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:29:56.785467 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 01:29:56.794695 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:29:56.794725 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:29:56.794736 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:29:56.796485 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:29:56.796498 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:29:56.800389 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:29:56.801636 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:29:56.809500 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:29:56.811075 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:29:56.818565 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:29:56.818603 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:29:56.818614 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:29:56.822023 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:29:56.827938 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:29:56.829790 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:29:56.834858 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Dec 13 01:29:56.841525 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:29:56.907688 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:56.919585 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:29:56.937003 ignition[663]: Ignition 2.19.0 Dec 13 01:29:56.937012 ignition[663]: Stage: fetch-offline Dec 13 01:29:56.937044 ignition[663]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:56.937052 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:56.937230 ignition[663]: parsed url from cmdline: "" Dec 13 01:29:56.937234 ignition[663]: no config URL provided Dec 13 01:29:56.937238 ignition[663]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:29:56.937245 ignition[663]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:29:56.937266 ignition[663]: op(1): [started] loading QEMU firmware config module Dec 13 01:29:56.937270 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:29:56.945860 systemd-networkd[765]: lo: Link UP Dec 13 01:29:56.944048 ignition[663]: op(1): [finished] loading QEMU firmware config module Dec 13 01:29:56.945864 systemd-networkd[765]: lo: Gained carrier Dec 13 01:29:56.946562 systemd-networkd[765]: Enumeration completed Dec 13 01:29:56.946638 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:29:56.948891 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:56.948894 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:29:56.955030 ignition[663]: parsing config with SHA512: ad070fde47df6c1acc6842e58cf664b2d59797481170672d26489a9925c798dffc5c82029282bfc4fa1ce7c8b84f19100e28899ecd2c9d5bbd53660d7dfcc4b6 Dec 13 01:29:56.949498 systemd[1]: Reached target network.target - Network. Dec 13 01:29:56.950540 systemd-networkd[765]: eth0: Link UP Dec 13 01:29:56.950544 systemd-networkd[765]: eth0: Gained carrier Dec 13 01:29:56.958346 ignition[663]: fetch-offline: fetch-offline passed Dec 13 01:29:56.950550 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:29:56.958447 ignition[663]: Ignition finished successfully Dec 13 01:29:56.958032 unknown[663]: fetched base config from "system" Dec 13 01:29:56.958039 unknown[663]: fetched user config from "qemu" Dec 13 01:29:56.959708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:56.961598 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:29:56.964416 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:29:56.967510 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:29:56.977174 ignition[772]: Ignition 2.19.0 Dec 13 01:29:56.977183 ignition[772]: Stage: kargs Dec 13 01:29:56.977336 ignition[772]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:56.977345 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:56.978034 ignition[772]: kargs: kargs passed Dec 13 01:29:56.981663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Dec 13 01:29:56.978074 ignition[772]: Ignition finished successfully Dec 13 01:29:56.996503 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 13 01:29:57.005430 ignition[781]: Ignition 2.19.0 Dec 13 01:29:57.005440 ignition[781]: Stage: disks Dec 13 01:29:57.005594 ignition[781]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:57.007926 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:29:57.005603 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:57.009339 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:57.006234 ignition[781]: disks: disks passed Dec 13 01:29:57.011002 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:29:57.006278 ignition[781]: Ignition finished successfully Dec 13 01:29:57.012936 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:57.014697 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:29:57.016082 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:29:57.027495 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:29:57.037572 systemd-fsck[792]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:29:57.040485 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:29:57.042818 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:29:57.089404 kernel: EXT4-fs (vda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:29:57.089577 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:29:57.090677 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:57.102450 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:57.103963 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:29:57.105501 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:29:57.105539 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:29:57.111856 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (800) Dec 13 01:29:57.111877 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:29:57.105559 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:57.116267 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:29:57.116286 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:29:57.116296 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:29:57.112325 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:29:57.117760 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:29:57.119571 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:57.162693 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:29:57.166728 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:29:57.170262 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:29:57.174063 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:29:57.237488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:57.250506 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:29:57.252832 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:29:57.257390 kernel: BTRFS info (device vda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:29:57.274924 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:29:57.278864 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:29:57.280848 ignition[913]: INFO : Ignition 2.19.0 Dec 13 01:29:57.280848 ignition[913]: INFO : Stage: mount Dec 13 01:29:57.280848 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:57.280848 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:57.280848 ignition[913]: INFO : mount: mount passed Dec 13 01:29:57.280848 ignition[913]: INFO : Ignition finished successfully Dec 13 01:29:57.287512 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:29:57.793826 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:29:57.802562 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:29:57.809071 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Dec 13 01:29:57.809099 kernel: BTRFS info (device vda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:29:57.809110 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:29:57.810743 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:29:57.813398 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 01:29:57.814014 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:29:57.830719 ignition[943]: INFO : Ignition 2.19.0
Dec 13 01:29:57.830719 ignition[943]: INFO : Stage: files
Dec 13 01:29:57.832277 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:29:57.832277 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:29:57.834569 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:29:57.834569 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:29:57.834569 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:29:57.838608 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Dec 13 01:29:57.835719 unknown[943]: wrote ssh authorized keys file for user: core
Dec 13 01:29:58.179543 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 01:29:58.556404 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Dec 13 01:29:58.556404 ignition[943]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 01:29:58.560736 ignition[943]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:29:58.560736 ignition[943]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:29:58.560736 ignition[943]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 01:29:58.560736 ignition[943]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:29:58.588474 ignition[943]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:29:58.593634 ignition[943]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:29:58.593634 ignition[943]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:29:58.593634 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:58.593634 ignition[943]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:29:58.593634 ignition[943]: INFO : files: files passed
Dec 13 01:29:58.593634 ignition[943]: INFO : Ignition finished successfully
Dec 13 01:29:58.595465 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:29:58.624566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:29:58.627316 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:29:58.639431 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:29:58.639515 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:29:58.644755 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:29:58.647932 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:58.647932 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:58.651249 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:29:58.653786 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:29:58.655111 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:29:58.670377 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:29:58.688989 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:29:58.689083 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:29:58.691301 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:29:58.693231 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:29:58.695080 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:29:58.695763 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:29:58.716575 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:58.718791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:29:58.730600 systemd-networkd[765]: eth0: Gained IPv6LL
Dec 13 01:29:58.737502 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:29:58.738694 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:29:58.740678 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:29:58.742364 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:29:58.742496 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:29:58.744948 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:29:58.748217 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:29:58.749823 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:29:58.751956 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:29:58.753846 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:29:58.755760 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:29:58.758214 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:29:58.761458 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:29:58.763516 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:29:58.765178 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:29:58.767858 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:29:58.767974 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:29:58.770466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:29:58.772404 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:58.774301 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:29:58.777448 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:58.778707 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:29:58.778818 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:29:58.781658 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:29:58.781774 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:29:58.783693 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:29:58.785217 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:29:58.788668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:58.789918 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:29:58.792049 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:29:58.799365 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:29:58.799667 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:29:58.803025 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:29:58.803147 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:29:58.804815 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:29:58.804981 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:29:58.806817 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:29:58.807266 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:29:58.819077 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:29:58.819998 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:29:58.820121 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:58.823827 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:29:58.824806 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 01:29:58.824936 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:58.826728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:29:58.826838 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:29:58.834952 ignition[998]: INFO : Ignition 2.19.0 Dec 13 01:29:58.834952 ignition[998]: INFO : Stage: umount Dec 13 01:29:58.834952 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:29:58.834952 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:29:58.835146 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:29:58.843389 ignition[998]: INFO : umount: umount passed Dec 13 01:29:58.843389 ignition[998]: INFO : Ignition finished successfully Dec 13 01:29:58.835233 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:29:58.839464 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:29:58.839567 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:29:58.841958 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:29:58.843117 systemd[1]: Stopped target network.target - Network. Dec 13 01:29:58.845245 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:29:58.845314 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:29:58.847015 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:29:58.847062 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:29:58.848875 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:29:58.848918 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:29:58.850819 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:29:58.850866 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:29:58.852738 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:29:58.854418 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:29:58.859463 systemd-networkd[765]: eth0: DHCPv6 lease lost Dec 13 01:29:58.861328 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:29:58.863413 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:29:58.864851 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:29:58.864882 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:58.877466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:29:58.878285 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:29:58.878339 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:29:58.880435 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:29:58.885231 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:29:58.885320 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:29:58.904774 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:29:58.904929 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:29:58.907242 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:29:58.907330 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Dec 13 01:29:58.909227 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:29:58.910432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:29:58.913364 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:29:58.913431 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:58.915467 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:29:58.915501 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:58.917216 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:29:58.917261 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:29:58.919860 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:29:58.919904 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:29:58.922608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:29:58.922652 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:29:58.925345 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:29:58.925437 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:29:58.929862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:29:58.931236 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:29:58.931290 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:58.933136 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:29:58.933178 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:58.935084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:29:58.935126 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:58.937340 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:29:58.937404 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:29:58.939288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:29:58.939330 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:29:58.941627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:29:58.941703 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:29:58.943584 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:29:58.945860 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:29:58.955237 systemd[1]: Switching root. Dec 13 01:29:58.983746 systemd-journald[237]: Journal stopped Dec 13 01:29:59.696030 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Dec 13 01:29:59.696084 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:29:59.696097 kernel: SELinux: policy capability open_perms=1 Dec 13 01:29:59.696107 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:29:59.696120 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:29:59.696134 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:29:59.696144 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:29:59.696153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:29:59.696163 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:29:59.696172 kernel: audit: type=1403 audit(1734053399.110:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:29:59.696183 systemd[1]: Successfully loaded SELinux policy in 37.709ms. Dec 13 01:29:59.696199 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.304ms. Dec 13 01:29:59.696211 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:29:59.696223 systemd[1]: Detected virtualization kvm. Dec 13 01:29:59.696235 systemd[1]: Detected architecture arm64. Dec 13 01:29:59.696245 systemd[1]: Detected first boot. Dec 13 01:29:59.696256 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:29:59.696267 zram_generator::config[1044]: No configuration found. Dec 13 01:29:59.696278 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:29:59.696289 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:29:59.696300 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:29:59.696312 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:29:59.696327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:29:59.696338 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:29:59.696361 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:29:59.696493 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:29:59.696506 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:29:59.696518 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:29:59.696532 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:29:59.696543 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:29:59.696553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:29:59.696564 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:29:59.696578 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:29:59.696590 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:29:59.696601 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
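The SELinux policy load above is accompanied by an audit record whose timestamp is raw epoch seconds, audit(1734053399.110:2), while the journal prefixes use wall-clock time. A small standard-library sketch converting that value to UTC shows the two agree (the epoch value is the one from the audit line):

```python
from datetime import datetime, timezone

# Epoch seconds copied from the audit record above: audit(1734053399.110:2)
audit_epoch = 1734053399.110

when = datetime.fromtimestamp(audit_epoch, tz=timezone.utc)
print(when.isoformat(timespec="milliseconds"))
# 2024-12-13T01:29:59.110+00:00, matching the surrounding journal timestamps
```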
Dec 13 01:29:59.696612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:29:59.696624 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:29:59.696635 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:29:59.696647 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:29:59.696657 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:29:59.696668 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:29:59.696679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:29:59.696689 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:29:59.696700 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:29:59.696713 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:29:59.696723 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:29:59.696735 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:29:59.696746 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:29:59.696757 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:29:59.696768 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:29:59.696779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:29:59.696790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:29:59.696801 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:29:59.696814 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:29:59.696875 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:29:59.696888 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:29:59.696900 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:29:59.696911 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:29:59.696922 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:29:59.696933 systemd[1]: Reached target machines.target - Containers. Dec 13 01:29:59.696944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:29:59.696955 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:29:59.696969 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:29:59.696980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:29:59.696991 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:29:59.697002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:29:59.697013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:29:59.697023 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:29:59.697034 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Dec 13 01:29:59.697045 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:29:59.697057 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:29:59.697068 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:29:59.697078 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:29:59.697088 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:29:59.697100 kernel: fuse: init (API version 7.39) Dec 13 01:29:59.697110 kernel: loop: module loaded Dec 13 01:29:59.697120 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:29:59.697130 kernel: ACPI: bus type drm_connector registered Dec 13 01:29:59.697141 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:29:59.697153 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:29:59.697164 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:29:59.697174 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:29:59.697206 systemd-journald[1108]: Collecting audit messages is disabled. Dec 13 01:29:59.697228 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:29:59.697239 systemd[1]: Stopped verity-setup.service. Dec 13 01:29:59.697250 systemd-journald[1108]: Journal started Dec 13 01:29:59.697272 systemd-journald[1108]: Runtime Journal (/run/log/journal/5a0391942c8b453ab01cdc3457c81dfe) is 5.9M, max 47.3M, 41.4M free. Dec 13 01:29:59.478061 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:29:59.502890 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:29:59.503243 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:29:59.701748 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:29:59.702487 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:29:59.703939 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:29:59.705217 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:29:59.706398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:29:59.707597 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:29:59.708829 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:29:59.711399 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:29:59.712890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:29:59.714505 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:29:59.714657 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:29:59.716077 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:29:59.716225 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:29:59.718705 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:29:59.718852 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:29:59.720181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:29:59.720328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 01:29:59.721850 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:29:59.722014 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:29:59.723474 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:29:59.723613 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:29:59.725291 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:29:59.727047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:29:59.728636 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:29:59.743151 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:29:59.759535 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:29:59.761949 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:29:59.763258 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:29:59.763314 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:29:59.765506 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:29:59.767807 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:29:59.770086 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:29:59.771299 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:29:59.772987 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:29:59.775127 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:29:59.776443 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:29:59.780588 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:29:59.782187 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:29:59.783598 systemd-journald[1108]: Time spent on flushing to /var/log/journal/5a0391942c8b453ab01cdc3457c81dfe is 22.801ms for 836 entries. Dec 13 01:29:59.783598 systemd-journald[1108]: System Journal (/var/log/journal/5a0391942c8b453ab01cdc3457c81dfe) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:29:59.821584 systemd-journald[1108]: Received client request to flush runtime journal. Dec 13 01:29:59.821659 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 01:29:59.785584 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:29:59.789461 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:29:59.791693 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:29:59.795471 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:29:59.797160 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:29:59.798738 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 13 01:29:59.801403 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:29:59.803102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:29:59.808482 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:29:59.822341 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:29:59.825423 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:29:59.840719 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:29:59.840769 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:29:59.849084 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:29:59.853610 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:29:59.871606 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:29:59.873727 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:29:59.874498 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:29:59.878421 kernel: loop1: detected capacity change from 0 to 189592 Dec 13 01:29:59.880550 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:29:59.903534 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 01:29:59.903553 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Dec 13 01:29:59.908640 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:29:59.922943 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:29:59.962413 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:29:59.967400 kernel: loop4: detected capacity change from 0 to 189592 Dec 13 01:29:59.973391 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:29:59.983135 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:29:59.983578 (sd-merge)[1180]: Merged extensions into '/usr'. Dec 13 01:29:59.989964 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:29:59.989982 systemd[1]: Reloading... Dec 13 01:30:00.036432 zram_generator::config[1206]: No configuration found. Dec 13 01:30:00.109900 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:30:00.148402 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:30:00.184357 systemd[1]: Reloading finished in 193 ms. Dec 13 01:30:00.215847 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:30:00.217435 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:30:00.234539 systemd[1]: Starting ensure-sysext.service... Dec 13 01:30:00.236336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:30:00.249326 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:30:00.249341 systemd[1]: Reloading... 
Dec 13 01:30:00.253854 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:30:00.254438 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:30:00.255174 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:30:00.255633 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Dec 13 01:30:00.255761 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Dec 13 01:30:00.260591 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:30:00.260695 systemd-tmpfiles[1241]: Skipping /boot Dec 13 01:30:00.267825 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:30:00.267948 systemd-tmpfiles[1241]: Skipping /boot Dec 13 01:30:00.293393 zram_generator::config[1271]: No configuration found. Dec 13 01:30:00.372198 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:30:00.407180 systemd[1]: Reloading finished in 157 ms. Dec 13 01:30:00.423438 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:30:00.432788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:30:00.441815 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:30:00.444557 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:30:00.447284 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:30:00.450634 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:30:00.453447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:30:00.456575 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:30:00.459870 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:30:00.463132 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:30:00.474972 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:30:00.477208 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:30:00.478656 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:30:00.479456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:30:00.481415 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:30:00.483534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:30:00.483659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:30:00.487139 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Dec 13 01:30:00.488419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:30:00.490047 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:30:00.490186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 13 01:30:00.495515 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:30:00.495779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:30:00.508837 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:30:00.512919 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:30:00.517709 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:30:00.524412 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:30:00.526903 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:30:00.533069 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:30:00.540678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:30:00.545117 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:30:00.549659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:30:00.552628 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:30:00.554430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:30:00.558019 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:30:00.560913 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:30:00.562422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:30:00.567238 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:30:00.567411 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:30:00.569682 augenrules[1360]: No rules Dec 13 01:30:00.570386 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:30:00.572147 systemd[1]: Finished ensure-sysext.service. Dec 13 01:30:00.574585 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:30:00.577429 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:30:00.584186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1333) Dec 13 01:30:00.581841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:30:00.582569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:30:00.594320 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:30:00.594970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:30:00.604403 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1357) Dec 13 01:30:00.610674 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:30:00.610743 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:30:00.610778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 13 01:30:00.616511 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 01:30:00.617624 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:30:00.622393 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1357) Dec 13 01:30:00.664207 systemd-networkd[1358]: lo: Link UP Dec 13 01:30:00.664214 systemd-networkd[1358]: lo: Gained carrier Dec 13 01:30:00.664957 systemd-networkd[1358]: Enumeration completed Dec 13 01:30:00.665063 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:30:00.665472 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:30:00.665482 systemd-networkd[1358]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:30:00.666071 systemd-networkd[1358]: eth0: Link UP Dec 13 01:30:00.666083 systemd-networkd[1358]: eth0: Gained carrier Dec 13 01:30:00.666096 systemd-networkd[1358]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:30:00.672577 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:30:00.677945 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 01:30:00.682598 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:30:00.691459 systemd-networkd[1358]: eth0: DHCPv4 address 10.0.0.68/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:30:00.695041 systemd-resolved[1309]: Positive Trust Anchors: Dec 13 01:30:00.695060 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:30:00.695092 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:30:00.704163 systemd-resolved[1309]: Defaulting to hostname 'linux'. Dec 13 01:30:00.227880 systemd-timesyncd[1380]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:30:00.247627 systemd-journald[1108]: Time jumped backwards, rotating. Dec 13 01:30:00.227934 systemd-timesyncd[1380]: Initial clock synchronization to Fri 2024-12-13 01:30:00.227786 UTC. Dec 13 01:30:00.237746 systemd-resolved[1309]: Clock change detected. Flushing caches. Dec 13 01:30:00.237900 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:30:00.240162 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 01:30:00.241965 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:30:00.244809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
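systemd-networkd above acquires a DHCPv4 lease of 10.0.0.68/16 with gateway 10.0.0.1, the same host that systemd-timesyncd later contacts for NTP on port 123. A quick sketch with Python's ipaddress module derives the network from that lease and confirms the gateway falls inside it (addresses taken from the lease line; the check itself is illustrative):

```python
import ipaddress

# Lease values from the systemd-networkd line above.
iface = ipaddress.ip_interface("10.0.0.68/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)              # 10.0.0.0/16
print(iface.network.netmask)      # 255.255.0.0
print(gateway in iface.network)   # True
```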
Dec 13 01:30:00.247173 systemd[1]: Reached target network.target - Network. Dec 13 01:30:00.249793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:30:00.251189 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:30:00.265259 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:30:00.271701 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:30:00.295973 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:30:00.299360 lvm[1398]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:30:00.332016 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:30:00.333683 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:30:00.334820 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:30:00.335948 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:30:00.337188 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:30:00.338616 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:30:00.339744 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:30:00.341001 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:30:00.342246 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:30:00.342285 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:30:00.343245 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:30:00.345582 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:30:00.347949 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:30:00.356236 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:30:00.359273 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:30:00.360885 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:30:00.362084 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:30:00.363065 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:30:00.364094 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:30:00.364129 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:30:00.365035 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:30:00.367260 lvm[1405]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:30:00.367397 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:30:00.370964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:30:00.374760 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:30:00.377496 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Dec 13 01:30:00.378892 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:30:00.380813 jq[1408]: false Dec 13 01:30:00.382853 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:30:00.385712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:30:00.390379 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:30:00.391734 extend-filesystems[1409]: Found loop3 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found loop4 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found loop5 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda1 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda2 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda3 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found usr Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda4 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda6 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda7 Dec 13 01:30:00.395318 extend-filesystems[1409]: Found vda9 Dec 13 01:30:00.395318 extend-filesystems[1409]: Checking size of /dev/vda9 Dec 13 01:30:00.409216 dbus-daemon[1407]: [system] SELinux support is enabled Dec 13 01:30:00.415096 extend-filesystems[1409]: Resized partition /dev/vda9 Dec 13 01:30:00.397316 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:30:00.397778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:30:00.406794 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:30:00.411267 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:30:00.414193 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:30:00.418308 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:30:00.422358 jq[1428]: true Dec 13 01:30:00.423110 extend-filesystems[1429]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:30:00.425017 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (1339) Dec 13 01:30:00.424250 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:30:00.424410 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:30:00.424703 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:30:00.424841 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:30:00.426474 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:30:00.427910 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 13 01:30:00.433390 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:30:00.441172 (ntainerd)[1434]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:30:00.450657 jq[1431]: true Dec 13 01:30:00.466566 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:30:00.466861 update_engine[1421]: I20241213 01:30:00.466638 1421 main.cc:92] Flatcar Update Engine starting Dec 13 01:30:00.469600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:30:00.469629 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:30:00.479205 update_engine[1421]: I20241213 01:30:00.473064 1421 update_check_scheduler.cc:74] Next update check in 2m28s Dec 13 01:30:00.471168 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:30:00.471191 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:30:00.473788 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:30:00.483461 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:30:00.483461 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:30:00.483461 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:30:00.487894 extend-filesystems[1409]: Resized filesystem in /dev/vda9 Dec 13 01:30:00.489807 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:30:00.491296 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:30:00.491483 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:30:00.496349 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:30:00.505218 systemd-logind[1415]: New seat seat0. Dec 13 01:30:00.507704 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:30:00.516258 bash[1458]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:30:00.520552 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:30:00.523534 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:30:00.525470 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:30:00.624917 containerd[1434]: time="2024-12-13T01:30:00.624818542Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:30:00.649319 containerd[1434]: time="2024-12-13T01:30:00.649097062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650487 containerd[1434]: time="2024-12-13T01:30:00.650434862Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
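extend-filesystems grows the root filesystem online: the kernel reports /dev/vda9 resizing from 553472 to 1864699 blocks, and resize2fs confirms the blocks are 4k. A short worked check of that arithmetic (block counts and block size taken from the log) puts the change at roughly 2.1 GiB to 7.1 GiB:

```python
BLOCK_SIZE = 4096  # "(4k) blocks" per the resize2fs output above

old_blocks = 553_472     # EXT4-fs: resizing filesystem from 553472 ...
new_blocks = 1_864_699   # ... to 1864699 blocks

GIB = 1024 ** 3
print(f"before: {old_blocks * BLOCK_SIZE / GIB:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * BLOCK_SIZE / GIB:.2f} GiB")  # ~7.11 GiB
```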
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650532 containerd[1434]: time="2024-12-13T01:30:00.650485502Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:30:00.650532 containerd[1434]: time="2024-12-13T01:30:00.650520742Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:30:00.650721 containerd[1434]: time="2024-12-13T01:30:00.650691022Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:30:00.650721 containerd[1434]: time="2024-12-13T01:30:00.650716142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650777 containerd[1434]: time="2024-12-13T01:30:00.650768582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650796 containerd[1434]: time="2024-12-13T01:30:00.650780302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650953 containerd[1434]: time="2024-12-13T01:30:00.650923942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650953 containerd[1434]: time="2024-12-13T01:30:00.650945102Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650995 containerd[1434]: time="2024-12-13T01:30:00.650961942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:30:00.650995 containerd[1434]: time="2024-12-13T01:30:00.650971582Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.651174 containerd[1434]: time="2024-12-13T01:30:00.651156382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.651385 containerd[1434]: time="2024-12-13T01:30:00.651357902Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:30:00.651495 containerd[1434]: time="2024-12-13T01:30:00.651475622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:30:00.651527 containerd[1434]: time="2024-12-13T01:30:00.651494142Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:30:00.651618 containerd[1434]: time="2024-12-13T01:30:00.651600382Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:30:00.651659 containerd[1434]: time="2024-12-13T01:30:00.651647342Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:30:00.654460 containerd[1434]: time="2024-12-13T01:30:00.654425822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:30:00.654493 containerd[1434]: time="2024-12-13T01:30:00.654469662Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:30:00.654493 containerd[1434]: time="2024-12-13T01:30:00.654489222Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:30:00.654557 containerd[1434]: time="2024-12-13T01:30:00.654506822Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:30:00.654557 containerd[1434]: time="2024-12-13T01:30:00.654528902Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:30:00.654689 containerd[1434]: time="2024-12-13T01:30:00.654658942Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:30:00.654883 containerd[1434]: time="2024-12-13T01:30:00.654856782Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:30:00.654974 containerd[1434]: time="2024-12-13T01:30:00.654951662Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:30:00.654998 containerd[1434]: time="2024-12-13T01:30:00.654972662Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:30:00.654998 containerd[1434]: time="2024-12-13T01:30:00.654986702Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:30:00.655037 containerd[1434]: time="2024-12-13T01:30:00.655002622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655037 containerd[1434]: time="2024-12-13T01:30:00.655018542Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655037 containerd[1434]: time="2024-12-13T01:30:00.655030382Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655083 containerd[1434]: time="2024-12-13T01:30:00.655043582Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655083 containerd[1434]: time="2024-12-13T01:30:00.655057262Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655083 containerd[1434]: time="2024-12-13T01:30:00.655069142Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655132 containerd[1434]: time="2024-12-13T01:30:00.655081102Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:30:00.655132 containerd[1434]: time="2024-12-13T01:30:00.655094742Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Dec 13 01:30:00.655132 containerd[1434]: time="2024-12-13T01:30:00.655112862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655132 containerd[1434]: time="2024-12-13T01:30:00.655125542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655136142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655147862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655159822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655172742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655184182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655204 containerd[1434]: time="2024-12-13T01:30:00.655196542Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655214782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655230102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655241502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655252182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655263022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655277262Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655295382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655306622Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.655341 containerd[1434]: time="2024-12-13T01:30:00.655316622Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:30:00.656151 containerd[1434]: time="2024-12-13T01:30:00.656119142Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:30:00.656197 containerd[1434]: time="2024-12-13T01:30:00.656151782Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:30:00.656197 containerd[1434]: time="2024-12-13T01:30:00.656162782Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:30:00.656286 containerd[1434]: time="2024-12-13T01:30:00.656182102Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:30:00.656286 containerd[1434]: time="2024-12-13T01:30:00.656215862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.656286 containerd[1434]: time="2024-12-13T01:30:00.656233662Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:30:00.656286 containerd[1434]: time="2024-12-13T01:30:00.656243502Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:30:00.656286 containerd[1434]: time="2024-12-13T01:30:00.656255142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:30:00.656666 containerd[1434]: time="2024-12-13T01:30:00.656611862Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:30:00.656778 containerd[1434]: time="2024-12-13T01:30:00.656675822Z" level=info msg="Connect containerd service" Dec 13 01:30:00.656778 containerd[1434]: time="2024-12-13T01:30:00.656702422Z" level=info msg="using legacy CRI server" Dec 13 01:30:00.656778 containerd[1434]: time="2024-12-13T01:30:00.656708542Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:30:00.656830 containerd[1434]: time="2024-12-13T01:30:00.656780062Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:30:00.657386 containerd[1434]: time="2024-12-13T01:30:00.657358702Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:30:00.657605 containerd[1434]: time="2024-12-13T01:30:00.657569462Z" level=info msg="Start subscribing containerd event" Dec 13 01:30:00.657630 containerd[1434]: time="2024-12-13T01:30:00.657618582Z" level=info msg="Start recovering state" Dec 13 01:30:00.657692 containerd[1434]: time="2024-12-13T01:30:00.657679182Z" level=info msg="Start event monitor" Dec 13 01:30:00.657711 containerd[1434]: time="2024-12-13T01:30:00.657693942Z" level=info msg="Start snapshots syncer" Dec 13 01:30:00.657711 containerd[1434]: time="2024-12-13T01:30:00.657703422Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:30:00.657711 containerd[1434]: time="2024-12-13T01:30:00.657710662Z" level=info msg="Start streaming server" Dec 13 01:30:00.658243 containerd[1434]: time="2024-12-13T01:30:00.658224422Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:30:00.658296 containerd[1434]: time="2024-12-13T01:30:00.658282582Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:30:00.659551 containerd[1434]: time="2024-12-13T01:30:00.658349062Z" level=info msg="containerd successfully booted in 0.034950s" Dec 13 01:30:00.658429 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:30:00.742791 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:30:00.763594 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:30:00.775775 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:30:00.782492 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:30:00.783625 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:30:00.786275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:30:00.800170 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:30:00.803092 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:30:00.805230 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:30:00.806670 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:30:01.702773 systemd-networkd[1358]: eth0: Gained IPv6LL Dec 13 01:30:01.705482 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
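The "failed to load cni during init" entry above is containerd's CRI plugin reporting that /etc/cni/net.d holds no network configuration yet; that is expected on first boot, before a CNI provider (here, Cilium) installs its config, and the "Start cni network conf syncer for default" worker logged above is what later picks up whatever config is dropped in. A minimal sketch of the same check, assuming the conventional libcni file extensions (.conf, .conflist, .json); the helper name is ours, not containerd's:

    import glob
    import os

    def cni_configs(conf_dir="/etc/cni/net.d"):
        """List the CNI config files a CRI runtime would typically consider.

        An empty result corresponds to the "no network config found" error
        logged during containerd's CRI plugin init above.
        """
        patterns = ("*.conf", "*.conflist", "*.json")  # assumed default extensions
        found = []
        for pattern in patterns:
            found.extend(sorted(glob.glob(os.path.join(conf_dir, pattern))))
        return found

    if __name__ == "__main__":
        configs = cni_configs()
        print("CNI configs:", configs or "none (cni plugin not initialized)")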
Dec 13 01:30:01.707335 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:30:01.718856 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:30:01.721137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:01.723221 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:30:01.737864 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:30:01.738140 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:30:01.739839 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:30:01.752117 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:30:02.257877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:02.259451 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:30:02.260817 systemd[1]: Startup finished in 589ms (kernel) + 4.392s (initrd) + 3.670s (userspace) = 8.653s. Dec 13 01:30:02.262595 (kubelet)[1512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:30:02.780997 kubelet[1512]: E1213 01:30:02.780891 1512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:30:02.783512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:30:02.783689 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:30:07.518203 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:30:07.519263 systemd[1]: Started sshd@0-10.0.0.68:22-10.0.0.1:56866.service - OpenSSH per-connection server daemon (10.0.0.1:56866). Dec 13 01:30:07.568034 sshd[1525]: Accepted publickey for core from 10.0.0.1 port 56866 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:07.569775 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:07.607154 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:30:07.613784 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:30:07.615709 systemd-logind[1415]: New session 1 of user core. Dec 13 01:30:07.623108 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:30:07.625220 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:30:07.631965 (systemd)[1529]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:30:07.718129 systemd[1529]: Queued start job for default target default.target. Dec 13 01:30:07.727425 systemd[1529]: Created slice app.slice - User Application Slice. Dec 13 01:30:07.727458 systemd[1529]: Reached target paths.target - Paths. Dec 13 01:30:07.727470 systemd[1529]: Reached target timers.target - Timers. Dec 13 01:30:07.728778 systemd[1529]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:30:07.738454 systemd[1529]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:30:07.738525 systemd[1529]: Reached target sockets.target - Sockets. 
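The kubelet failure above (run.go:72, exit status 1) is the usual first-boot situation: /var/lib/kubelet/config.yaml has not been written yet, so the unit exits and is only started again later in this log (at 01:30:09, after the install script runs in session 7). A small preflight sketch that reproduces the same check; the path is taken from the error message, and the script itself is only illustrative:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path from the kubelet error above

    def main() -> int:
        if not os.path.isfile(KUBELET_CONFIG):
            # Mirrors the "open ...: no such file or directory" failure logged above.
            print(f"kubelet config missing: {KUBELET_CONFIG}", file=sys.stderr)
            return 1
        print("kubelet config present; kubelet.service should start cleanly")
        return 0

    if __name__ == "__main__":
        raise SystemExit(main())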
Dec 13 01:30:07.738554 systemd[1529]: Reached target basic.target - Basic System. Dec 13 01:30:07.738592 systemd[1529]: Reached target default.target - Main User Target. Dec 13 01:30:07.738619 systemd[1529]: Startup finished in 101ms. Dec 13 01:30:07.738878 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:30:07.740119 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:30:07.805837 systemd[1]: Started sshd@1-10.0.0.68:22-10.0.0.1:56868.service - OpenSSH per-connection server daemon (10.0.0.1:56868). Dec 13 01:30:07.839289 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 56868 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:07.841017 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:07.845243 systemd-logind[1415]: New session 2 of user core. Dec 13 01:30:07.856733 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:30:07.909691 sshd[1540]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:07.923884 systemd[1]: sshd@1-10.0.0.68:22-10.0.0.1:56868.service: Deactivated successfully. Dec 13 01:30:07.925783 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:30:07.927106 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:30:07.940803 systemd[1]: Started sshd@2-10.0.0.68:22-10.0.0.1:56870.service - OpenSSH per-connection server daemon (10.0.0.1:56870). Dec 13 01:30:07.941637 systemd-logind[1415]: Removed session 2. Dec 13 01:30:07.969996 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 56870 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:07.971305 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:07.975400 systemd-logind[1415]: New session 3 of user core. Dec 13 01:30:07.984058 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:30:08.032225 sshd[1547]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.043646 systemd[1]: sshd@2-10.0.0.68:22-10.0.0.1:56870.service: Deactivated successfully. Dec 13 01:30:08.044874 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:30:08.046723 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:30:08.047835 systemd[1]: Started sshd@3-10.0.0.68:22-10.0.0.1:56878.service - OpenSSH per-connection server daemon (10.0.0.1:56878). Dec 13 01:30:08.048670 systemd-logind[1415]: Removed session 3. Dec 13 01:30:08.083215 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 56878 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:08.084828 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:08.088616 systemd-logind[1415]: New session 4 of user core. Dec 13 01:30:08.096695 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:30:08.150179 sshd[1554]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.160893 systemd[1]: sshd@3-10.0.0.68:22-10.0.0.1:56878.service: Deactivated successfully. Dec 13 01:30:08.162915 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:30:08.164563 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:30:08.172807 systemd[1]: Started sshd@4-10.0.0.68:22-10.0.0.1:56880.service - OpenSSH per-connection server daemon (10.0.0.1:56880). Dec 13 01:30:08.176828 systemd-logind[1415]: Removed session 4. 
Dec 13 01:30:08.206820 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 56880 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:08.207929 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:08.211227 systemd-logind[1415]: New session 5 of user core. Dec 13 01:30:08.219726 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:30:08.277285 sudo[1564]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:30:08.277614 sudo[1564]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:30:08.290298 sudo[1564]: pam_unix(sudo:session): session closed for user root Dec 13 01:30:08.292009 sshd[1561]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.305800 systemd[1]: sshd@4-10.0.0.68:22-10.0.0.1:56880.service: Deactivated successfully. Dec 13 01:30:08.307053 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:30:08.308921 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:30:08.310040 systemd[1]: Started sshd@5-10.0.0.68:22-10.0.0.1:56892.service - OpenSSH per-connection server daemon (10.0.0.1:56892). Dec 13 01:30:08.310768 systemd-logind[1415]: Removed session 5. Dec 13 01:30:08.347248 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:08.348846 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:08.353451 systemd-logind[1415]: New session 6 of user core. Dec 13 01:30:08.362686 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:30:08.415906 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:30:08.416174 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:30:08.419299 sudo[1573]: pam_unix(sudo:session): session closed for user root Dec 13 01:30:08.423562 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:30:08.423834 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:30:08.449770 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:30:08.452016 auditctl[1576]: No rules Dec 13 01:30:08.452956 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:30:08.453157 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:30:08.454784 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:30:08.477967 augenrules[1594]: No rules Dec 13 01:30:08.479071 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:30:08.480262 sudo[1572]: pam_unix(sudo:session): session closed for user root Dec 13 01:30:08.482360 sshd[1569]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:08.488810 systemd[1]: sshd@5-10.0.0.68:22-10.0.0.1:56892.service: Deactivated successfully. Dec 13 01:30:08.490241 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:30:08.491438 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:30:08.492704 systemd[1]: Started sshd@6-10.0.0.68:22-10.0.0.1:56908.service - OpenSSH per-connection server daemon (10.0.0.1:56908). Dec 13 01:30:08.494253 systemd-logind[1415]: Removed session 6. 
Dec 13 01:30:08.527723 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 56908 ssh2: RSA SHA256:yVKhZEHbC7ylZ7bY3Y8pwdh1t/xp6Vz/y3yLFfd9j+Q Dec 13 01:30:08.528939 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:30:08.532757 systemd-logind[1415]: New session 7 of user core. Dec 13 01:30:08.539681 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:30:08.590424 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:30:08.590737 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:30:08.610813 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:30:08.625152 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:30:08.626600 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:30:09.031663 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:09.046876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:09.067365 systemd[1]: Reloading requested from client PID 1645 ('systemctl') (unit session-7.scope)... Dec 13 01:30:09.067383 systemd[1]: Reloading... Dec 13 01:30:09.133601 zram_generator::config[1681]: No configuration found. Dec 13 01:30:09.312084 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:30:09.363891 systemd[1]: Reloading finished in 296 ms. Dec 13 01:30:09.403634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:09.405921 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:09.407263 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:30:09.407456 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:09.408922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:30:09.498826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:30:09.502188 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:30:09.537895 kubelet[1730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:30:09.537895 kubelet[1730]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:30:09.537895 kubelet[1730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:30:09.538206 kubelet[1730]: I1213 01:30:09.537941 1730 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:30:10.484687 kubelet[1730]: I1213 01:30:10.484649 1730 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 01:30:10.486248 kubelet[1730]: I1213 01:30:10.484847 1730 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:30:10.486248 kubelet[1730]: I1213 01:30:10.485087 1730 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 01:30:10.523065 kubelet[1730]: I1213 01:30:10.522558 1730 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:30:10.534102 kubelet[1730]: E1213 01:30:10.534047 1730 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 01:30:10.534102 kubelet[1730]: I1213 01:30:10.534089 1730 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 01:30:10.539518 kubelet[1730]: I1213 01:30:10.539484 1730 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:30:10.540186 kubelet[1730]: I1213 01:30:10.540155 1730 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 01:30:10.540316 kubelet[1730]: I1213 01:30:10.540274 1730 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:30:10.540509 kubelet[1730]: I1213 01:30:10.540309 1730 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.68","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 13 01:30:10.540776 kubelet[1730]: I1213 01:30:10.540763 1730 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 01:30:10.540776 kubelet[1730]: I1213 01:30:10.540776 1730 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 01:30:10.541024 kubelet[1730]: I1213 01:30:10.541000 1730 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:30:10.543042 kubelet[1730]: I1213 01:30:10.543016 1730 kubelet.go:408] "Attempting to sync node with API server" Dec 13 01:30:10.543109 kubelet[1730]: I1213 01:30:10.543050 1730 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:30:10.543109 kubelet[1730]: I1213 01:30:10.543078 1730 kubelet.go:314] "Adding apiserver pod source" Dec 13 01:30:10.543109 kubelet[1730]: I1213 01:30:10.543096 1730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:30:10.544529 kubelet[1730]: E1213 01:30:10.544431 1730 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:10.544529 kubelet[1730]: E1213 01:30:10.544487 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:10.548833 kubelet[1730]: I1213 01:30:10.548801 1730 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:30:10.550531 kubelet[1730]: I1213 01:30:10.550510 1730 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:30:10.554210 kubelet[1730]: W1213 01:30:10.554159 1730 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:30:10.554890 kubelet[1730]: W1213 01:30:10.554570 1730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.68" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 01:30:10.554890 kubelet[1730]: E1213 01:30:10.554717 1730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.68\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:30:10.554890 kubelet[1730]: W1213 01:30:10.554767 1730 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 01:30:10.554890 kubelet[1730]: E1213 01:30:10.554798 1730 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Dec 13 01:30:10.556910 kubelet[1730]: I1213 01:30:10.556804 1730 server.go:1269] "Started kubelet" Dec 13 01:30:10.557552 kubelet[1730]: I1213 01:30:10.557432 1730 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:30:10.558082 kubelet[1730]: I1213 01:30:10.557453 1730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:30:10.558082 kubelet[1730]: I1213 01:30:10.558024 1730 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:30:10.558853 
kubelet[1730]: I1213 01:30:10.558580 1730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:30:10.558853 kubelet[1730]: I1213 01:30:10.558696 1730 server.go:460] "Adding debug handlers to kubelet server" Dec 13 01:30:10.559066 kubelet[1730]: I1213 01:30:10.559044 1730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 01:30:10.559917 kubelet[1730]: I1213 01:30:10.559883 1730 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 01:30:10.559994 kubelet[1730]: I1213 01:30:10.559974 1730 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 01:30:10.560127 kubelet[1730]: I1213 01:30:10.560107 1730 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:30:10.561300 kubelet[1730]: E1213 01:30:10.560767 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:10.561300 kubelet[1730]: I1213 01:30:10.561077 1730 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:30:10.561300 kubelet[1730]: I1213 01:30:10.561228 1730 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:30:10.562241 kubelet[1730]: E1213 01:30:10.562221 1730 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:30:10.562624 kubelet[1730]: I1213 01:30:10.562603 1730 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:30:10.572074 kubelet[1730]: E1213 01:30:10.570992 1730 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.68\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 01:30:10.572074 kubelet[1730]: E1213 01:30:10.570552 1730 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.68.18109863b5ca7f2e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.68,UID:10.0.0.68,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.68,},FirstTimestamp:2024-12-13 01:30:10.556772142 +0000 UTC m=+1.051755041,LastTimestamp:2024-12-13 01:30:10.556772142 +0000 UTC m=+1.051755041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.68,}" Dec 13 01:30:10.573150 kubelet[1730]: I1213 01:30:10.573114 1730 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:30:10.573150 kubelet[1730]: I1213 01:30:10.573135 1730 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:30:10.573150 kubelet[1730]: I1213 01:30:10.573154 1730 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:30:10.638781 kubelet[1730]: I1213 01:30:10.638739 1730 policy_none.go:49] "None policy: Start" Dec 13 01:30:10.642490 kubelet[1730]: I1213 01:30:10.642415 1730 memory_manager.go:170] "Starting memorymanager" 
policy="None" Dec 13 01:30:10.642490 kubelet[1730]: I1213 01:30:10.642452 1730 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:30:10.651875 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:30:10.660940 kubelet[1730]: E1213 01:30:10.660890 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:10.669796 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:30:10.673350 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:30:10.673856 kubelet[1730]: I1213 01:30:10.673810 1730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:30:10.674976 kubelet[1730]: I1213 01:30:10.674944 1730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:30:10.674976 kubelet[1730]: I1213 01:30:10.674973 1730 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:30:10.675067 kubelet[1730]: I1213 01:30:10.674988 1730 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 01:30:10.675110 kubelet[1730]: E1213 01:30:10.675092 1730 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:30:10.679513 kubelet[1730]: I1213 01:30:10.679466 1730 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:30:10.679721 kubelet[1730]: I1213 01:30:10.679694 1730 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 01:30:10.679788 kubelet[1730]: I1213 01:30:10.679712 1730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:30:10.680582 kubelet[1730]: I1213 01:30:10.680534 1730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:30:10.681323 kubelet[1730]: E1213 01:30:10.681302 1730 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.68\" not found" Dec 13 01:30:10.776036 kubelet[1730]: E1213 01:30:10.774991 1730 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.68\" not found" node="10.0.0.68" Dec 13 01:30:10.781128 kubelet[1730]: I1213 01:30:10.780843 1730 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.68" Dec 13 01:30:10.785285 kubelet[1730]: I1213 01:30:10.785255 1730 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.68" Dec 13 01:30:10.785285 kubelet[1730]: E1213 01:30:10.785283 1730 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.68\": node \"10.0.0.68\" not found" Dec 13 01:30:10.793207 kubelet[1730]: E1213 01:30:10.793180 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:10.894253 kubelet[1730]: E1213 01:30:10.894216 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:10.994670 kubelet[1730]: E1213 01:30:10.994640 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.026794 sudo[1605]: pam_unix(sudo:session): session closed for user root Dec 13 01:30:11.028746 
sshd[1602]: pam_unix(sshd:session): session closed for user core Dec 13 01:30:11.031363 systemd[1]: sshd@6-10.0.0.68:22-10.0.0.1:56908.service: Deactivated successfully. Dec 13 01:30:11.032928 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:30:11.034191 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:30:11.035090 systemd-logind[1415]: Removed session 7. Dec 13 01:30:11.095019 kubelet[1730]: E1213 01:30:11.094977 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.195504 kubelet[1730]: E1213 01:30:11.195443 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.296111 kubelet[1730]: E1213 01:30:11.295992 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.396490 kubelet[1730]: E1213 01:30:11.396447 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.486992 kubelet[1730]: I1213 01:30:11.486958 1730 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 01:30:11.487166 kubelet[1730]: W1213 01:30:11.487113 1730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:30:11.487166 kubelet[1730]: W1213 01:30:11.487126 1730 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 01:30:11.497340 kubelet[1730]: E1213 01:30:11.497308 1730 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.68\" not found" Dec 13 01:30:11.544737 kubelet[1730]: E1213 01:30:11.544711 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:11.598627 kubelet[1730]: I1213 01:30:11.598502 1730 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 01:30:11.599158 containerd[1434]: time="2024-12-13T01:30:11.599085182Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:30:11.599483 kubelet[1730]: I1213 01:30:11.599225 1730 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 01:30:12.545097 kubelet[1730]: I1213 01:30:12.545059 1730 apiserver.go:52] "Watching apiserver" Dec 13 01:30:12.545423 kubelet[1730]: E1213 01:30:12.545060 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:12.557604 systemd[1]: Created slice kubepods-besteffort-pod625c26a3_757d_4a4d_9d88_484527562735.slice - libcontainer container kubepods-besteffort-pod625c26a3_757d_4a4d_9d88_484527562735.slice. 
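The kubepods slices systemd creates around this point (kubepods-besteffort-pod625c26a3_757d_4a4d_9d88_484527562735.slice for kube-proxy-bw2zs and kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice for cilium-gncmt) follow a simple pattern: the QoS-class slice plus the pod UID with dashes turned into underscores. A sketch of that mapping, inferred from the two names in this log; the Guaranteed-class case is an assumption, since no such pod appears here:

    def kubepods_slice(pod_uid: str, qos_class: str) -> str:
        """Build the systemd slice name used for a pod's cgroup, as seen in this log.

        qos_class: "besteffort" or "burstable" as observed here; an empty string
        is assumed to map to the plain kubepods slice (Guaranteed pods).
        """
        escaped_uid = pod_uid.replace("-", "_")
        prefix = f"kubepods-{qos_class}" if qos_class else "kubepods"
        return f"{prefix}-pod{escaped_uid}.slice"

    # Both assertions match the slices logged by systemd:
    assert kubepods_slice("625c26a3-757d-4a4d-9d88-484527562735", "besteffort") == \
        "kubepods-besteffort-pod625c26a3_757d_4a4d_9d88_484527562735.slice"
    assert kubepods_slice("5853b9be-d5d5-49ec-8381-edf5f18df523", "burstable") == \
        "kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice"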
Dec 13 01:30:12.561276 kubelet[1730]: I1213 01:30:12.561245 1730 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 01:30:12.567070 systemd[1]: Created slice kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice - libcontainer container kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice. Dec 13 01:30:12.571083 kubelet[1730]: I1213 01:30:12.571003 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5853b9be-d5d5-49ec-8381-edf5f18df523-clustermesh-secrets\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571083 kubelet[1730]: I1213 01:30:12.571063 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-kernel\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571194 kubelet[1730]: I1213 01:30:12.571093 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-hubble-tls\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571194 kubelet[1730]: I1213 01:30:12.571125 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bl8\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-kube-api-access-x5bl8\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571194 kubelet[1730]: I1213 01:30:12.571152 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cni-path\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571194 kubelet[1730]: I1213 01:30:12.571167 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-bpf-maps\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571194 kubelet[1730]: I1213 01:30:12.571181 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-cgroup\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571301 kubelet[1730]: I1213 01:30:12.571216 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-etc-cni-netd\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571301 kubelet[1730]: I1213 01:30:12.571256 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-lib-modules\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571301 kubelet[1730]: I1213 01:30:12.571275 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/625c26a3-757d-4a4d-9d88-484527562735-kube-proxy\") pod \"kube-proxy-bw2zs\" (UID: \"625c26a3-757d-4a4d-9d88-484527562735\") " pod="kube-system/kube-proxy-bw2zs" Dec 13 01:30:12.571301 kubelet[1730]: I1213 01:30:12.571290 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-run\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571382 kubelet[1730]: I1213 01:30:12.571306 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-xtables-lock\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571382 kubelet[1730]: I1213 01:30:12.571321 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-net\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571382 kubelet[1730]: I1213 01:30:12.571342 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/625c26a3-757d-4a4d-9d88-484527562735-xtables-lock\") pod \"kube-proxy-bw2zs\" (UID: \"625c26a3-757d-4a4d-9d88-484527562735\") " pod="kube-system/kube-proxy-bw2zs" Dec 13 01:30:12.571382 kubelet[1730]: I1213 01:30:12.571355 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-hostproc\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " pod="kube-system/cilium-gncmt" Dec 13 01:30:12.571382 kubelet[1730]: I1213 01:30:12.571375 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/625c26a3-757d-4a4d-9d88-484527562735-lib-modules\") pod \"kube-proxy-bw2zs\" (UID: \"625c26a3-757d-4a4d-9d88-484527562735\") " pod="kube-system/kube-proxy-bw2zs" Dec 13 01:30:12.571482 kubelet[1730]: I1213 01:30:12.571391 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l588w\" (UniqueName: \"kubernetes.io/projected/625c26a3-757d-4a4d-9d88-484527562735-kube-api-access-l588w\") pod \"kube-proxy-bw2zs\" (UID: \"625c26a3-757d-4a4d-9d88-484527562735\") " pod="kube-system/kube-proxy-bw2zs" Dec 13 01:30:12.571482 kubelet[1730]: I1213 01:30:12.571415 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-config-path\") pod \"cilium-gncmt\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " 
pod="kube-system/cilium-gncmt" Dec 13 01:30:12.865314 kubelet[1730]: E1213 01:30:12.865203 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:12.866200 containerd[1434]: time="2024-12-13T01:30:12.866150182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bw2zs,Uid:625c26a3-757d-4a4d-9d88-484527562735,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:12.880824 kubelet[1730]: E1213 01:30:12.880793 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:12.881217 containerd[1434]: time="2024-12-13T01:30:12.881180022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gncmt,Uid:5853b9be-d5d5-49ec-8381-edf5f18df523,Namespace:kube-system,Attempt:0,}" Dec 13 01:30:13.360668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount93418017.mount: Deactivated successfully. Dec 13 01:30:13.365333 containerd[1434]: time="2024-12-13T01:30:13.365282902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:30:13.367259 containerd[1434]: time="2024-12-13T01:30:13.367217822Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:30:13.367806 containerd[1434]: time="2024-12-13T01:30:13.367773702Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Dec 13 01:30:13.368404 containerd[1434]: time="2024-12-13T01:30:13.368261062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:30:13.368801 containerd[1434]: time="2024-12-13T01:30:13.368762822Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:30:13.372616 containerd[1434]: time="2024-12-13T01:30:13.372565782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:30:13.373473 containerd[1434]: time="2024-12-13T01:30:13.373435582Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.19696ms" Dec 13 01:30:13.374167 containerd[1434]: time="2024-12-13T01:30:13.374143822Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.87056ms" Dec 13 01:30:13.482356 containerd[1434]: time="2024-12-13T01:30:13.482256862Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:13.482881 containerd[1434]: time="2024-12-13T01:30:13.482365062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:13.482955 containerd[1434]: time="2024-12-13T01:30:13.482786542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.482955 containerd[1434]: time="2024-12-13T01:30:13.482896702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.483153 containerd[1434]: time="2024-12-13T01:30:13.483093462Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:13.483204 containerd[1434]: time="2024-12-13T01:30:13.483152542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:13.483204 containerd[1434]: time="2024-12-13T01:30:13.483167542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.483390 containerd[1434]: time="2024-12-13T01:30:13.483252502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:13.545516 kubelet[1730]: E1213 01:30:13.545450 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:13.570711 systemd[1]: Started cri-containerd-2fc9cbbabcc2f4781a41f62d001c4e5233dd526a889161034b373af6acd0c4f2.scope - libcontainer container 2fc9cbbabcc2f4781a41f62d001c4e5233dd526a889161034b373af6acd0c4f2. Dec 13 01:30:13.572203 systemd[1]: Started cri-containerd-71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b.scope - libcontainer container 71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b. 
Dec 13 01:30:13.591882 containerd[1434]: time="2024-12-13T01:30:13.591772462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bw2zs,Uid:625c26a3-757d-4a4d-9d88-484527562735,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fc9cbbabcc2f4781a41f62d001c4e5233dd526a889161034b373af6acd0c4f2\"" Dec 13 01:30:13.592780 kubelet[1730]: E1213 01:30:13.592748 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:13.594325 containerd[1434]: time="2024-12-13T01:30:13.594271142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 01:30:13.594643 containerd[1434]: time="2024-12-13T01:30:13.594617262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gncmt,Uid:5853b9be-d5d5-49ec-8381-edf5f18df523,Namespace:kube-system,Attempt:0,} returns sandbox id \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\"" Dec 13 01:30:13.595317 kubelet[1730]: E1213 01:30:13.595231 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:14.546485 kubelet[1730]: E1213 01:30:14.546407 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:14.636192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522534863.mount: Deactivated successfully. Dec 13 01:30:14.880164 containerd[1434]: time="2024-12-13T01:30:14.880052422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.881186 containerd[1434]: time="2024-12-13T01:30:14.880954102Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428" Dec 13 01:30:14.882086 containerd[1434]: time="2024-12-13T01:30:14.882027902Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.884095 containerd[1434]: time="2024-12-13T01:30:14.884039502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.884843 containerd[1434]: time="2024-12-13T01:30:14.884804542Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.29049692s" Dec 13 01:30:14.884843 containerd[1434]: time="2024-12-13T01:30:14.884843142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"" Dec 13 01:30:14.885958 containerd[1434]: time="2024-12-13T01:30:14.885926862Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 01:30:14.887166 containerd[1434]: time="2024-12-13T01:30:14.887126742Z" level=info msg="CreateContainer within sandbox 
\"2fc9cbbabcc2f4781a41f62d001c4e5233dd526a889161034b373af6acd0c4f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:30:14.898651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3504135977.mount: Deactivated successfully. Dec 13 01:30:14.901788 containerd[1434]: time="2024-12-13T01:30:14.901636782Z" level=info msg="CreateContainer within sandbox \"2fc9cbbabcc2f4781a41f62d001c4e5233dd526a889161034b373af6acd0c4f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1\"" Dec 13 01:30:14.902452 containerd[1434]: time="2024-12-13T01:30:14.902346942Z" level=info msg="StartContainer for \"7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1\"" Dec 13 01:30:14.919819 systemd[1]: run-containerd-runc-k8s.io-7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1-runc.V70sA4.mount: Deactivated successfully. Dec 13 01:30:14.930725 systemd[1]: Started cri-containerd-7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1.scope - libcontainer container 7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1. Dec 13 01:30:14.954056 containerd[1434]: time="2024-12-13T01:30:14.952325742Z" level=info msg="StartContainer for \"7e971b7e79a77b22ea3be6133e01b34805ef923724e9e3d1981b0573e85a76f1\" returns successfully" Dec 13 01:30:15.547187 kubelet[1730]: E1213 01:30:15.547145 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:15.685707 kubelet[1730]: E1213 01:30:15.685678 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:16.547622 kubelet[1730]: E1213 01:30:16.547566 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:16.687081 kubelet[1730]: E1213 01:30:16.686641 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:17.547932 kubelet[1730]: E1213 01:30:17.547881 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:18.248469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100002030.mount: Deactivated successfully. 
Dec 13 01:30:18.548576 kubelet[1730]: E1213 01:30:18.548388 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:19.549053 kubelet[1730]: E1213 01:30:19.549007 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:19.647936 containerd[1434]: time="2024-12-13T01:30:19.647891382Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:19.648472 containerd[1434]: time="2024-12-13T01:30:19.648425502Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157651502" Dec 13 01:30:19.649207 containerd[1434]: time="2024-12-13T01:30:19.649162862Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:19.650664 containerd[1434]: time="2024-12-13T01:30:19.650633342Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.76467364s" Dec 13 01:30:19.650751 containerd[1434]: time="2024-12-13T01:30:19.650667422Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Dec 13 01:30:19.652587 containerd[1434]: time="2024-12-13T01:30:19.652559662Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:30:19.663161 containerd[1434]: time="2024-12-13T01:30:19.663118822Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\"" Dec 13 01:30:19.663618 containerd[1434]: time="2024-12-13T01:30:19.663493342Z" level=info msg="StartContainer for \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\"" Dec 13 01:30:19.693756 systemd[1]: Started cri-containerd-04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d.scope - libcontainer container 04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d. Dec 13 01:30:19.715531 containerd[1434]: time="2024-12-13T01:30:19.715366262Z" level=info msg="StartContainer for \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\" returns successfully" Dec 13 01:30:19.760052 systemd[1]: cri-containerd-04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d.scope: Deactivated successfully. 
Dec 13 01:30:19.986254 containerd[1434]: time="2024-12-13T01:30:19.986178262Z" level=info msg="shim disconnected" id=04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d namespace=k8s.io Dec 13 01:30:19.986254 containerd[1434]: time="2024-12-13T01:30:19.986241462Z" level=warning msg="cleaning up after shim disconnected" id=04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d namespace=k8s.io Dec 13 01:30:19.986254 containerd[1434]: time="2024-12-13T01:30:19.986250342Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:20.549817 kubelet[1730]: E1213 01:30:20.549775 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:20.658861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d-rootfs.mount: Deactivated successfully. Dec 13 01:30:20.694164 kubelet[1730]: E1213 01:30:20.694137 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:20.695952 containerd[1434]: time="2024-12-13T01:30:20.695915182Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:30:20.709471 kubelet[1730]: I1213 01:30:20.709263 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bw2zs" podStartSLOduration=9.417280142 podStartE2EDuration="10.709248022s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="2024-12-13 01:30:13.593833462 +0000 UTC m=+4.088816241" lastFinishedPulling="2024-12-13 01:30:14.885801342 +0000 UTC m=+5.380784121" observedRunningTime="2024-12-13 01:30:15.695166382 +0000 UTC m=+6.190149201" watchObservedRunningTime="2024-12-13 01:30:20.709248022 +0000 UTC m=+11.204230801" Dec 13 01:30:20.711635 containerd[1434]: time="2024-12-13T01:30:20.711535142Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\"" Dec 13 01:30:20.712112 containerd[1434]: time="2024-12-13T01:30:20.712079462Z" level=info msg="StartContainer for \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\"" Dec 13 01:30:20.739685 systemd[1]: Started cri-containerd-5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507.scope - libcontainer container 5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507. Dec 13 01:30:20.757663 containerd[1434]: time="2024-12-13T01:30:20.757563622Z" level=info msg="StartContainer for \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\" returns successfully" Dec 13 01:30:20.772249 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:30:20.772480 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:30:20.772567 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:30:20.777905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:30:20.778090 systemd[1]: cri-containerd-5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507.scope: Deactivated successfully. 
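The pod_startup_latency_tracker entry above for kube-proxy-bw2zs is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. A short check using the timestamps from that entry, truncated to microsecond precision; the relationship is inferred from the logged numbers rather than quoted from kubelet source:

    from datetime import datetime, timezone

    def ts(s: str) -> datetime:
        return datetime.strptime(s, "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

    created       = ts("2024-12-13 01:30:10.000000")  # podCreationTimestamp
    first_pulling = ts("2024-12-13 01:30:13.593833")  # firstStartedPulling
    last_pulled   = ts("2024-12-13 01:30:14.885801")  # lastFinishedPulling
    running       = ts("2024-12-13 01:30:20.709248")  # observedRunningTime

    e2e = (running - created).total_seconds()
    slo = e2e - (last_pulled - first_pulling).total_seconds()
    print(f"podStartE2EDuration ~ {e2e:.3f}s, podStartSLOduration ~ {slo:.3f}s")
    # ~ 10.709s and 9.417s, matching the logged 10.709248022s and 9.417280142.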
Dec 13 01:30:20.787525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:30:20.807357 containerd[1434]: time="2024-12-13T01:30:20.807132302Z" level=info msg="shim disconnected" id=5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507 namespace=k8s.io Dec 13 01:30:20.807357 containerd[1434]: time="2024-12-13T01:30:20.807186182Z" level=warning msg="cleaning up after shim disconnected" id=5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507 namespace=k8s.io Dec 13 01:30:20.807357 containerd[1434]: time="2024-12-13T01:30:20.807195182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:21.550653 kubelet[1730]: E1213 01:30:21.550581 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:21.658596 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507-rootfs.mount: Deactivated successfully. Dec 13 01:30:21.697672 kubelet[1730]: E1213 01:30:21.697627 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:21.699245 containerd[1434]: time="2024-12-13T01:30:21.699189182Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:30:21.713911 containerd[1434]: time="2024-12-13T01:30:21.713868502Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\"" Dec 13 01:30:21.714325 containerd[1434]: time="2024-12-13T01:30:21.714295782Z" level=info msg="StartContainer for \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\"" Dec 13 01:30:21.740706 systemd[1]: Started cri-containerd-c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39.scope - libcontainer container c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39. Dec 13 01:30:21.761776 containerd[1434]: time="2024-12-13T01:30:21.761726102Z" level=info msg="StartContainer for \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\" returns successfully" Dec 13 01:30:21.778213 systemd[1]: cri-containerd-c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39.scope: Deactivated successfully. Dec 13 01:30:21.797675 containerd[1434]: time="2024-12-13T01:30:21.797614502Z" level=info msg="shim disconnected" id=c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39 namespace=k8s.io Dec 13 01:30:21.797675 containerd[1434]: time="2024-12-13T01:30:21.797671702Z" level=warning msg="cleaning up after shim disconnected" id=c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39 namespace=k8s.io Dec 13 01:30:21.797675 containerd[1434]: time="2024-12-13T01:30:21.797682182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:22.550765 kubelet[1730]: E1213 01:30:22.550705 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:22.658606 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39-rootfs.mount: Deactivated successfully. 
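Each Cilium init container in this section leaves the same five-step trace: CreateContainer returns a container id, StartContainer returns successfully, the matching cri-containerd-<id>.scope is deactivated when the process exits, containerd cleans up with "shim disconnected", and the task's rootfs mount is deactivated. A minimal Python filter that pulls those events for one container id out of journal text shaped like these lines, assuming one record per line and using the apply-sysctl-overwrites id from above (the journal.txt filename is a placeholder):

    import sys

    # Usage: python trace_container.py journal.txt
    CONTAINER_ID = "5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507"
    KEYWORDS = ("CreateContainer", "StartContainer", "Deactivated successfully",
                "shim disconnected", "rootfs.mount")

    with open(sys.argv[1]) as fh:
        for line in fh:
            if CONTAINER_ID in line and any(k in line for k in KEYWORDS):
                print(line.rstrip())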
Dec 13 01:30:22.701892 kubelet[1730]: E1213 01:30:22.701727 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:22.703468 containerd[1434]: time="2024-12-13T01:30:22.703424142Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:30:22.716964 containerd[1434]: time="2024-12-13T01:30:22.716236582Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\"" Dec 13 01:30:22.716964 containerd[1434]: time="2024-12-13T01:30:22.716745662Z" level=info msg="StartContainer for \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\"" Dec 13 01:30:22.741698 systemd[1]: Started cri-containerd-4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51.scope - libcontainer container 4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51. Dec 13 01:30:22.758488 systemd[1]: cri-containerd-4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51.scope: Deactivated successfully. Dec 13 01:30:22.760209 containerd[1434]: time="2024-12-13T01:30:22.760135742Z" level=info msg="StartContainer for \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\" returns successfully" Dec 13 01:30:22.776981 containerd[1434]: time="2024-12-13T01:30:22.776926422Z" level=info msg="shim disconnected" id=4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51 namespace=k8s.io Dec 13 01:30:22.777213 containerd[1434]: time="2024-12-13T01:30:22.777193342Z" level=warning msg="cleaning up after shim disconnected" id=4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51 namespace=k8s.io Dec 13 01:30:22.777286 containerd[1434]: time="2024-12-13T01:30:22.777271422Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:23.550901 kubelet[1730]: E1213 01:30:23.550846 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:23.658642 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51-rootfs.mount: Deactivated successfully. 
Dec 13 01:30:23.706006 kubelet[1730]: E1213 01:30:23.705938 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:23.707621 containerd[1434]: time="2024-12-13T01:30:23.707584022Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:30:23.723921 containerd[1434]: time="2024-12-13T01:30:23.723867902Z" level=info msg="CreateContainer within sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\"" Dec 13 01:30:23.724396 containerd[1434]: time="2024-12-13T01:30:23.724328862Z" level=info msg="StartContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\"" Dec 13 01:30:23.755701 systemd[1]: Started cri-containerd-f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d.scope - libcontainer container f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d. Dec 13 01:30:23.778162 containerd[1434]: time="2024-12-13T01:30:23.778118382Z" level=info msg="StartContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" returns successfully" Dec 13 01:30:23.921800 kubelet[1730]: I1213 01:30:23.921764 1730 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Dec 13 01:30:24.365579 kernel: Initializing XFRM netlink socket Dec 13 01:30:24.551178 kubelet[1730]: E1213 01:30:24.551127 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:24.710946 kubelet[1730]: E1213 01:30:24.710857 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:25.552163 kubelet[1730]: E1213 01:30:25.552110 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:25.712246 kubelet[1730]: E1213 01:30:25.712218 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:25.986279 systemd-networkd[1358]: cilium_host: Link UP Dec 13 01:30:25.986422 systemd-networkd[1358]: cilium_net: Link UP Dec 13 01:30:25.986795 systemd-networkd[1358]: cilium_net: Gained carrier Dec 13 01:30:25.986918 systemd-networkd[1358]: cilium_host: Gained carrier Dec 13 01:30:25.987008 systemd-networkd[1358]: cilium_net: Gained IPv6LL Dec 13 01:30:25.987126 systemd-networkd[1358]: cilium_host: Gained IPv6LL Dec 13 01:30:26.068676 systemd-networkd[1358]: cilium_vxlan: Link UP Dec 13 01:30:26.068682 systemd-networkd[1358]: cilium_vxlan: Gained carrier Dec 13 01:30:26.352658 kernel: NET: Registered PF_ALG protocol family Dec 13 01:30:26.552377 kubelet[1730]: E1213 01:30:26.552329 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:26.713509 kubelet[1730]: E1213 01:30:26.713472 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
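Two kubelet messages recur throughout this log. The file_linux.go error fires roughly once a second (…:18.548, :19.549, :20.549, …) because the kubelet keeps re-reading its static-pod manifest directory, /etc/kubernetes/manifests, which does not exist on this node; creating that directory, or pointing the kubelet's staticPodPath elsewhere, should silence it. The dns.go warning reflects the kubelet's cap of three nameservers taken from the node's resolv.conf: extra entries are dropped, which is why the "applied nameserver line" shows exactly 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that trimming in Python, as an illustration of the limit rather than the kubelet's actual code:

    MAX_NAMESERVERS = 3   # kubelet keeps at most three nameservers; the rest are dropped

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [line.split()[1]
                   for line in resolv_conf_text.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAX_NAMESERVERS]

    print(applied_nameservers(
        "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"))
    # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']  (matching the "applied nameserver line" above)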
Dec 13 01:30:26.905958 systemd-networkd[1358]: lxc_health: Link UP Dec 13 01:30:26.907168 systemd-networkd[1358]: lxc_health: Gained carrier Dec 13 01:30:26.959782 kubelet[1730]: I1213 01:30:26.959720 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gncmt" podStartSLOduration=10.904346342 podStartE2EDuration="16.959700942s" podCreationTimestamp="2024-12-13 01:30:10 +0000 UTC" firstStartedPulling="2024-12-13 01:30:13.596021502 +0000 UTC m=+4.091004281" lastFinishedPulling="2024-12-13 01:30:19.651376102 +0000 UTC m=+10.146358881" observedRunningTime="2024-12-13 01:30:24.725178662 +0000 UTC m=+15.220161441" watchObservedRunningTime="2024-12-13 01:30:26.959700942 +0000 UTC m=+17.454683681" Dec 13 01:30:26.966390 systemd[1]: Created slice kubepods-besteffort-pod0e5f671c_d0ed_489f_a796_d9ffbe55293a.slice - libcontainer container kubepods-besteffort-pod0e5f671c_d0ed_489f_a796_d9ffbe55293a.slice. Dec 13 01:30:27.062107 kubelet[1730]: I1213 01:30:27.062054 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2bcz\" (UniqueName: \"kubernetes.io/projected/0e5f671c-d0ed-489f-a796-d9ffbe55293a-kube-api-access-b2bcz\") pod \"nginx-deployment-8587fbcb89-hgvpl\" (UID: \"0e5f671c-d0ed-489f-a796-d9ffbe55293a\") " pod="default/nginx-deployment-8587fbcb89-hgvpl" Dec 13 01:30:27.275238 containerd[1434]: time="2024-12-13T01:30:27.275191422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgvpl,Uid:0e5f671c-d0ed-489f-a796-d9ffbe55293a,Namespace:default,Attempt:0,}" Dec 13 01:30:27.318997 systemd-networkd[1358]: lxcb21ff77ccfc3: Link UP Dec 13 01:30:27.328576 kernel: eth0: renamed from tmp33e2c Dec 13 01:30:27.337299 systemd-networkd[1358]: lxcb21ff77ccfc3: Gained carrier Dec 13 01:30:27.553264 kubelet[1730]: E1213 01:30:27.553153 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:27.815088 systemd-networkd[1358]: cilium_vxlan: Gained IPv6LL Dec 13 01:30:28.555062 kubelet[1730]: E1213 01:30:28.555011 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:28.838955 systemd-networkd[1358]: lxc_health: Gained IPv6LL Dec 13 01:30:28.882465 kubelet[1730]: E1213 01:30:28.882286 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:29.031058 systemd-networkd[1358]: lxcb21ff77ccfc3: Gained IPv6LL Dec 13 01:30:29.555937 kubelet[1730]: E1213 01:30:29.555887 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:30.544143 kubelet[1730]: E1213 01:30:30.544100 1730 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:30.556529 kubelet[1730]: E1213 01:30:30.556493 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:30.728150 containerd[1434]: time="2024-12-13T01:30:30.728039342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:30.728150 containerd[1434]: time="2024-12-13T01:30:30.728112902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:30.728150 containerd[1434]: time="2024-12-13T01:30:30.728135462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.728631 containerd[1434]: time="2024-12-13T01:30:30.728227622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:30.750799 systemd[1]: Started cri-containerd-33e2cdc6c1af9f5849f1194618cb8b903fb53c2073807cf561102831ebc012c3.scope - libcontainer container 33e2cdc6c1af9f5849f1194618cb8b903fb53c2073807cf561102831ebc012c3. Dec 13 01:30:30.760260 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:30:30.774932 containerd[1434]: time="2024-12-13T01:30:30.774886182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgvpl,Uid:0e5f671c-d0ed-489f-a796-d9ffbe55293a,Namespace:default,Attempt:0,} returns sandbox id \"33e2cdc6c1af9f5849f1194618cb8b903fb53c2073807cf561102831ebc012c3\"" Dec 13 01:30:30.776276 containerd[1434]: time="2024-12-13T01:30:30.776251502Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:30:31.556909 kubelet[1730]: E1213 01:30:31.556868 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:32.557451 kubelet[1730]: E1213 01:30:32.557398 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:33.558270 kubelet[1730]: E1213 01:30:33.558208 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:34.558663 kubelet[1730]: E1213 01:30:34.558625 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:35.558914 kubelet[1730]: E1213 01:30:35.558881 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:35.804393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1364895951.mount: Deactivated successfully. 
Dec 13 01:30:36.509703 containerd[1434]: time="2024-12-13T01:30:36.509655920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:36.510263 containerd[1434]: time="2024-12-13T01:30:36.510224562Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67696939" Dec 13 01:30:36.510843 containerd[1434]: time="2024-12-13T01:30:36.510818244Z" level=info msg="ImageCreate event name:\"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:36.513350 containerd[1434]: time="2024-12-13T01:30:36.513316291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:36.515568 containerd[1434]: time="2024-12-13T01:30:36.515421416Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 5.739137594s" Dec 13 01:30:36.515568 containerd[1434]: time="2024-12-13T01:30:36.515456096Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:30:36.517259 containerd[1434]: time="2024-12-13T01:30:36.517108621Z" level=info msg="CreateContainer within sandbox \"33e2cdc6c1af9f5849f1194618cb8b903fb53c2073807cf561102831ebc012c3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 01:30:36.526857 containerd[1434]: time="2024-12-13T01:30:36.526814088Z" level=info msg="CreateContainer within sandbox \"33e2cdc6c1af9f5849f1194618cb8b903fb53c2073807cf561102831ebc012c3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"71b5fdf363cd68aae194067b465d0013be348eecd2b7dfc51c15f6d093d0bf3c\"" Dec 13 01:30:36.527227 containerd[1434]: time="2024-12-13T01:30:36.527204569Z" level=info msg="StartContainer for \"71b5fdf363cd68aae194067b465d0013be348eecd2b7dfc51c15f6d093d0bf3c\"" Dec 13 01:30:36.554698 systemd[1]: Started cri-containerd-71b5fdf363cd68aae194067b465d0013be348eecd2b7dfc51c15f6d093d0bf3c.scope - libcontainer container 71b5fdf363cd68aae194067b465d0013be348eecd2b7dfc51c15f6d093d0bf3c. 
Dec 13 01:30:36.559351 kubelet[1730]: E1213 01:30:36.559315 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:36.575520 containerd[1434]: time="2024-12-13T01:30:36.575419104Z" level=info msg="StartContainer for \"71b5fdf363cd68aae194067b465d0013be348eecd2b7dfc51c15f6d093d0bf3c\" returns successfully" Dec 13 01:30:36.738563 kubelet[1730]: I1213 01:30:36.738311 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-hgvpl" podStartSLOduration=4.998258322 podStartE2EDuration="10.738294558s" podCreationTimestamp="2024-12-13 01:30:26 +0000 UTC" firstStartedPulling="2024-12-13 01:30:30.776012502 +0000 UTC m=+21.270995241" lastFinishedPulling="2024-12-13 01:30:36.516048698 +0000 UTC m=+27.011031477" observedRunningTime="2024-12-13 01:30:36.738169477 +0000 UTC m=+27.233152296" watchObservedRunningTime="2024-12-13 01:30:36.738294558 +0000 UTC m=+27.233277337" Dec 13 01:30:37.560504 kubelet[1730]: E1213 01:30:37.560456 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:38.560820 kubelet[1730]: E1213 01:30:38.560765 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:39.083776 systemd[1]: Created slice kubepods-besteffort-pod9fb0738a_8f99_46a8_9bbc_16668c2d0f1f.slice - libcontainer container kubepods-besteffort-pod9fb0738a_8f99_46a8_9bbc_16668c2d0f1f.slice. Dec 13 01:30:39.129527 kubelet[1730]: I1213 01:30:39.129488 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9fb0738a-8f99-46a8-9bbc-16668c2d0f1f-data\") pod \"nfs-server-provisioner-0\" (UID: \"9fb0738a-8f99-46a8-9bbc-16668c2d0f1f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:30:39.129527 kubelet[1730]: I1213 01:30:39.129535 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxvwn\" (UniqueName: \"kubernetes.io/projected/9fb0738a-8f99-46a8-9bbc-16668c2d0f1f-kube-api-access-xxvwn\") pod \"nfs-server-provisioner-0\" (UID: \"9fb0738a-8f99-46a8-9bbc-16668c2d0f1f\") " pod="default/nfs-server-provisioner-0" Dec 13 01:30:39.386603 containerd[1434]: time="2024-12-13T01:30:39.386483363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9fb0738a-8f99-46a8-9bbc-16668c2d0f1f,Namespace:default,Attempt:0,}" Dec 13 01:30:39.415897 systemd-networkd[1358]: lxc356f02b05d1b: Link UP Dec 13 01:30:39.422569 kernel: eth0: renamed from tmp4c5c7 Dec 13 01:30:39.430783 systemd-networkd[1358]: lxc356f02b05d1b: Gained carrier Dec 13 01:30:39.551656 containerd[1434]: time="2024-12-13T01:30:39.551526582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:39.551656 containerd[1434]: time="2024-12-13T01:30:39.551606382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:39.551858 containerd[1434]: time="2024-12-13T01:30:39.551636422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:39.551921 containerd[1434]: time="2024-12-13T01:30:39.551717102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:39.561272 kubelet[1730]: E1213 01:30:39.561218 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:39.571740 systemd[1]: Started cri-containerd-4c5c74d5bb430c854c6cc83b71bfbb0ecdb8d20b5001511adb21df5035a2be0f.scope - libcontainer container 4c5c74d5bb430c854c6cc83b71bfbb0ecdb8d20b5001511adb21df5035a2be0f. Dec 13 01:30:39.580710 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:30:39.595687 containerd[1434]: time="2024-12-13T01:30:39.595643083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9fb0738a-8f99-46a8-9bbc-16668c2d0f1f,Namespace:default,Attempt:0,} returns sandbox id \"4c5c74d5bb430c854c6cc83b71bfbb0ecdb8d20b5001511adb21df5035a2be0f\"" Dec 13 01:30:39.597115 containerd[1434]: time="2024-12-13T01:30:39.597026526Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 01:30:40.561354 kubelet[1730]: E1213 01:30:40.561309 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:40.614820 systemd-networkd[1358]: lxc356f02b05d1b: Gained IPv6LL Dec 13 01:30:41.562763 kubelet[1730]: E1213 01:30:41.562402 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:41.685423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911584103.mount: Deactivated successfully. 
Dec 13 01:30:41.892837 kubelet[1730]: I1213 01:30:41.892471 1730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:41.893240 kubelet[1730]: E1213 01:30:41.892956 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:42.562732 kubelet[1730]: E1213 01:30:42.562697 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:42.740427 kubelet[1730]: E1213 01:30:42.740381 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:30:43.086649 containerd[1434]: time="2024-12-13T01:30:43.086601431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:43.087616 containerd[1434]: time="2024-12-13T01:30:43.087320833Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Dec 13 01:30:43.088953 containerd[1434]: time="2024-12-13T01:30:43.088070954Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:43.098561 containerd[1434]: time="2024-12-13T01:30:43.098504093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:43.099749 containerd[1434]: time="2024-12-13T01:30:43.099712655Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.502638649s" Dec 13 01:30:43.099749 containerd[1434]: time="2024-12-13T01:30:43.099744175Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Dec 13 01:30:43.102985 containerd[1434]: time="2024-12-13T01:30:43.102949860Z" level=info msg="CreateContainer within sandbox \"4c5c74d5bb430c854c6cc83b71bfbb0ecdb8d20b5001511adb21df5035a2be0f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 01:30:43.113055 containerd[1434]: time="2024-12-13T01:30:43.113005158Z" level=info msg="CreateContainer within sandbox \"4c5c74d5bb430c854c6cc83b71bfbb0ecdb8d20b5001511adb21df5035a2be0f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fe4d8157ec60f403fc25012da5f864ad8f19796006f372bb09a19394e4af58ff\"" Dec 13 01:30:43.113670 containerd[1434]: time="2024-12-13T01:30:43.113576679Z" level=info msg="StartContainer for \"fe4d8157ec60f403fc25012da5f864ad8f19796006f372bb09a19394e4af58ff\"" Dec 13 01:30:43.190754 systemd[1]: Started cri-containerd-fe4d8157ec60f403fc25012da5f864ad8f19796006f372bb09a19394e4af58ff.scope - libcontainer container fe4d8157ec60f403fc25012da5f864ad8f19796006f372bb09a19394e4af58ff. 
Dec 13 01:30:43.210584 containerd[1434]: time="2024-12-13T01:30:43.210535291Z" level=info msg="StartContainer for \"fe4d8157ec60f403fc25012da5f864ad8f19796006f372bb09a19394e4af58ff\" returns successfully" Dec 13 01:30:43.563130 kubelet[1730]: E1213 01:30:43.563072 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:44.563437 kubelet[1730]: E1213 01:30:44.563384 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:45.563823 kubelet[1730]: E1213 01:30:45.563782 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:45.609144 update_engine[1421]: I20241213 01:30:45.608580 1421 update_attempter.cc:509] Updating boot flags... Dec 13 01:30:45.634680 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (3120) Dec 13 01:30:45.671640 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 46 scanned by (udev-worker) (3120) Dec 13 01:30:46.564306 kubelet[1730]: E1213 01:30:46.564247 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:47.565247 kubelet[1730]: E1213 01:30:47.565203 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:48.566287 kubelet[1730]: E1213 01:30:48.566225 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:49.567303 kubelet[1730]: E1213 01:30:49.567263 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:50.544110 kubelet[1730]: E1213 01:30:50.544066 1730 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:50.567554 kubelet[1730]: E1213 01:30:50.567514 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:51.568122 kubelet[1730]: E1213 01:30:51.568075 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:52.568895 kubelet[1730]: E1213 01:30:52.568848 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:53.378845 kubelet[1730]: I1213 01:30:53.378780 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.874140608 podStartE2EDuration="14.378761901s" podCreationTimestamp="2024-12-13 01:30:39 +0000 UTC" firstStartedPulling="2024-12-13 01:30:39.596726925 +0000 UTC m=+30.091709704" lastFinishedPulling="2024-12-13 01:30:43.101348218 +0000 UTC m=+33.596330997" observedRunningTime="2024-12-13 01:30:43.753265774 +0000 UTC m=+34.248248553" watchObservedRunningTime="2024-12-13 01:30:53.378761901 +0000 UTC m=+43.873744680" Dec 13 01:30:53.386207 systemd[1]: Created slice kubepods-besteffort-pod4cd2e87d_0680_42df_bbb4_0271763da7f0.slice - libcontainer container kubepods-besteffort-pod4cd2e87d_0680_42df_bbb4_0271763da7f0.slice. 
Dec 13 01:30:53.408438 kubelet[1730]: I1213 01:30:53.408394 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-23a1572c-9845-4030-b3e9-dc8b51956d54\" (UniqueName: \"kubernetes.io/nfs/4cd2e87d-0680-42df-bbb4-0271763da7f0-pvc-23a1572c-9845-4030-b3e9-dc8b51956d54\") pod \"test-pod-1\" (UID: \"4cd2e87d-0680-42df-bbb4-0271763da7f0\") " pod="default/test-pod-1" Dec 13 01:30:53.408438 kubelet[1730]: I1213 01:30:53.408437 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnt47\" (UniqueName: \"kubernetes.io/projected/4cd2e87d-0680-42df-bbb4-0271763da7f0-kube-api-access-hnt47\") pod \"test-pod-1\" (UID: \"4cd2e87d-0680-42df-bbb4-0271763da7f0\") " pod="default/test-pod-1" Dec 13 01:30:53.526659 kernel: FS-Cache: Loaded Dec 13 01:30:53.551012 kernel: RPC: Registered named UNIX socket transport module. Dec 13 01:30:53.551095 kernel: RPC: Registered udp transport module. Dec 13 01:30:53.551126 kernel: RPC: Registered tcp transport module. Dec 13 01:30:53.551151 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 01:30:53.551614 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 01:30:53.569754 kubelet[1730]: E1213 01:30:53.569717 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:53.729656 kernel: NFS: Registering the id_resolver key type Dec 13 01:30:53.729782 kernel: Key type id_resolver registered Dec 13 01:30:53.729821 kernel: Key type id_legacy registered Dec 13 01:30:53.753622 nfsidmap[3147]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:30:53.757332 nfsidmap[3150]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 01:30:53.989273 containerd[1434]: time="2024-12-13T01:30:53.989126709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4cd2e87d-0680-42df-bbb4-0271763da7f0,Namespace:default,Attempt:0,}" Dec 13 01:30:54.014483 systemd-networkd[1358]: lxc4c20fe6f5168: Link UP Dec 13 01:30:54.029631 kernel: eth0: renamed from tmp59573 Dec 13 01:30:54.039741 systemd-networkd[1358]: lxc4c20fe6f5168: Gained carrier Dec 13 01:30:54.197188 containerd[1434]: time="2024-12-13T01:30:54.197064771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:54.197188 containerd[1434]: time="2024-12-13T01:30:54.197127651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:54.197188 containerd[1434]: time="2024-12-13T01:30:54.197144691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:54.197346 containerd[1434]: time="2024-12-13T01:30:54.197217371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:54.218765 systemd[1]: Started cri-containerd-5957363c0b4d888b729a4025afd760852239d0bc7803e04bc78d63185ef96799.scope - libcontainer container 5957363c0b4d888b729a4025afd760852239d0bc7803e04bc78d63185ef96799. 
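The nfsidmap messages above show the NFSv4 id mapper rejecting the owner string root@nfs-server-provisioner.default.svc.cluster.local because its domain part does not match the domain the node is using ("localdomain"); when that check fails the identity typically falls back to the anonymous/nobody user. A minimal sketch of the domain check, illustrative only and not rpc.idmapd's actual code path:

    LOCAL_DOMAIN = "localdomain"   # the domain the node's id mapper reports in the messages above

    def map_nfs4_owner(owner: str):
        # Illustrative only: accept the name part only when the domain matches.
        name, _, domain = owner.partition("@")
        if domain != LOCAL_DOMAIN:
            return None            # "does not map into domain 'localdomain'"
        return name

    print(map_nfs4_owner("root@nfs-server-provisioner.default.svc.cluster.local"))  # None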
Dec 13 01:30:54.229244 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:30:54.244248 containerd[1434]: time="2024-12-13T01:30:54.244019452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:4cd2e87d-0680-42df-bbb4-0271763da7f0,Namespace:default,Attempt:0,} returns sandbox id \"5957363c0b4d888b729a4025afd760852239d0bc7803e04bc78d63185ef96799\"" Dec 13 01:30:54.245875 containerd[1434]: time="2024-12-13T01:30:54.245849854Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 01:30:54.496884 containerd[1434]: time="2024-12-13T01:30:54.496771112Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:54.498442 containerd[1434]: time="2024-12-13T01:30:54.498380994Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 01:30:54.501580 containerd[1434]: time="2024-12-13T01:30:54.501516117Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"67696817\" in 255.632383ms" Dec 13 01:30:54.501580 containerd[1434]: time="2024-12-13T01:30:54.501572677Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d5cb91e7550dca840aad69277b6dbccf8dc3739757998181746daf777a8bd9de\"" Dec 13 01:30:54.504454 containerd[1434]: time="2024-12-13T01:30:54.504408079Z" level=info msg="CreateContainer within sandbox \"5957363c0b4d888b729a4025afd760852239d0bc7803e04bc78d63185ef96799\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 01:30:54.530093 containerd[1434]: time="2024-12-13T01:30:54.530044581Z" level=info msg="CreateContainer within sandbox \"5957363c0b4d888b729a4025afd760852239d0bc7803e04bc78d63185ef96799\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"094b270cd30331a51d2bc4fb3c9a5f9739e54c16171850af746dd2382882aefd\"" Dec 13 01:30:54.530569 containerd[1434]: time="2024-12-13T01:30:54.530533742Z" level=info msg="StartContainer for \"094b270cd30331a51d2bc4fb3c9a5f9739e54c16171850af746dd2382882aefd\"" Dec 13 01:30:54.553697 systemd[1]: Started cri-containerd-094b270cd30331a51d2bc4fb3c9a5f9739e54c16171850af746dd2382882aefd.scope - libcontainer container 094b270cd30331a51d2bc4fb3c9a5f9739e54c16171850af746dd2382882aefd. 
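The re-pull above finishes in 255.632383ms after reading only 61 bytes, and containerd records it as an ImageUpdate rather than an ImageCreate, which suggests the layers were already on the node from the pull at 01:30:36 (67,696,939 bytes in 5.739137594s) and only the tag/digest had to be re-resolved. Comparing the two logged pulls (Python, figures copied from the records):

    first_pull_s,  first_bytes  = 5.739137594, 67_696_939   # initial pull at 01:30:36
    second_pull_s, second_bytes = 0.255632383, 61            # re-pull above at 01:30:54
    print(f"speedup ~{first_pull_s / second_pull_s:.0f}x, "
          f"bytes not re-downloaded: {first_bytes - second_bytes:,}")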
Dec 13 01:30:54.570201 kubelet[1730]: E1213 01:30:54.569963 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:54.572092 containerd[1434]: time="2024-12-13T01:30:54.572056418Z" level=info msg="StartContainer for \"094b270cd30331a51d2bc4fb3c9a5f9739e54c16171850af746dd2382882aefd\" returns successfully" Dec 13 01:30:54.770458 kubelet[1730]: I1213 01:30:54.770304 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.512320246 podStartE2EDuration="15.770273711s" podCreationTimestamp="2024-12-13 01:30:39 +0000 UTC" firstStartedPulling="2024-12-13 01:30:54.245255533 +0000 UTC m=+44.740238312" lastFinishedPulling="2024-12-13 01:30:54.503208998 +0000 UTC m=+44.998191777" observedRunningTime="2024-12-13 01:30:54.76965103 +0000 UTC m=+45.264633809" watchObservedRunningTime="2024-12-13 01:30:54.770273711 +0000 UTC m=+45.265256490" Dec 13 01:30:55.571144 kubelet[1730]: E1213 01:30:55.571097 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:55.910710 systemd-networkd[1358]: lxc4c20fe6f5168: Gained IPv6LL Dec 13 01:30:56.571853 kubelet[1730]: E1213 01:30:56.571813 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:56.759613 containerd[1434]: time="2024-12-13T01:30:56.759562832Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:30:56.764809 containerd[1434]: time="2024-12-13T01:30:56.764770756Z" level=info msg="StopContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" with timeout 2 (s)" Dec 13 01:30:56.765020 containerd[1434]: time="2024-12-13T01:30:56.764989357Z" level=info msg="Stop container \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" with signal terminated" Dec 13 01:30:56.770581 systemd-networkd[1358]: lxc_health: Link DOWN Dec 13 01:30:56.770587 systemd-networkd[1358]: lxc_health: Lost carrier Dec 13 01:30:56.798312 systemd[1]: cri-containerd-f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d.scope: Deactivated successfully. Dec 13 01:30:56.798631 systemd[1]: cri-containerd-f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d.scope: Consumed 6.374s CPU time. Dec 13 01:30:56.813854 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d-rootfs.mount: Deactivated successfully. 
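The teardown below stops the cilium-agent container with "signal terminated" (SIGTERM) and a two-second timeout, which is how long containerd waits before force-killing it. The scope accounting that follows ("Consumed 6.374s CPU time") also gives a rough utilisation figure for the agent's run: StartContainer returned at 01:30:23.778 and the scope is deactivated at 01:30:56.798, so the window is about 33 seconds (Python, figures copied from the records):

    started, stopped = 23.778, 56.798   # StartContainer return / scope deactivation, secs past 01:30
    cpu_seconds = 6.374                 # "Consumed 6.374s CPU time"
    print(f"~{100 * cpu_seconds / (stopped - started):.0f}% of one CPU "
          f"over ~{stopped - started:.0f}s")   # roughly 19% over ~33s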
Dec 13 01:30:56.855820 containerd[1434]: time="2024-12-13T01:30:56.855563306Z" level=info msg="shim disconnected" id=f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d namespace=k8s.io Dec 13 01:30:56.855820 containerd[1434]: time="2024-12-13T01:30:56.855616906Z" level=warning msg="cleaning up after shim disconnected" id=f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d namespace=k8s.io Dec 13 01:30:56.855820 containerd[1434]: time="2024-12-13T01:30:56.855628226Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:56.868386 containerd[1434]: time="2024-12-13T01:30:56.868261316Z" level=info msg="StopContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" returns successfully" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.868960356Z" level=info msg="StopPodSandbox for \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\"" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.869016356Z" level=info msg="Container to stop \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.869028556Z" level=info msg="Container to stop \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.869037516Z" level=info msg="Container to stop \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.869046356Z" level=info msg="Container to stop \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:56.869166 containerd[1434]: time="2024-12-13T01:30:56.869055636Z" level=info msg="Container to stop \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 01:30:56.870581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b-shm.mount: Deactivated successfully. Dec 13 01:30:56.874137 systemd[1]: cri-containerd-71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b.scope: Deactivated successfully. Dec 13 01:30:56.892060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b-rootfs.mount: Deactivated successfully. 
Dec 13 01:30:56.894913 containerd[1434]: time="2024-12-13T01:30:56.894859976Z" level=info msg="shim disconnected" id=71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b namespace=k8s.io Dec 13 01:30:56.894913 containerd[1434]: time="2024-12-13T01:30:56.894912456Z" level=warning msg="cleaning up after shim disconnected" id=71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b namespace=k8s.io Dec 13 01:30:56.895049 containerd[1434]: time="2024-12-13T01:30:56.894921616Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:30:56.910184 containerd[1434]: time="2024-12-13T01:30:56.910143188Z" level=info msg="TearDown network for sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" successfully" Dec 13 01:30:56.910184 containerd[1434]: time="2024-12-13T01:30:56.910178508Z" level=info msg="StopPodSandbox for \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" returns successfully" Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933451 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-cgroup\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933497 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-config-path\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933524 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-etc-cni-netd\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933554 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-bpf-maps\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933573 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-lib-modules\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935233 kubelet[1730]: I1213 01:30:56.933596 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-xtables-lock\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933588 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933618 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-net\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933641 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-kernel\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933660 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5bl8\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-kube-api-access-x5bl8\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933679 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-hubble-tls\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935495 kubelet[1730]: I1213 01:30:56.933692 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-run\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933705 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-hostproc\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933725 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5853b9be-d5d5-49ec-8381-edf5f18df523-clustermesh-secrets\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933740 1730 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cni-path\") pod \"5853b9be-d5d5-49ec-8381-edf5f18df523\" (UID: \"5853b9be-d5d5-49ec-8381-edf5f18df523\") " Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933768 1730 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-cgroup\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933640 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935642 kubelet[1730]: I1213 01:30:56.933653 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935803 kubelet[1730]: I1213 01:30:56.933666 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935803 kubelet[1730]: I1213 01:30:56.933676 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935803 kubelet[1730]: I1213 01:30:56.933776 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935803 kubelet[1730]: I1213 01:30:56.933802 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cni-path" (OuterVolumeSpecName: "cni-path") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935803 kubelet[1730]: I1213 01:30:56.933815 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935908 kubelet[1730]: I1213 01:30:56.934126 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935908 kubelet[1730]: I1213 01:30:56.934409 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-hostproc" (OuterVolumeSpecName: "hostproc") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 01:30:56.935908 kubelet[1730]: I1213 01:30:56.935451 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 01:30:56.940759 kubelet[1730]: I1213 01:30:56.940460 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5853b9be-d5d5-49ec-8381-edf5f18df523-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 01:30:56.940759 kubelet[1730]: I1213 01:30:56.940484 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:30:56.941314 kubelet[1730]: I1213 01:30:56.941261 1730 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-kube-api-access-x5bl8" (OuterVolumeSpecName: "kube-api-access-x5bl8") pod "5853b9be-d5d5-49ec-8381-edf5f18df523" (UID: "5853b9be-d5d5-49ec-8381-edf5f18df523"). InnerVolumeSpecName "kube-api-access-x5bl8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 01:30:56.941862 systemd[1]: var-lib-kubelet-pods-5853b9be\x2dd5d5\x2d49ec\x2d8381\x2dedf5f18df523-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 01:30:56.941966 systemd[1]: var-lib-kubelet-pods-5853b9be\x2dd5d5\x2d49ec\x2d8381\x2dedf5f18df523-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 01:30:57.034733 kubelet[1730]: I1213 01:30:57.034691 1730 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-config-path\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034733 kubelet[1730]: I1213 01:30:57.034726 1730 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-bpf-maps\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034733 kubelet[1730]: I1213 01:30:57.034736 1730 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-lib-modules\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034733 kubelet[1730]: I1213 01:30:57.034743 1730 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-xtables-lock\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034752 1730 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-net\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034760 1730 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-etc-cni-netd\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034767 1730 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x5bl8\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-kube-api-access-x5bl8\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034775 1730 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5853b9be-d5d5-49ec-8381-edf5f18df523-hubble-tls\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034782 1730 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cilium-run\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034790 1730 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-hostproc\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034797 1730 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-host-proc-sys-kernel\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.034908 kubelet[1730]: I1213 01:30:57.034804 1730 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5853b9be-d5d5-49ec-8381-edf5f18df523-cni-path\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.035067 kubelet[1730]: I1213 01:30:57.034811 1730 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5853b9be-d5d5-49ec-8381-edf5f18df523-clustermesh-secrets\") on node \"10.0.0.68\" DevicePath \"\"" Dec 13 01:30:57.572939 kubelet[1730]: E1213 
01:30:57.572898 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:57.744311 systemd[1]: var-lib-kubelet-pods-5853b9be\x2dd5d5\x2d49ec\x2d8381\x2dedf5f18df523-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5bl8.mount: Deactivated successfully. Dec 13 01:30:57.769552 kubelet[1730]: I1213 01:30:57.769508 1730 scope.go:117] "RemoveContainer" containerID="f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d" Dec 13 01:30:57.771286 containerd[1434]: time="2024-12-13T01:30:57.771247172Z" level=info msg="RemoveContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\"" Dec 13 01:30:57.774085 systemd[1]: Removed slice kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice - libcontainer container kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice. Dec 13 01:30:57.774233 systemd[1]: kubepods-burstable-pod5853b9be_d5d5_49ec_8381_edf5f18df523.slice: Consumed 6.501s CPU time. Dec 13 01:30:57.774594 containerd[1434]: time="2024-12-13T01:30:57.774490734Z" level=info msg="RemoveContainer for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" returns successfully" Dec 13 01:30:57.775131 kubelet[1730]: I1213 01:30:57.775090 1730 scope.go:117] "RemoveContainer" containerID="4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51" Dec 13 01:30:57.776308 containerd[1434]: time="2024-12-13T01:30:57.776247816Z" level=info msg="RemoveContainer for \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\"" Dec 13 01:30:57.779071 containerd[1434]: time="2024-12-13T01:30:57.779033098Z" level=info msg="RemoveContainer for \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\" returns successfully" Dec 13 01:30:57.779311 kubelet[1730]: I1213 01:30:57.779202 1730 scope.go:117] "RemoveContainer" containerID="c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39" Dec 13 01:30:57.780623 containerd[1434]: time="2024-12-13T01:30:57.780583739Z" level=info msg="RemoveContainer for \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\"" Dec 13 01:30:57.788591 containerd[1434]: time="2024-12-13T01:30:57.788553424Z" level=info msg="RemoveContainer for \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\" returns successfully" Dec 13 01:30:57.788788 kubelet[1730]: I1213 01:30:57.788752 1730 scope.go:117] "RemoveContainer" containerID="5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507" Dec 13 01:30:57.789668 containerd[1434]: time="2024-12-13T01:30:57.789646305Z" level=info msg="RemoveContainer for \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\"" Dec 13 01:30:57.791675 containerd[1434]: time="2024-12-13T01:30:57.791639387Z" level=info msg="RemoveContainer for \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\" returns successfully" Dec 13 01:30:57.791842 kubelet[1730]: I1213 01:30:57.791811 1730 scope.go:117] "RemoveContainer" containerID="04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d" Dec 13 01:30:57.792803 containerd[1434]: time="2024-12-13T01:30:57.792772667Z" level=info msg="RemoveContainer for \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\"" Dec 13 01:30:57.794963 containerd[1434]: time="2024-12-13T01:30:57.794926869Z" level=info msg="RemoveContainer for \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\" returns successfully" Dec 13 01:30:57.795151 kubelet[1730]: I1213 
01:30:57.795117 1730 scope.go:117] "RemoveContainer" containerID="f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d" Dec 13 01:30:57.795336 containerd[1434]: time="2024-12-13T01:30:57.795296229Z" level=error msg="ContainerStatus for \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\": not found" Dec 13 01:30:57.795470 kubelet[1730]: E1213 01:30:57.795441 1730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\": not found" containerID="f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d" Dec 13 01:30:57.795558 kubelet[1730]: I1213 01:30:57.795476 1730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d"} err="failed to get container status \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f57cd6af7840665dcd5f2ac83f39d2dc39bc17712b8b2e1e0ffb2a4225312c9d\": not found" Dec 13 01:30:57.795592 kubelet[1730]: I1213 01:30:57.795561 1730 scope.go:117] "RemoveContainer" containerID="4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51" Dec 13 01:30:57.795780 containerd[1434]: time="2024-12-13T01:30:57.795741830Z" level=error msg="ContainerStatus for \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\": not found" Dec 13 01:30:57.795884 kubelet[1730]: E1213 01:30:57.795860 1730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\": not found" containerID="4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51" Dec 13 01:30:57.795910 kubelet[1730]: I1213 01:30:57.795891 1730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51"} err="failed to get container status \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\": rpc error: code = NotFound desc = an error occurred when try to find container \"4d78014560e51d4b07b39b413df187c17bd98a5bbeb5798f7e4880c0f7df3a51\": not found" Dec 13 01:30:57.795932 kubelet[1730]: I1213 01:30:57.795910 1730 scope.go:117] "RemoveContainer" containerID="c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39" Dec 13 01:30:57.796079 containerd[1434]: time="2024-12-13T01:30:57.796052710Z" level=error msg="ContainerStatus for \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\": not found" Dec 13 01:30:57.796193 kubelet[1730]: E1213 01:30:57.796172 1730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\": not found" containerID="c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39" Dec 13 01:30:57.796215 kubelet[1730]: I1213 01:30:57.796200 1730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39"} err="failed to get container status \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\": rpc error: code = NotFound desc = an error occurred when try to find container \"c86518c15ea70177e7c7b5e5a2acd29c786a3155d215ab5d9905a838bff1fc39\": not found" Dec 13 01:30:57.796239 kubelet[1730]: I1213 01:30:57.796219 1730 scope.go:117] "RemoveContainer" containerID="5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507" Dec 13 01:30:57.796418 containerd[1434]: time="2024-12-13T01:30:57.796389670Z" level=error msg="ContainerStatus for \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\": not found" Dec 13 01:30:57.796530 kubelet[1730]: E1213 01:30:57.796512 1730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\": not found" containerID="5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507" Dec 13 01:30:57.796565 kubelet[1730]: I1213 01:30:57.796550 1730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507"} err="failed to get container status \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\": rpc error: code = NotFound desc = an error occurred when try to find container \"5eeac84c0096f57729faf9ee3548d63e3c16811ba17795bcc78eed0276efd507\": not found" Dec 13 01:30:57.796590 kubelet[1730]: I1213 01:30:57.796573 1730 scope.go:117] "RemoveContainer" containerID="04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d" Dec 13 01:30:57.796763 containerd[1434]: time="2024-12-13T01:30:57.796733030Z" level=error msg="ContainerStatus for \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\": not found" Dec 13 01:30:57.796859 kubelet[1730]: E1213 01:30:57.796842 1730 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\": not found" containerID="04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d" Dec 13 01:30:57.796886 kubelet[1730]: I1213 01:30:57.796870 1730 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d"} err="failed to get container status \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"04ecd5d53b85e9f088a042abbb1039422669c6afce24187abbcd25c5a3bc5c6d\": not found" Dec 13 01:30:58.573842 kubelet[1730]: E1213 01:30:58.573769 1730 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:58.678003 kubelet[1730]: I1213 01:30:58.677959 1730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" path="/var/lib/kubelet/pods/5853b9be-d5d5-49ec-8381-edf5f18df523/volumes" Dec 13 01:30:59.574845 kubelet[1730]: E1213 01:30:59.574801 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:30:59.948330 kubelet[1730]: E1213 01:30:59.948286 1730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="cilium-agent" Dec 13 01:30:59.948330 kubelet[1730]: E1213 01:30:59.948321 1730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="mount-cgroup" Dec 13 01:30:59.948330 kubelet[1730]: E1213 01:30:59.948328 1730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="apply-sysctl-overwrites" Dec 13 01:30:59.948330 kubelet[1730]: E1213 01:30:59.948335 1730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="mount-bpf-fs" Dec 13 01:30:59.948330 kubelet[1730]: E1213 01:30:59.948341 1730 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="clean-cilium-state" Dec 13 01:30:59.948572 kubelet[1730]: I1213 01:30:59.948372 1730 memory_manager.go:354] "RemoveStaleState removing state" podUID="5853b9be-d5d5-49ec-8381-edf5f18df523" containerName="cilium-agent" Dec 13 01:30:59.951422 kubelet[1730]: W1213 01:30:59.951342 1730 reflector.go:561] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.68" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.68' and this object Dec 13 01:30:59.951422 kubelet[1730]: E1213 01:30:59.951388 1730 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.68\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.68' and this object" logger="UnhandledError" Dec 13 01:30:59.953683 systemd[1]: Created slice kubepods-besteffort-podab52b9a8_8674_425f_9a98_afaa8cd837c8.slice - libcontainer container kubepods-besteffort-podab52b9a8_8674_425f_9a98_afaa8cd837c8.slice. Dec 13 01:30:59.979528 systemd[1]: Created slice kubepods-burstable-poda770025e_71c7_41d8_85e3_815ba2e8bb59.slice - libcontainer container kubepods-burstable-poda770025e_71c7_41d8_85e3_815ba2e8bb59.slice. 
Dec 13 01:31:00.052058 kubelet[1730]: I1213 01:31:00.051992 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-hostproc\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052058 kubelet[1730]: I1213 01:31:00.052043 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-host-proc-sys-net\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052058 kubelet[1730]: I1213 01:31:00.052070 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-bpf-maps\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052085 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-cni-path\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052101 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-lib-modules\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052118 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s57zb\" (UniqueName: \"kubernetes.io/projected/a770025e-71c7-41d8-85e3-815ba2e8bb59-kube-api-access-s57zb\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052137 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-config-path\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052155 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a770025e-71c7-41d8-85e3-815ba2e8bb59-hubble-tls\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052269 kubelet[1730]: I1213 01:31:00.052174 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-run\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052406 kubelet[1730]: I1213 01:31:00.052227 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-cgroup\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052406 kubelet[1730]: I1213 01:31:00.052242 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a770025e-71c7-41d8-85e3-815ba2e8bb59-clustermesh-secrets\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052406 kubelet[1730]: I1213 01:31:00.052261 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4nqv\" (UniqueName: \"kubernetes.io/projected/ab52b9a8-8674-425f-9a98-afaa8cd837c8-kube-api-access-l4nqv\") pod \"cilium-operator-5d85765b45-wsshc\" (UID: \"ab52b9a8-8674-425f-9a98-afaa8cd837c8\") " pod="kube-system/cilium-operator-5d85765b45-wsshc" Dec 13 01:31:00.052406 kubelet[1730]: I1213 01:31:00.052277 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-host-proc-sys-kernel\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052406 kubelet[1730]: I1213 01:31:00.052293 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ab52b9a8-8674-425f-9a98-afaa8cd837c8-cilium-config-path\") pod \"cilium-operator-5d85765b45-wsshc\" (UID: \"ab52b9a8-8674-425f-9a98-afaa8cd837c8\") " pod="kube-system/cilium-operator-5d85765b45-wsshc" Dec 13 01:31:00.052509 kubelet[1730]: I1213 01:31:00.052307 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-etc-cni-netd\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052509 kubelet[1730]: I1213 01:31:00.052323 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a770025e-71c7-41d8-85e3-815ba2e8bb59-xtables-lock\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.052509 kubelet[1730]: I1213 01:31:00.052339 1730 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-ipsec-secrets\") pod \"cilium-k8g7d\" (UID: \"a770025e-71c7-41d8-85e3-815ba2e8bb59\") " pod="kube-system/cilium-k8g7d" Dec 13 01:31:00.575289 kubelet[1730]: E1213 01:31:00.575246 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:00.689982 kubelet[1730]: E1213 01:31:00.689939 1730 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:31:01.154381 kubelet[1730]: E1213 01:31:01.154307 1730 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the 
condition Dec 13 01:31:01.154504 kubelet[1730]: E1213 01:31:01.154405 1730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab52b9a8-8674-425f-9a98-afaa8cd837c8-cilium-config-path podName:ab52b9a8-8674-425f-9a98-afaa8cd837c8 nodeName:}" failed. No retries permitted until 2024-12-13 01:31:01.654381082 +0000 UTC m=+52.149363821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/ab52b9a8-8674-425f-9a98-afaa8cd837c8-cilium-config-path") pod "cilium-operator-5d85765b45-wsshc" (UID: "ab52b9a8-8674-425f-9a98-afaa8cd837c8") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:01.154703 kubelet[1730]: E1213 01:31:01.154318 1730 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:01.154703 kubelet[1730]: E1213 01:31:01.154681 1730 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-config-path podName:a770025e-71c7-41d8-85e3-815ba2e8bb59 nodeName:}" failed. No retries permitted until 2024-12-13 01:31:01.654664722 +0000 UTC m=+52.149647501 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/a770025e-71c7-41d8-85e3-815ba2e8bb59-cilium-config-path") pod "cilium-k8g7d" (UID: "a770025e-71c7-41d8-85e3-815ba2e8bb59") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:31:01.576439 kubelet[1730]: E1213 01:31:01.576394 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:01.592246 kubelet[1730]: I1213 01:31:01.592201 1730 setters.go:600] "Node became not ready" node="10.0.0.68" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T01:31:01Z","lastTransitionTime":"2024-12-13T01:31:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 01:31:01.757098 kubelet[1730]: E1213 01:31:01.757012 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:01.757841 containerd[1434]: time="2024-12-13T01:31:01.757711737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wsshc,Uid:ab52b9a8-8674-425f-9a98-afaa8cd837c8,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:01.778591 containerd[1434]: time="2024-12-13T01:31:01.778063628Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:01.778591 containerd[1434]: time="2024-12-13T01:31:01.778421508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:01.778591 containerd[1434]: time="2024-12-13T01:31:01.778440988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:01.778591 containerd[1434]: time="2024-12-13T01:31:01.778520188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:01.793141 kubelet[1730]: E1213 01:31:01.792710 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:01.793275 containerd[1434]: time="2024-12-13T01:31:01.793042316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8g7d,Uid:a770025e-71c7-41d8-85e3-815ba2e8bb59,Namespace:kube-system,Attempt:0,}" Dec 13 01:31:01.793726 systemd[1]: Started cri-containerd-89b040c296a220f243bf569c08ca2310085c84f58f513fe3db7ecdc225c05fa1.scope - libcontainer container 89b040c296a220f243bf569c08ca2310085c84f58f513fe3db7ecdc225c05fa1. Dec 13 01:31:01.816275 containerd[1434]: time="2024-12-13T01:31:01.816157249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:31:01.816462 containerd[1434]: time="2024-12-13T01:31:01.816339049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:31:01.816462 containerd[1434]: time="2024-12-13T01:31:01.816399369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:01.818665 containerd[1434]: time="2024-12-13T01:31:01.816908250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:31:01.831988 containerd[1434]: time="2024-12-13T01:31:01.831889178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wsshc,Uid:ab52b9a8-8674-425f-9a98-afaa8cd837c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"89b040c296a220f243bf569c08ca2310085c84f58f513fe3db7ecdc225c05fa1\"" Dec 13 01:31:01.833256 kubelet[1730]: E1213 01:31:01.833233 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:01.834403 containerd[1434]: time="2024-12-13T01:31:01.834191939Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 01:31:01.837743 systemd[1]: Started cri-containerd-89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b.scope - libcontainer container 89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b. 
Dec 13 01:31:01.858948 containerd[1434]: time="2024-12-13T01:31:01.858903753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k8g7d,Uid:a770025e-71c7-41d8-85e3-815ba2e8bb59,Namespace:kube-system,Attempt:0,} returns sandbox id \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\"" Dec 13 01:31:01.860505 kubelet[1730]: E1213 01:31:01.860474 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:01.862302 containerd[1434]: time="2024-12-13T01:31:01.862257035Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 01:31:01.873772 containerd[1434]: time="2024-12-13T01:31:01.873724561Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa\"" Dec 13 01:31:01.874697 containerd[1434]: time="2024-12-13T01:31:01.874275721Z" level=info msg="StartContainer for \"c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa\"" Dec 13 01:31:01.897703 systemd[1]: Started cri-containerd-c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa.scope - libcontainer container c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa. Dec 13 01:31:01.917083 containerd[1434]: time="2024-12-13T01:31:01.917042425Z" level=info msg="StartContainer for \"c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa\" returns successfully" Dec 13 01:31:01.966734 systemd[1]: cri-containerd-c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa.scope: Deactivated successfully. Dec 13 01:31:01.992192 containerd[1434]: time="2024-12-13T01:31:01.992121867Z" level=info msg="shim disconnected" id=c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa namespace=k8s.io Dec 13 01:31:01.992192 containerd[1434]: time="2024-12-13T01:31:01.992181507Z" level=warning msg="cleaning up after shim disconnected" id=c00fb41524f81eeba5bd9293ba277fb32bf52e95c789cb4f5d03e91525b233fa namespace=k8s.io Dec 13 01:31:01.992192 containerd[1434]: time="2024-12-13T01:31:01.992190987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:02.576560 kubelet[1730]: E1213 01:31:02.576483 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:02.780482 kubelet[1730]: E1213 01:31:02.780435 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:02.782163 containerd[1434]: time="2024-12-13T01:31:02.782126078Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 01:31:02.796633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559114292.mount: Deactivated successfully. 
Dec 13 01:31:02.800082 containerd[1434]: time="2024-12-13T01:31:02.799989808Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e\"" Dec 13 01:31:02.801634 containerd[1434]: time="2024-12-13T01:31:02.800809648Z" level=info msg="StartContainer for \"5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e\"" Dec 13 01:31:02.835805 systemd[1]: Started cri-containerd-5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e.scope - libcontainer container 5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e. Dec 13 01:31:02.856033 containerd[1434]: time="2024-12-13T01:31:02.855986197Z" level=info msg="StartContainer for \"5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e\" returns successfully" Dec 13 01:31:02.870367 systemd[1]: cri-containerd-5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e.scope: Deactivated successfully. Dec 13 01:31:02.889919 containerd[1434]: time="2024-12-13T01:31:02.889863894Z" level=info msg="shim disconnected" id=5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e namespace=k8s.io Dec 13 01:31:02.889919 containerd[1434]: time="2024-12-13T01:31:02.889914934Z" level=warning msg="cleaning up after shim disconnected" id=5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e namespace=k8s.io Dec 13 01:31:02.889919 containerd[1434]: time="2024-12-13T01:31:02.889923374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:03.577154 kubelet[1730]: E1213 01:31:03.577104 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:03.768779 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa8270b2ecab596a93ae0c4a2a12e812c954d4f3aac1d6aedc0cb470ed7d82e-rootfs.mount: Deactivated successfully. Dec 13 01:31:03.786373 kubelet[1730]: E1213 01:31:03.786340 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:03.788010 containerd[1434]: time="2024-12-13T01:31:03.787903576Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 01:31:03.798645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount295086056.mount: Deactivated successfully. Dec 13 01:31:03.802383 containerd[1434]: time="2024-12-13T01:31:03.802324823Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826\"" Dec 13 01:31:03.802918 containerd[1434]: time="2024-12-13T01:31:03.802872904Z" level=info msg="StartContainer for \"14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826\"" Dec 13 01:31:03.834762 systemd[1]: Started cri-containerd-14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826.scope - libcontainer container 14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826. 
Dec 13 01:31:03.855422 containerd[1434]: time="2024-12-13T01:31:03.855318329Z" level=info msg="StartContainer for \"14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826\" returns successfully" Dec 13 01:31:03.855589 systemd[1]: cri-containerd-14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826.scope: Deactivated successfully. Dec 13 01:31:03.875821 containerd[1434]: time="2024-12-13T01:31:03.875746419Z" level=info msg="shim disconnected" id=14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826 namespace=k8s.io Dec 13 01:31:03.875821 containerd[1434]: time="2024-12-13T01:31:03.875806219Z" level=warning msg="cleaning up after shim disconnected" id=14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826 namespace=k8s.io Dec 13 01:31:03.875821 containerd[1434]: time="2024-12-13T01:31:03.875814579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:04.578019 kubelet[1730]: E1213 01:31:04.577978 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:04.768858 systemd[1]: run-containerd-runc-k8s.io-14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826-runc.GrqY48.mount: Deactivated successfully. Dec 13 01:31:04.768959 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14b29b29929d2645bae4c7b26a51e418a57a37c36852c154616dfd2438868826-rootfs.mount: Deactivated successfully. Dec 13 01:31:04.790780 kubelet[1730]: E1213 01:31:04.790673 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:04.792275 containerd[1434]: time="2024-12-13T01:31:04.792236003Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 01:31:04.805473 containerd[1434]: time="2024-12-13T01:31:04.805434729Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce\"" Dec 13 01:31:04.806109 containerd[1434]: time="2024-12-13T01:31:04.805896689Z" level=info msg="StartContainer for \"095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce\"" Dec 13 01:31:04.838242 systemd[1]: Started cri-containerd-095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce.scope - libcontainer container 095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce. Dec 13 01:31:04.856219 systemd[1]: cri-containerd-095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce.scope: Deactivated successfully. 
Dec 13 01:31:04.857124 containerd[1434]: time="2024-12-13T01:31:04.857085312Z" level=info msg="StartContainer for \"095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce\" returns successfully" Dec 13 01:31:04.877413 containerd[1434]: time="2024-12-13T01:31:04.877359681Z" level=info msg="shim disconnected" id=095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce namespace=k8s.io Dec 13 01:31:04.877727 containerd[1434]: time="2024-12-13T01:31:04.877642162Z" level=warning msg="cleaning up after shim disconnected" id=095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce namespace=k8s.io Dec 13 01:31:04.877727 containerd[1434]: time="2024-12-13T01:31:04.877661882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:31:04.887170 containerd[1434]: time="2024-12-13T01:31:04.887109846Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:31:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:31:05.151557 containerd[1434]: time="2024-12-13T01:31:05.151444563Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:05.152602 containerd[1434]: time="2024-12-13T01:31:05.152533843Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17138342" Dec 13 01:31:05.153251 containerd[1434]: time="2024-12-13T01:31:05.153138003Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:31:05.155427 containerd[1434]: time="2024-12-13T01:31:05.155012644Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.320788265s" Dec 13 01:31:05.155427 containerd[1434]: time="2024-12-13T01:31:05.155049084Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Dec 13 01:31:05.157568 containerd[1434]: time="2024-12-13T01:31:05.157521005Z" level=info msg="CreateContainer within sandbox \"89b040c296a220f243bf569c08ca2310085c84f58f513fe3db7ecdc225c05fa1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 01:31:05.170820 containerd[1434]: time="2024-12-13T01:31:05.170774291Z" level=info msg="CreateContainer within sandbox \"89b040c296a220f243bf569c08ca2310085c84f58f513fe3db7ecdc225c05fa1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d36e9f08012ff65509b1c33177a1cee79274595b324ee12be967014a2fd4bed6\"" Dec 13 01:31:05.171214 containerd[1434]: time="2024-12-13T01:31:05.171171571Z" level=info msg="StartContainer for \"d36e9f08012ff65509b1c33177a1cee79274595b324ee12be967014a2fd4bed6\"" Dec 13 01:31:05.199719 systemd[1]: Started 
cri-containerd-d36e9f08012ff65509b1c33177a1cee79274595b324ee12be967014a2fd4bed6.scope - libcontainer container d36e9f08012ff65509b1c33177a1cee79274595b324ee12be967014a2fd4bed6. Dec 13 01:31:05.272904 containerd[1434]: time="2024-12-13T01:31:05.272796495Z" level=info msg="StartContainer for \"d36e9f08012ff65509b1c33177a1cee79274595b324ee12be967014a2fd4bed6\" returns successfully" Dec 13 01:31:05.578156 kubelet[1730]: E1213 01:31:05.578083 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:05.691627 kubelet[1730]: E1213 01:31:05.691570 1730 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 01:31:05.769934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-095f12613d6e03bc1cbd3595f111b7f70703c4bdb68e69ea8f06a32f6f86f7ce-rootfs.mount: Deactivated successfully. Dec 13 01:31:05.795329 kubelet[1730]: E1213 01:31:05.795294 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:05.796912 kubelet[1730]: E1213 01:31:05.796892 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:05.797504 containerd[1434]: time="2024-12-13T01:31:05.797467840Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 01:31:05.811397 containerd[1434]: time="2024-12-13T01:31:05.811302366Z" level=info msg="CreateContainer within sandbox \"89bcda2264e5d2fb8de8f3e5ea169546149014f20163a4895c0bb028e72f5e6b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93\"" Dec 13 01:31:05.811867 containerd[1434]: time="2024-12-13T01:31:05.811839686Z" level=info msg="StartContainer for \"9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93\"" Dec 13 01:31:05.824938 kubelet[1730]: I1213 01:31:05.824829 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wsshc" podStartSLOduration=3.502591305 podStartE2EDuration="6.824814731s" podCreationTimestamp="2024-12-13 01:30:59 +0000 UTC" firstStartedPulling="2024-12-13 01:31:01.833939739 +0000 UTC m=+52.328922518" lastFinishedPulling="2024-12-13 01:31:05.156163165 +0000 UTC m=+55.651145944" observedRunningTime="2024-12-13 01:31:05.824122731 +0000 UTC m=+56.319105510" watchObservedRunningTime="2024-12-13 01:31:05.824814731 +0000 UTC m=+56.319797510" Dec 13 01:31:05.846694 systemd[1]: Started cri-containerd-9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93.scope - libcontainer container 9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93. 
Dec 13 01:31:05.878895 containerd[1434]: time="2024-12-13T01:31:05.878848275Z" level=info msg="StartContainer for \"9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93\" returns successfully" Dec 13 01:31:06.133637 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Dec 13 01:31:06.578937 kubelet[1730]: E1213 01:31:06.578877 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:06.769026 systemd[1]: run-containerd-runc-k8s.io-9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93-runc.rayFiK.mount: Deactivated successfully. Dec 13 01:31:06.801994 kubelet[1730]: E1213 01:31:06.801581 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:06.801994 kubelet[1730]: E1213 01:31:06.801933 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:06.818217 kubelet[1730]: I1213 01:31:06.818162 1730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k8g7d" podStartSLOduration=7.818149015 podStartE2EDuration="7.818149015s" podCreationTimestamp="2024-12-13 01:30:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:31:06.818093615 +0000 UTC m=+57.313076394" watchObservedRunningTime="2024-12-13 01:31:06.818149015 +0000 UTC m=+57.313131794" Dec 13 01:31:07.579275 kubelet[1730]: E1213 01:31:07.579232 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:07.803662 kubelet[1730]: E1213 01:31:07.803274 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:08.579723 kubelet[1730]: E1213 01:31:08.579674 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:08.895724 systemd-networkd[1358]: lxc_health: Link UP Dec 13 01:31:08.902428 systemd-networkd[1358]: lxc_health: Gained carrier Dec 13 01:31:09.580317 kubelet[1730]: E1213 01:31:09.580272 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:09.796989 kubelet[1730]: E1213 01:31:09.796938 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:09.807070 kubelet[1730]: E1213 01:31:09.806968 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:10.543418 kubelet[1730]: E1213 01:31:10.543372 1730 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:10.568516 containerd[1434]: time="2024-12-13T01:31:10.568299727Z" level=info msg="StopPodSandbox for \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\"" Dec 13 01:31:10.569397 containerd[1434]: time="2024-12-13T01:31:10.569064928Z" level=info msg="TearDown network for 
sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" successfully" Dec 13 01:31:10.569397 containerd[1434]: time="2024-12-13T01:31:10.569092048Z" level=info msg="StopPodSandbox for \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" returns successfully" Dec 13 01:31:10.572560 containerd[1434]: time="2024-12-13T01:31:10.571277568Z" level=info msg="RemovePodSandbox for \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\"" Dec 13 01:31:10.572560 containerd[1434]: time="2024-12-13T01:31:10.571318848Z" level=info msg="Forcibly stopping sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\"" Dec 13 01:31:10.572560 containerd[1434]: time="2024-12-13T01:31:10.571389408Z" level=info msg="TearDown network for sandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" successfully" Dec 13 01:31:10.580436 containerd[1434]: time="2024-12-13T01:31:10.580397331Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:31:10.580642 containerd[1434]: time="2024-12-13T01:31:10.580621211Z" level=info msg="RemovePodSandbox \"71c6e36a48cb979ca451d00d68420a9c21e18f9d890a2f68597a4ddfbaa4c26b\" returns successfully" Dec 13 01:31:10.580913 kubelet[1730]: E1213 01:31:10.580882 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:10.737433 systemd[1]: run-containerd-runc-k8s.io-9f5691cc16221b6d6689c7b4eb51a173fe9387248e0efd819bfa24c25c6ddd93-runc.87l1KB.mount: Deactivated successfully. Dec 13 01:31:10.808901 kubelet[1730]: E1213 01:31:10.808569 1730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:31:10.950818 systemd-networkd[1358]: lxc_health: Gained IPv6LL Dec 13 01:31:11.581031 kubelet[1730]: E1213 01:31:11.580988 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:12.581997 kubelet[1730]: E1213 01:31:12.581945 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:13.589559 kubelet[1730]: E1213 01:31:13.582819 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:14.583976 kubelet[1730]: E1213 01:31:14.583936 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:15.584565 kubelet[1730]: E1213 01:31:15.584506 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 01:31:16.584717 kubelet[1730]: E1213 01:31:16.584652 1730 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"