Feb 13 15:26:17.896558 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 13 15:26:17.896579 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025 Feb 13 15:26:17.896589 kernel: KASLR enabled Feb 13 15:26:17.896595 kernel: efi: EFI v2.7 by EDK II Feb 13 15:26:17.896601 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Feb 13 15:26:17.896606 kernel: random: crng init done Feb 13 15:26:17.896613 kernel: secureboot: Secure boot disabled Feb 13 15:26:17.896619 kernel: ACPI: Early table checksum verification disabled Feb 13 15:26:17.896625 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Feb 13 15:26:17.896632 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Feb 13 15:26:17.896638 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896643 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896649 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896655 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896662 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896670 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896676 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896683 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896689 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 15:26:17.896695 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Feb 13 15:26:17.896701 kernel: NUMA: Failed to initialise from firmware Feb 13 15:26:17.896708 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:26:17.896714 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff] Feb 13 15:26:17.896720 kernel: Zone ranges: Feb 13 15:26:17.896726 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:26:17.896734 kernel: DMA32 empty Feb 13 15:26:17.896740 kernel: Normal empty Feb 13 15:26:17.896746 kernel: Movable zone start for each node Feb 13 15:26:17.896752 kernel: Early memory node ranges Feb 13 15:26:17.896759 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Feb 13 15:26:17.896765 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Feb 13 15:26:17.896771 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Feb 13 15:26:17.896777 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Feb 13 15:26:17.896783 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Feb 13 15:26:17.896789 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Feb 13 15:26:17.896795 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Feb 13 15:26:17.896801 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Feb 13 15:26:17.896809 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Feb 13 15:26:17.896815 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Feb 13 15:26:17.896822 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Feb 13 15:26:17.896831 kernel: psci: 
probing for conduit method from ACPI. Feb 13 15:26:17.896837 kernel: psci: PSCIv1.1 detected in firmware. Feb 13 15:26:17.896844 kernel: psci: Using standard PSCI v0.2 function IDs Feb 13 15:26:17.896852 kernel: psci: Trusted OS migration not required Feb 13 15:26:17.896858 kernel: psci: SMC Calling Convention v1.1 Feb 13 15:26:17.896865 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Feb 13 15:26:17.896872 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Feb 13 15:26:17.896879 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Feb 13 15:26:17.896885 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Feb 13 15:26:17.896892 kernel: Detected PIPT I-cache on CPU0 Feb 13 15:26:17.896898 kernel: CPU features: detected: GIC system register CPU interface Feb 13 15:26:17.896905 kernel: CPU features: detected: Hardware dirty bit management Feb 13 15:26:17.896911 kernel: CPU features: detected: Spectre-v4 Feb 13 15:26:17.896919 kernel: CPU features: detected: Spectre-BHB Feb 13 15:26:17.896931 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 13 15:26:17.896940 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 13 15:26:17.896947 kernel: CPU features: detected: ARM erratum 1418040 Feb 13 15:26:17.896954 kernel: CPU features: detected: SSBS not fully self-synchronizing Feb 13 15:26:17.896960 kernel: alternatives: applying boot alternatives Feb 13 15:26:17.896968 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:26:17.896975 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 15:26:17.896981 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 15:26:17.896988 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 15:26:17.896994 kernel: Fallback order for Node 0: 0 Feb 13 15:26:17.897002 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Feb 13 15:26:17.897009 kernel: Policy zone: DMA Feb 13 15:26:17.897015 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 15:26:17.897022 kernel: software IO TLB: area num 4. Feb 13 15:26:17.897029 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Feb 13 15:26:17.897036 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved) Feb 13 15:26:17.897045 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 15:26:17.897052 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 15:26:17.897062 kernel: rcu: RCU event tracing is enabled. Feb 13 15:26:17.897068 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 15:26:17.897075 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 15:26:17.897082 kernel: Tracing variant of Tasks RCU enabled. Feb 13 15:26:17.897091 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 13 15:26:17.897098 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 15:26:17.897105 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 13 15:26:17.897111 kernel: GICv3: 256 SPIs implemented Feb 13 15:26:17.897118 kernel: GICv3: 0 Extended SPIs implemented Feb 13 15:26:17.897124 kernel: Root IRQ handler: gic_handle_irq Feb 13 15:26:17.897131 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Feb 13 15:26:17.897138 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Feb 13 15:26:17.897145 kernel: ITS [mem 0x08080000-0x0809ffff] Feb 13 15:26:17.897151 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Feb 13 15:26:17.897158 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Feb 13 15:26:17.897166 kernel: GICv3: using LPI property table @0x00000000400f0000 Feb 13 15:26:17.897173 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Feb 13 15:26:17.897179 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 15:26:17.897186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:26:17.897193 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 13 15:26:17.897199 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 13 15:26:17.897206 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 13 15:26:17.897213 kernel: arm-pv: using stolen time PV Feb 13 15:26:17.897220 kernel: Console: colour dummy device 80x25 Feb 13 15:26:17.897227 kernel: ACPI: Core revision 20230628 Feb 13 15:26:17.897234 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 13 15:26:17.897242 kernel: pid_max: default: 32768 minimum: 301 Feb 13 15:26:17.897249 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 15:26:17.897256 kernel: landlock: Up and running. Feb 13 15:26:17.897263 kernel: SELinux: Initializing. Feb 13 15:26:17.897270 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:26:17.897277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 15:26:17.897284 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:26:17.897291 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 15:26:17.897297 kernel: rcu: Hierarchical SRCU implementation. Feb 13 15:26:17.897306 kernel: rcu: Max phase no-delay instances is 400. Feb 13 15:26:17.897313 kernel: Platform MSI: ITS@0x8080000 domain created Feb 13 15:26:17.897319 kernel: PCI/MSI: ITS@0x8080000 domain created Feb 13 15:26:17.897326 kernel: Remapping and enabling EFI services. Feb 13 15:26:17.897333 kernel: smp: Bringing up secondary CPUs ... 
Feb 13 15:26:17.897340 kernel: Detected PIPT I-cache on CPU1 Feb 13 15:26:17.897347 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Feb 13 15:26:17.897354 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Feb 13 15:26:17.897360 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:26:17.897369 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 13 15:26:17.897376 kernel: Detected PIPT I-cache on CPU2 Feb 13 15:26:17.897387 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Feb 13 15:26:17.897396 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Feb 13 15:26:17.897403 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:26:17.897411 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Feb 13 15:26:17.897418 kernel: Detected PIPT I-cache on CPU3 Feb 13 15:26:17.897425 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Feb 13 15:26:17.897432 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Feb 13 15:26:17.897442 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 13 15:26:17.897449 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Feb 13 15:26:17.897456 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 15:26:17.897464 kernel: SMP: Total of 4 processors activated. Feb 13 15:26:17.897472 kernel: CPU features: detected: 32-bit EL0 Support Feb 13 15:26:17.897480 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 13 15:26:17.897508 kernel: CPU features: detected: Common not Private translations Feb 13 15:26:17.897516 kernel: CPU features: detected: CRC32 instructions Feb 13 15:26:17.897527 kernel: CPU features: detected: Enhanced Virtualization Traps Feb 13 15:26:17.897535 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 13 15:26:17.897542 kernel: CPU features: detected: LSE atomic instructions Feb 13 15:26:17.897550 kernel: CPU features: detected: Privileged Access Never Feb 13 15:26:17.897559 kernel: CPU features: detected: RAS Extension Support Feb 13 15:26:17.897581 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Feb 13 15:26:17.897588 kernel: CPU: All CPU(s) started at EL1 Feb 13 15:26:17.897595 kernel: alternatives: applying system-wide alternatives Feb 13 15:26:17.897602 kernel: devtmpfs: initialized Feb 13 15:26:17.897611 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 15:26:17.897618 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 15:26:17.897625 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 15:26:17.897633 kernel: SMBIOS 3.0.0 present. 
Feb 13 15:26:17.897640 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Feb 13 15:26:17.897647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 15:26:17.897654 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 13 15:26:17.897661 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 13 15:26:17.897669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 13 15:26:17.897678 kernel: audit: initializing netlink subsys (disabled) Feb 13 15:26:17.897685 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Feb 13 15:26:17.897692 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 15:26:17.897699 kernel: cpuidle: using governor menu Feb 13 15:26:17.897706 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Feb 13 15:26:17.897713 kernel: ASID allocator initialised with 32768 entries Feb 13 15:26:17.897720 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 15:26:17.897727 kernel: Serial: AMBA PL011 UART driver Feb 13 15:26:17.897735 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Feb 13 15:26:17.897743 kernel: Modules: 0 pages in range for non-PLT usage Feb 13 15:26:17.897751 kernel: Modules: 508880 pages in range for PLT usage Feb 13 15:26:17.897758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 15:26:17.897765 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 15:26:17.897774 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Feb 13 15:26:17.897782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Feb 13 15:26:17.897789 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 15:26:17.897796 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 15:26:17.897803 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Feb 13 15:26:17.897812 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Feb 13 15:26:17.897819 kernel: ACPI: Added _OSI(Module Device) Feb 13 15:26:17.897826 kernel: ACPI: Added _OSI(Processor Device) Feb 13 15:26:17.897834 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 15:26:17.897841 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 15:26:17.897848 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 15:26:17.897855 kernel: ACPI: Interpreter enabled Feb 13 15:26:17.897862 kernel: ACPI: Using GIC for interrupt routing Feb 13 15:26:17.897869 kernel: ACPI: MCFG table detected, 1 entries Feb 13 15:26:17.897876 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Feb 13 15:26:17.897885 kernel: printk: console [ttyAMA0] enabled Feb 13 15:26:17.897892 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 15:26:17.898022 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 15:26:17.898102 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 15:26:17.898167 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 15:26:17.898233 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Feb 13 15:26:17.898296 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Feb 13 15:26:17.898308 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Feb 13 15:26:17.898315 
kernel: PCI host bridge to bus 0000:00 Feb 13 15:26:17.898386 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Feb 13 15:26:17.898447 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Feb 13 15:26:17.898528 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Feb 13 15:26:17.898588 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 15:26:17.898667 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Feb 13 15:26:17.898748 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 15:26:17.898822 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Feb 13 15:26:17.898890 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Feb 13 15:26:17.898961 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:26:17.899027 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Feb 13 15:26:17.899096 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Feb 13 15:26:17.899165 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Feb 13 15:26:17.899223 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Feb 13 15:26:17.899280 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Feb 13 15:26:17.899336 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Feb 13 15:26:17.899346 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Feb 13 15:26:17.899353 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Feb 13 15:26:17.899360 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Feb 13 15:26:17.899367 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Feb 13 15:26:17.899376 kernel: iommu: Default domain type: Translated Feb 13 15:26:17.899383 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 13 15:26:17.899390 kernel: efivars: Registered efivars operations Feb 13 15:26:17.899397 kernel: vgaarb: loaded Feb 13 15:26:17.899404 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 13 15:26:17.899411 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 15:26:17.899418 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 15:26:17.899425 kernel: pnp: PnP ACPI init Feb 13 15:26:17.899534 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Feb 13 15:26:17.899549 kernel: pnp: PnP ACPI: found 1 devices Feb 13 15:26:17.899556 kernel: NET: Registered PF_INET protocol family Feb 13 15:26:17.899563 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 15:26:17.899570 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 15:26:17.899578 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 15:26:17.899585 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 15:26:17.899592 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 15:26:17.899599 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 15:26:17.899607 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:26:17.899615 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 15:26:17.899622 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 15:26:17.899629 kernel: PCI: CLS 0 bytes, default 64 Feb 13 15:26:17.899636 kernel: kvm [1]: HYP mode not available 
Feb 13 15:26:17.899643 kernel: Initialise system trusted keyrings Feb 13 15:26:17.899650 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 15:26:17.899657 kernel: Key type asymmetric registered Feb 13 15:26:17.899664 kernel: Asymmetric key parser 'x509' registered Feb 13 15:26:17.899672 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Feb 13 15:26:17.899679 kernel: io scheduler mq-deadline registered Feb 13 15:26:17.899686 kernel: io scheduler kyber registered Feb 13 15:26:17.899693 kernel: io scheduler bfq registered Feb 13 15:26:17.899700 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Feb 13 15:26:17.899712 kernel: ACPI: button: Power Button [PWRB] Feb 13 15:26:17.899719 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Feb 13 15:26:17.899794 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Feb 13 15:26:17.899804 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 15:26:17.899813 kernel: thunder_xcv, ver 1.0 Feb 13 15:26:17.899820 kernel: thunder_bgx, ver 1.0 Feb 13 15:26:17.899827 kernel: nicpf, ver 1.0 Feb 13 15:26:17.899834 kernel: nicvf, ver 1.0 Feb 13 15:26:17.899906 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 13 15:26:17.899972 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:26:17 UTC (1739460377) Feb 13 15:26:17.899983 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 15:26:17.899990 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Feb 13 15:26:17.900000 kernel: watchdog: Delayed init of the lockup detector failed: -19 Feb 13 15:26:17.900008 kernel: watchdog: Hard watchdog permanently disabled Feb 13 15:26:17.900015 kernel: NET: Registered PF_INET6 protocol family Feb 13 15:26:17.900023 kernel: Segment Routing with IPv6 Feb 13 15:26:17.900030 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 15:26:17.900038 kernel: NET: Registered PF_PACKET protocol family Feb 13 15:26:17.900045 kernel: Key type dns_resolver registered Feb 13 15:26:17.900056 kernel: registered taskstats version 1 Feb 13 15:26:17.900065 kernel: Loading compiled-in X.509 certificates Feb 13 15:26:17.900072 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e' Feb 13 15:26:17.900082 kernel: Key type .fscrypt registered Feb 13 15:26:17.900089 kernel: Key type fscrypt-provisioning registered Feb 13 15:26:17.900096 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 15:26:17.900103 kernel: ima: Allocated hash algorithm: sha1 Feb 13 15:26:17.900111 kernel: ima: No architecture policies found Feb 13 15:26:17.900118 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 13 15:26:17.900125 kernel: clk: Disabling unused clocks Feb 13 15:26:17.900132 kernel: Freeing unused kernel memory: 39936K Feb 13 15:26:17.900140 kernel: Run /init as init process Feb 13 15:26:17.900147 kernel: with arguments: Feb 13 15:26:17.900154 kernel: /init Feb 13 15:26:17.900161 kernel: with environment: Feb 13 15:26:17.900168 kernel: HOME=/ Feb 13 15:26:17.900175 kernel: TERM=linux Feb 13 15:26:17.900181 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 15:26:17.900190 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:26:17.900200 systemd[1]: Detected virtualization kvm. Feb 13 15:26:17.900208 systemd[1]: Detected architecture arm64. Feb 13 15:26:17.900215 systemd[1]: Running in initrd. Feb 13 15:26:17.900222 systemd[1]: No hostname configured, using default hostname. Feb 13 15:26:17.900229 systemd[1]: Hostname set to . Feb 13 15:26:17.900237 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:26:17.900245 systemd[1]: Queued start job for default target initrd.target. Feb 13 15:26:17.900253 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:26:17.900263 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:26:17.900272 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 15:26:17.900280 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:26:17.900288 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 15:26:17.900296 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 15:26:17.900305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 15:26:17.900314 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 15:26:17.900324 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:26:17.900332 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:26:17.900341 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:26:17.900349 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:26:17.900357 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:26:17.900365 systemd[1]: Reached target timers.target - Timer Units. Feb 13 15:26:17.900372 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:26:17.900380 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:26:17.900388 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 15:26:17.900397 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 15:26:17.900404 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Feb 13 15:26:17.900412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:26:17.900420 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:26:17.900428 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:26:17.900435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 15:26:17.900443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:26:17.900451 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 15:26:17.900460 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 15:26:17.900467 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:26:17.900475 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:26:17.900483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:26:17.900513 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 15:26:17.900522 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:26:17.900530 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 15:26:17.900541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:26:17.900549 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:17.900576 systemd-journald[239]: Collecting audit messages is disabled. Feb 13 15:26:17.900597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:26:17.900605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:26:17.900613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:26:17.900621 systemd-journald[239]: Journal started Feb 13 15:26:17.900643 systemd-journald[239]: Runtime Journal (/run/log/journal/038e05d2898243fc9158d9e3290b4360) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:26:17.892610 systemd-modules-load[240]: Inserted module 'overlay' Feb 13 15:26:17.904515 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:26:17.904549 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 15:26:17.906121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:26:17.908790 systemd-modules-load[240]: Inserted module 'br_netfilter' Feb 13 15:26:17.909532 kernel: Bridge firewalling registered Feb 13 15:26:17.909872 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:26:17.911582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:26:17.917133 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:26:17.920806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:26:17.922225 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:26:17.923976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:26:17.935754 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 15:26:17.937697 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Feb 13 15:26:17.945159 dracut-cmdline[277]: dracut-dracut-053 Feb 13 15:26:17.947862 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a Feb 13 15:26:17.973393 systemd-resolved[280]: Positive Trust Anchors: Feb 13 15:26:17.974164 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:26:17.974198 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:26:17.978867 systemd-resolved[280]: Defaulting to hostname 'linux'. Feb 13 15:26:17.979782 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:26:17.981352 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:26:18.015536 kernel: SCSI subsystem initialized Feb 13 15:26:18.020516 kernel: Loading iSCSI transport class v2.0-870. Feb 13 15:26:18.027541 kernel: iscsi: registered transport (tcp) Feb 13 15:26:18.040523 kernel: iscsi: registered transport (qla4xxx) Feb 13 15:26:18.040540 kernel: QLogic iSCSI HBA Driver Feb 13 15:26:18.085391 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 15:26:18.097656 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 15:26:18.114807 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 15:26:18.115861 kernel: device-mapper: uevent: version 1.0.3 Feb 13 15:26:18.115875 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 15:26:18.167517 kernel: raid6: neonx8 gen() 15789 MB/s Feb 13 15:26:18.184507 kernel: raid6: neonx4 gen() 15812 MB/s Feb 13 15:26:18.201506 kernel: raid6: neonx2 gen() 13214 MB/s Feb 13 15:26:18.218510 kernel: raid6: neonx1 gen() 10444 MB/s Feb 13 15:26:18.235513 kernel: raid6: int64x8 gen() 6786 MB/s Feb 13 15:26:18.252505 kernel: raid6: int64x4 gen() 7347 MB/s Feb 13 15:26:18.269507 kernel: raid6: int64x2 gen() 6114 MB/s Feb 13 15:26:18.286504 kernel: raid6: int64x1 gen() 5058 MB/s Feb 13 15:26:18.286524 kernel: raid6: using algorithm neonx4 gen() 15812 MB/s Feb 13 15:26:18.303514 kernel: raid6: .... xor() 12395 MB/s, rmw enabled Feb 13 15:26:18.303532 kernel: raid6: using neon recovery algorithm Feb 13 15:26:18.308865 kernel: xor: measuring software checksum speed Feb 13 15:26:18.308880 kernel: 8regs : 21562 MB/sec Feb 13 15:26:18.308897 kernel: 32regs : 21704 MB/sec Feb 13 15:26:18.309905 kernel: arm64_neon : 27936 MB/sec Feb 13 15:26:18.309927 kernel: xor: using function: arm64_neon (27936 MB/sec) Feb 13 15:26:18.362528 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 15:26:18.380290 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Feb 13 15:26:18.396668 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:26:18.407402 systemd-udevd[463]: Using default interface naming scheme 'v255'. Feb 13 15:26:18.410523 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:26:18.432670 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 15:26:18.443425 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Feb 13 15:26:18.468061 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:26:18.478689 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:26:18.517022 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:26:18.526738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 15:26:18.539692 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 15:26:18.540874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:26:18.542252 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:26:18.544057 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:26:18.556541 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Feb 13 15:26:18.567662 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 15:26:18.569891 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 15:26:18.569918 kernel: GPT:9289727 != 19775487 Feb 13 15:26:18.569927 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 15:26:18.569936 kernel: GPT:9289727 != 19775487 Feb 13 15:26:18.569945 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 15:26:18.569954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:26:18.557622 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 15:26:18.567974 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:26:18.572011 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:26:18.572115 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:26:18.576192 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:26:18.576970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:26:18.577113 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:18.578308 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:26:18.590513 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (518) Feb 13 15:26:18.593518 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513) Feb 13 15:26:18.593778 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:26:18.604381 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 15:26:18.606114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:18.617910 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Feb 13 15:26:18.621411 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 15:26:18.622357 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 15:26:18.627277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:26:18.635699 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 15:26:18.637686 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 15:26:18.641465 disk-uuid[551]: Primary Header is updated. Feb 13 15:26:18.641465 disk-uuid[551]: Secondary Entries is updated. Feb 13 15:26:18.641465 disk-uuid[551]: Secondary Header is updated. Feb 13 15:26:18.649527 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:26:18.654102 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:26:18.659633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:26:19.654521 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 15:26:19.655292 disk-uuid[552]: The operation has completed successfully. Feb 13 15:26:19.678396 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 15:26:19.678524 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 15:26:19.724730 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 15:26:19.727841 sh[571]: Success Feb 13 15:26:19.745520 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 13 15:26:19.781551 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 15:26:19.792645 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 15:26:19.801531 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 15:26:19.804742 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f Feb 13 15:26:19.804779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:26:19.806733 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 15:26:19.806751 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 15:26:19.807735 kernel: BTRFS info (device dm-0): using free space tree Feb 13 15:26:19.811049 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 15:26:19.812169 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 15:26:19.828741 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 15:26:19.830296 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 15:26:19.840166 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:26:19.840223 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:26:19.840740 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:26:19.843553 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:26:19.851113 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Feb 13 15:26:19.853880 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:26:19.858468 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 15:26:19.866672 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 15:26:19.958230 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:26:19.969564 ignition[653]: Ignition 2.20.0 Feb 13 15:26:19.969575 ignition[653]: Stage: fetch-offline Feb 13 15:26:19.971695 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 15:26:19.969611 ignition[653]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:19.969619 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:19.969768 ignition[653]: parsed url from cmdline: "" Feb 13 15:26:19.969772 ignition[653]: no config URL provided Feb 13 15:26:19.969776 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 15:26:19.969783 ignition[653]: no config at "/usr/lib/ignition/user.ign" Feb 13 15:26:19.969808 ignition[653]: op(1): [started] loading QEMU firmware config module Feb 13 15:26:19.969813 ignition[653]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 15:26:19.975288 ignition[653]: op(1): [finished] loading QEMU firmware config module Feb 13 15:26:19.984103 ignition[653]: parsing config with SHA512: 371f808a71d7938af50f22269ad1031dfb3b0923ed11e572ab3165575eb84882c48afc8c3e54e67cda11c2929932bb584ac577548d3bf02ae446a6e8801c8df2 Feb 13 15:26:19.987659 unknown[653]: fetched base config from "system" Feb 13 15:26:19.987680 unknown[653]: fetched user config from "qemu" Feb 13 15:26:19.987958 ignition[653]: fetch-offline: fetch-offline passed Feb 13 15:26:19.988037 ignition[653]: Ignition finished successfully Feb 13 15:26:19.990359 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:26:19.999790 systemd-networkd[762]: lo: Link UP Feb 13 15:26:19.999801 systemd-networkd[762]: lo: Gained carrier Feb 13 15:26:20.000669 systemd-networkd[762]: Enumeration completed Feb 13 15:26:20.000766 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:26:20.001057 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:26:20.001061 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:26:20.002167 systemd[1]: Reached target network.target - Network. Feb 13 15:26:20.002356 systemd-networkd[762]: eth0: Link UP Feb 13 15:26:20.002359 systemd-networkd[762]: eth0: Gained carrier Feb 13 15:26:20.002366 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:26:20.003632 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 15:26:20.015666 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Feb 13 15:26:20.025573 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:26:20.028303 ignition[767]: Ignition 2.20.0 Feb 13 15:26:20.028315 ignition[767]: Stage: kargs Feb 13 15:26:20.028488 ignition[767]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:20.028584 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:20.029244 ignition[767]: kargs: kargs passed Feb 13 15:26:20.029282 ignition[767]: Ignition finished successfully Feb 13 15:26:20.031558 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 15:26:20.037652 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 15:26:20.048665 ignition[777]: Ignition 2.20.0 Feb 13 15:26:20.048675 ignition[777]: Stage: disks Feb 13 15:26:20.048845 ignition[777]: no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:20.048854 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:20.049524 ignition[777]: disks: disks passed Feb 13 15:26:20.049569 ignition[777]: Ignition finished successfully Feb 13 15:26:20.053533 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 15:26:20.054758 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 15:26:20.055863 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 15:26:20.057336 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:26:20.058789 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:26:20.062635 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:26:20.082691 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 15:26:20.093250 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 15:26:20.098523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 15:26:20.111602 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 15:26:20.162308 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 15:26:20.163657 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none. Feb 13 15:26:20.163534 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 15:26:20.177577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:26:20.180031 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 15:26:20.180975 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 15:26:20.181012 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 15:26:20.181033 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:26:20.188044 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 15:26:20.189965 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Feb 13 15:26:20.193844 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (796) Feb 13 15:26:20.193879 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:26:20.193895 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:26:20.193904 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:26:20.197505 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:26:20.198578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 15:26:20.234973 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 15:26:20.238981 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory Feb 13 15:26:20.242279 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 15:26:20.245967 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 15:26:20.320777 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 15:26:20.333601 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 15:26:20.337660 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 15:26:20.343549 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:26:20.369785 ignition[909]: INFO : Ignition 2.20.0 Feb 13 15:26:20.369785 ignition[909]: INFO : Stage: mount Feb 13 15:26:20.371040 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:20.371040 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:20.371040 ignition[909]: INFO : mount: mount passed Feb 13 15:26:20.371040 ignition[909]: INFO : Ignition finished successfully Feb 13 15:26:20.372435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 15:26:20.375950 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 15:26:20.378756 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 15:26:20.803165 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 15:26:20.817741 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 15:26:20.824509 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (923) Feb 13 15:26:20.826828 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74 Feb 13 15:26:20.826855 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Feb 13 15:26:20.826865 kernel: BTRFS info (device vda6): using free space tree Feb 13 15:26:20.830521 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 15:26:20.831601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 15:26:20.851770 ignition[940]: INFO : Ignition 2.20.0 Feb 13 15:26:20.853591 ignition[940]: INFO : Stage: files Feb 13 15:26:20.853591 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:20.853591 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:20.855913 ignition[940]: DEBUG : files: compiled without relabeling support, skipping Feb 13 15:26:20.855913 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 15:26:20.855913 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 15:26:20.858924 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 15:26:20.858924 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 15:26:20.858924 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 15:26:20.858924 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Feb 13 15:26:20.858924 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 15:26:20.857408 unknown[940]: wrote ssh authorized keys file for user: core Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:26:20.865563 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Feb 13 15:26:21.157332 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 13 15:26:21.194710 systemd-networkd[762]: eth0: Gained IPv6LL Feb 13 15:26:21.405572 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Feb 13 15:26:21.405572 ignition[940]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Feb 13 15:26:21.412875 ignition[940]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:26:21.414955 ignition[940]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 15:26:21.414955 ignition[940]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Feb 13 15:26:21.414955 ignition[940]: INFO : files: op(9): [started] setting preset to disabled for 
"coreos-metadata.service" Feb 13 15:26:21.448269 ignition[940]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:26:21.454275 ignition[940]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 15:26:21.456754 ignition[940]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 15:26:21.456754 ignition[940]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:26:21.456754 ignition[940]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 15:26:21.456754 ignition[940]: INFO : files: files passed Feb 13 15:26:21.456754 ignition[940]: INFO : Ignition finished successfully Feb 13 15:26:21.457686 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 15:26:21.470039 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 15:26:21.472579 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 15:26:21.474104 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 15:26:21.474205 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 15:26:21.481047 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 15:26:21.483298 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:26:21.483298 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:26:21.486742 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 15:26:21.489370 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:26:21.491158 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 15:26:21.505679 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 15:26:21.528358 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 15:26:21.528482 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 15:26:21.530582 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 15:26:21.532204 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 15:26:21.534105 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 15:26:21.535993 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 15:26:21.553212 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 15:26:21.563693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 15:26:21.573479 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:26:21.575867 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:26:21.577162 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 15:26:21.579018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 15:26:21.579157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Feb 13 15:26:21.581937 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 15:26:21.584142 systemd[1]: Stopped target basic.target - Basic System. Feb 13 15:26:21.585783 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 15:26:21.587503 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 15:26:21.589624 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 15:26:21.591794 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 15:26:21.593740 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 15:26:21.595759 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 15:26:21.597821 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 15:26:21.599614 systemd[1]: Stopped target swap.target - Swaps. Feb 13 15:26:21.601260 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 15:26:21.601409 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 15:26:21.603771 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:26:21.605777 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:26:21.607725 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 15:26:21.608563 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:26:21.609839 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 15:26:21.609964 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 15:26:21.612776 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 15:26:21.612896 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 15:26:21.614922 systemd[1]: Stopped target paths.target - Path Units. Feb 13 15:26:21.616351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 15:26:21.619561 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:26:21.621697 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 15:26:21.623827 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 15:26:21.625393 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 15:26:21.625508 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 15:26:21.627100 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 15:26:21.627183 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 15:26:21.628765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 15:26:21.628879 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 15:26:21.630663 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 15:26:21.630766 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 15:26:21.645704 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 15:26:21.647410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 15:26:21.648378 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 15:26:21.648531 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:26:21.650591 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 13 15:26:21.650700 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 15:26:21.656480 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 15:26:21.657481 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 15:26:21.663249 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 15:26:21.665565 ignition[997]: INFO : Ignition 2.20.0 Feb 13 15:26:21.665565 ignition[997]: INFO : Stage: umount Feb 13 15:26:21.667425 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 15:26:21.667425 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 15:26:21.667425 ignition[997]: INFO : umount: umount passed Feb 13 15:26:21.667425 ignition[997]: INFO : Ignition finished successfully Feb 13 15:26:21.667261 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 15:26:21.667365 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 15:26:21.668643 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 15:26:21.668729 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 15:26:21.671033 systemd[1]: Stopped target network.target - Network. Feb 13 15:26:21.671996 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 15:26:21.672082 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 15:26:21.673837 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 15:26:21.673890 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 15:26:21.675220 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 15:26:21.675266 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 15:26:21.676824 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 15:26:21.676868 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 15:26:21.678662 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 15:26:21.678707 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 15:26:21.680486 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 15:26:21.681975 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 15:26:21.689224 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 15:26:21.689342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 15:26:21.691551 systemd-networkd[762]: eth0: DHCPv6 lease lost Feb 13 15:26:21.692204 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 15:26:21.692274 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:26:21.694265 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 15:26:21.694378 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 15:26:21.698625 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 15:26:21.698695 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:26:21.712631 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 15:26:21.713589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 15:26:21.713665 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 15:26:21.715512 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 15:26:21.715558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:26:21.717354 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 15:26:21.717398 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 15:26:21.719378 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:26:21.728687 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 15:26:21.729531 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 15:26:21.730657 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 15:26:21.730783 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:26:21.733041 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 15:26:21.733108 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 15:26:21.734296 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 15:26:21.734331 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:26:21.736370 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 15:26:21.736420 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 15:26:21.739044 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 15:26:21.739099 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 15:26:21.741860 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 15:26:21.741910 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 15:26:21.756682 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 15:26:21.757534 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 15:26:21.757596 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:26:21.759631 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Feb 13 15:26:21.759675 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:26:21.761470 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 15:26:21.761521 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:26:21.763595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 15:26:21.763644 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:21.765900 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 15:26:21.765989 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 15:26:21.768036 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 15:26:21.770024 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 15:26:21.780145 systemd[1]: Switching root. Feb 13 15:26:21.809543 systemd-journald[239]: Journal stopped Feb 13 15:26:22.545069 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
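The journal timestamps above are enough to measure how long the initrd teardown took: Ignition reports "finished successfully" at 15:26:21.456754 and systemd logs "Switching root." at 15:26:21.780145. A minimal sketch using only the timestamp format shown in this log (the prefix carries no year, so only the delta is meaningful):

    # Compute the gap between two journal timestamps taken verbatim from the
    # log above ("Ignition finished successfully" and "Switching root.").
    from datetime import datetime

    FMT = "%b %d %H:%M:%S.%f"
    ignition_done = datetime.strptime("Feb 13 15:26:21.456754", FMT)
    switch_root   = datetime.strptime("Feb 13 15:26:21.780145", FMT)
    print(switch_root - ignition_done)   # 0:00:00.323391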
Feb 13 15:26:22.545128 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 15:26:22.545140 kernel: SELinux: policy capability open_perms=1 Feb 13 15:26:22.545150 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 15:26:22.545160 kernel: SELinux: policy capability always_check_network=0 Feb 13 15:26:22.545170 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 15:26:22.545180 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 15:26:22.545190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 15:26:22.545219 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 15:26:22.545228 kernel: audit: type=1403 audit(1739460381.942:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 15:26:22.545240 systemd[1]: Successfully loaded SELinux policy in 34.482ms. Feb 13 15:26:22.545255 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.361ms. Feb 13 15:26:22.545267 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 15:26:22.545279 systemd[1]: Detected virtualization kvm. Feb 13 15:26:22.545289 systemd[1]: Detected architecture arm64. Feb 13 15:26:22.545299 systemd[1]: Detected first boot. Feb 13 15:26:22.545311 systemd[1]: Initializing machine ID from VM UUID. Feb 13 15:26:22.545323 zram_generator::config[1042]: No configuration found. Feb 13 15:26:22.545334 systemd[1]: Populated /etc with preset unit settings. Feb 13 15:26:22.545345 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 15:26:22.545355 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 15:26:22.545367 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 15:26:22.545377 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 15:26:22.545388 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 15:26:22.545398 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 15:26:22.545408 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 15:26:22.545419 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 15:26:22.545429 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 15:26:22.545439 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 15:26:22.545449 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 15:26:22.545469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 15:26:22.545482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 15:26:22.545682 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 15:26:22.545706 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 15:26:22.545720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Feb 13 15:26:22.545731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 15:26:22.545741 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Feb 13 15:26:22.545751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 15:26:22.545762 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 15:26:22.545778 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 15:26:22.545842 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 15:26:22.545859 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 15:26:22.545870 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 15:26:22.545881 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 15:26:22.545892 systemd[1]: Reached target slices.target - Slice Units. Feb 13 15:26:22.545902 systemd[1]: Reached target swap.target - Swaps. Feb 13 15:26:22.545957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 15:26:22.545976 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 15:26:22.545988 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 15:26:22.545999 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 15:26:22.546010 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 15:26:22.546020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 15:26:22.546077 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 15:26:22.546092 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 15:26:22.546104 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 15:26:22.546116 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 15:26:22.546129 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 15:26:22.546140 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 15:26:22.546151 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 15:26:22.546161 systemd[1]: Reached target machines.target - Containers. Feb 13 15:26:22.546174 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 15:26:22.546185 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:26:22.546198 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 15:26:22.546211 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 15:26:22.546228 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:26:22.546241 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:26:22.546252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:26:22.546262 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 15:26:22.546272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Feb 13 15:26:22.546284 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 15:26:22.546298 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 15:26:22.546308 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 15:26:22.546319 kernel: fuse: init (API version 7.39) Feb 13 15:26:22.546330 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 15:26:22.546341 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 15:26:22.546351 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 15:26:22.546363 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 15:26:22.546373 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 15:26:22.546383 kernel: loop: module loaded Feb 13 15:26:22.546393 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 15:26:22.546404 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 15:26:22.546414 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 15:26:22.546426 systemd[1]: Stopped verity-setup.service. Feb 13 15:26:22.546437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 15:26:22.546447 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 15:26:22.546463 kernel: ACPI: bus type drm_connector registered Feb 13 15:26:22.546476 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 15:26:22.546488 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 15:26:22.546540 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 15:26:22.546552 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 15:26:22.546587 systemd-journald[1106]: Collecting audit messages is disabled. Feb 13 15:26:22.546610 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 15:26:22.546621 systemd-journald[1106]: Journal started Feb 13 15:26:22.546649 systemd-journald[1106]: Runtime Journal (/run/log/journal/038e05d2898243fc9158d9e3290b4360) is 5.9M, max 47.3M, 41.4M free. Feb 13 15:26:22.337408 systemd[1]: Queued start job for default target multi-user.target. Feb 13 15:26:22.364185 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 15:26:22.364600 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 15:26:22.548550 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 15:26:22.550411 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 15:26:22.550591 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 15:26:22.551695 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:26:22.551829 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:26:22.553354 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:26:22.553529 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:26:22.554619 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:26:22.555555 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:26:22.556875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Feb 13 15:26:22.557966 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 15:26:22.558110 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 15:26:22.559194 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:26:22.559332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:26:22.560484 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 15:26:22.561868 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 15:26:22.563069 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 15:26:22.575191 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 15:26:22.582631 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 15:26:22.584533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 15:26:22.585319 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 15:26:22.585360 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 15:26:22.587216 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 15:26:22.589167 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 15:26:22.591065 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 15:26:22.591978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:26:22.595780 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 15:26:22.599219 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 15:26:22.600141 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:26:22.601720 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 15:26:22.602631 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:26:22.603700 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:26:22.609652 systemd-journald[1106]: Time spent on flushing to /var/log/journal/038e05d2898243fc9158d9e3290b4360 is 13.065ms for 842 entries. Feb 13 15:26:22.609652 systemd-journald[1106]: System Journal (/var/log/journal/038e05d2898243fc9158d9e3290b4360) is 8.0M, max 195.6M, 187.6M free. Feb 13 15:26:22.635254 systemd-journald[1106]: Received client request to flush runtime journal. Feb 13 15:26:22.635289 kernel: loop0: detected capacity change from 0 to 194512 Feb 13 15:26:22.609089 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 15:26:22.613802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 15:26:22.616805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 15:26:22.618306 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 15:26:22.619549 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
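systemd-journald reports above that flushing 842 entries to the persistent journal took 13.065 ms, i.e. roughly 15.5 microseconds per entry. A one-line check of that arithmetic:

    # Arithmetic check on the journald flush figures quoted in the log above:
    # 13.065 ms spent flushing 842 entries.
    flush_ms, entries = 13.065, 842
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~15.5 us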
Feb 13 15:26:22.620700 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 15:26:22.622139 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 15:26:22.626705 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 15:26:22.645217 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 15:26:22.649594 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 15:26:22.651896 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 15:26:22.654158 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 15:26:22.655618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:26:22.664003 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 15:26:22.664744 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 15:26:22.667145 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 15:26:22.673058 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Feb 13 15:26:22.673543 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Feb 13 15:26:22.678562 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 15:26:22.682592 kernel: loop1: detected capacity change from 0 to 113552 Feb 13 15:26:22.687693 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 15:26:22.711133 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 15:26:22.714518 kernel: loop2: detected capacity change from 0 to 116784 Feb 13 15:26:22.723643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 15:26:22.733511 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:26:22.733531 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Feb 13 15:26:22.738099 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 15:26:22.749520 kernel: loop3: detected capacity change from 0 to 194512 Feb 13 15:26:22.756553 kernel: loop4: detected capacity change from 0 to 113552 Feb 13 15:26:22.763534 kernel: loop5: detected capacity change from 0 to 116784 Feb 13 15:26:22.768251 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 15:26:22.768734 (sd-merge)[1181]: Merged extensions into '/usr'. Feb 13 15:26:22.772264 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 15:26:22.772279 systemd[1]: Reloading... Feb 13 15:26:22.824524 zram_generator::config[1206]: No configuration found. Feb 13 15:26:22.899814 ldconfig[1148]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 15:26:22.907619 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:26:22.942678 systemd[1]: Reloading finished in 170 ms. Feb 13 15:26:22.971620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Feb 13 15:26:22.974633 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 15:26:22.988779 systemd[1]: Starting ensure-sysext.service... Feb 13 15:26:22.990572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 15:26:23.001489 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Feb 13 15:26:23.001536 systemd[1]: Reloading... Feb 13 15:26:23.016567 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 15:26:23.016771 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 15:26:23.017395 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 15:26:23.017632 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 15:26:23.017680 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Feb 13 15:26:23.021070 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:26:23.021084 systemd-tmpfiles[1243]: Skipping /boot Feb 13 15:26:23.033649 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 15:26:23.033665 systemd-tmpfiles[1243]: Skipping /boot Feb 13 15:26:23.047558 zram_generator::config[1270]: No configuration found. Feb 13 15:26:23.128678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:26:23.163930 systemd[1]: Reloading finished in 162 ms. Feb 13 15:26:23.176626 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 15:26:23.190907 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 15:26:23.199168 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:26:23.201406 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 15:26:23.203488 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 15:26:23.208852 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 15:26:23.214247 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 15:26:23.216709 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 15:26:23.222123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:26:23.226850 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:26:23.231716 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:26:23.240347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:26:23.241737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:26:23.243541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 15:26:23.245139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:26:23.245328 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Feb 13 15:26:23.246833 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:26:23.246955 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:26:23.249313 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:26:23.249441 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:26:23.256882 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Feb 13 15:26:23.259970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:26:23.267737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:26:23.269678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:26:23.273765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:26:23.274816 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:26:23.281758 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 15:26:23.282582 augenrules[1342]: No rules Feb 13 15:26:23.285060 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 15:26:23.286681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 15:26:23.288095 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:26:23.288275 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:26:23.290736 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 15:26:23.292641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 15:26:23.293868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:26:23.293998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:26:23.295278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:26:23.295408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:26:23.297347 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:26:23.297499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:26:23.298928 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 15:26:23.322790 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:26:23.324038 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 15:26:23.325811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 15:26:23.328673 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 15:26:23.331733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 15:26:23.334636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 15:26:23.336460 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 15:26:23.340875 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Feb 13 15:26:23.343506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1352) Feb 13 15:26:23.344660 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 15:26:23.344966 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 15:26:23.346277 systemd[1]: Finished ensure-sysext.service. Feb 13 15:26:23.347240 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 15:26:23.347394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 15:26:23.348532 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 15:26:23.348808 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 15:26:23.354832 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 15:26:23.355333 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 15:26:23.361982 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Feb 13 15:26:23.366408 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 15:26:23.369329 augenrules[1374]: /sbin/augenrules: No change Feb 13 15:26:23.375160 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 15:26:23.383545 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 15:26:23.383735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 15:26:23.389941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 15:26:23.393586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 15:26:23.398318 augenrules[1409]: No rules Feb 13 15:26:23.397957 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 15:26:23.402867 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:26:23.403619 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:26:23.404967 systemd-resolved[1309]: Positive Trust Anchors: Feb 13 15:26:23.404985 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 15:26:23.405016 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 15:26:23.416049 systemd-resolved[1309]: Defaulting to hostname 'linux'. Feb 13 15:26:23.419006 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 15:26:23.419977 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 15:26:23.432870 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
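The positive trust anchor that systemd-resolved logs above is the DNSSEC DS record for the root zone. Its four fields are the key tag, algorithm, digest type and digest; the sketch below simply splits the record string copied from the log, with the standard meanings of the numeric code points noted in comments (20326 is the key tag of the KSK-2017 root key, algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256).

    # Minimal sketch: split the DS record that systemd-resolved logs as its
    # positive trust anchor into its standard fields.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    owner, _cls, _type, key_tag, algorithm, digest_type, digest = ds.split()
    print("owner:      ", owner)          # "." = the root zone
    print("key tag:    ", key_tag)        # 20326 (KSK-2017)
    print("algorithm:  ", algorithm)      # 8 = RSA/SHA-256
    print("digest type:", digest_type)    # 2 = SHA-256
    print("digest:     ", digest)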
Feb 13 15:26:23.452312 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 15:26:23.453639 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 15:26:23.460429 systemd-networkd[1390]: lo: Link UP Feb 13 15:26:23.460437 systemd-networkd[1390]: lo: Gained carrier Feb 13 15:26:23.463155 systemd-networkd[1390]: Enumeration completed Feb 13 15:26:23.463273 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 15:26:23.464854 systemd[1]: Reached target network.target - Network. Feb 13 15:26:23.467540 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:26:23.467549 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 15:26:23.468201 systemd-networkd[1390]: eth0: Link UP Feb 13 15:26:23.468211 systemd-networkd[1390]: eth0: Gained carrier Feb 13 15:26:23.468225 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 15:26:23.472745 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 15:26:23.482662 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 15:26:23.483413 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Feb 13 15:26:23.483827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 15:26:23.484002 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 15:26:23.484059 systemd-timesyncd[1406]: Initial clock synchronization to Thu 2025-02-13 15:26:23.334966 UTC. Feb 13 15:26:23.503803 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 15:26:23.515706 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 15:26:23.538758 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:26:23.543564 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 15:26:23.571022 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 15:26:23.573712 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 15:26:23.574562 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 15:26:23.575367 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 15:26:23.576328 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 15:26:23.577518 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 15:26:23.578384 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 15:26:23.579474 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 15:26:23.580390 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 15:26:23.580444 systemd[1]: Reached target paths.target - Path Units. Feb 13 15:26:23.581158 systemd[1]: Reached target timers.target - Timer Units. 
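The DHCPv4 lease above puts eth0 at 10.0.0.59/16 with gateway 10.0.0.1, and systemd-timesyncd then reaches an NTP server at that same gateway address. A quick sanity check of those addresses with the standard library's ipaddress module:

    # Sanity-check the DHCPv4 lease values reported by systemd-networkd above.
    import ipaddress

    iface = ipaddress.ip_interface("10.0.0.59/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                  # 10.0.0.0/16
    print(gateway in iface.network)       # True: the gateway is on-link
    print(iface.network.num_addresses)    # 65536 addresses in a /16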
Feb 13 15:26:23.583681 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 15:26:23.586062 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 15:26:23.597745 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 15:26:23.601052 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 15:26:23.602841 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 15:26:23.603875 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 15:26:23.604690 systemd[1]: Reached target basic.target - Basic System. Feb 13 15:26:23.605625 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:26:23.605657 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 15:26:23.606673 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 15:26:23.608507 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 15:26:23.609911 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 15:26:23.612633 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 15:26:23.615795 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 15:26:23.617209 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 15:26:23.620751 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 15:26:23.625238 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 15:26:23.630820 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 15:26:23.634424 jq[1441]: false Feb 13 15:26:23.635020 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 15:26:23.641965 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 15:26:23.642467 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 15:26:23.643713 systemd[1]: Starting update-engine.service - Update Engine... 
Feb 13 15:26:23.647690 extend-filesystems[1442]: Found loop3 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found loop4 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found loop5 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda1 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda2 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda3 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found usr Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda4 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda6 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda7 Feb 13 15:26:23.660655 extend-filesystems[1442]: Found vda9 Feb 13 15:26:23.660655 extend-filesystems[1442]: Checking size of /dev/vda9 Feb 13 15:26:23.660655 extend-filesystems[1442]: Resized partition /dev/vda9 Feb 13 15:26:23.700029 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 15:26:23.700061 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1354) Feb 13 15:26:23.648615 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 15:26:23.700443 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Feb 13 15:26:23.720464 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 15:26:23.720665 update_engine[1450]: I20250213 15:26:23.717571 1450 main.cc:92] Flatcar Update Engine starting Feb 13 15:26:23.653190 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 15:26:23.721043 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 15:26:23.721043 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 15:26:23.721043 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 15:26:23.703462 dbus-daemon[1440]: [system] SELinux support is enabled Feb 13 15:26:23.728994 jq[1452]: true Feb 13 15:26:23.657100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 15:26:23.729213 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Feb 13 15:26:23.733695 update_engine[1450]: I20250213 15:26:23.725157 1450 update_check_scheduler.cc:74] Next update check in 8m5s Feb 13 15:26:23.658145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 15:26:23.734070 jq[1467]: true Feb 13 15:26:23.658434 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 15:26:23.658778 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 15:26:23.666898 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 15:26:23.667125 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 15:26:23.687286 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 15:26:23.703831 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 15:26:23.713203 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 15:26:23.715445 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Feb 13 15:26:23.716820 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 15:26:23.716889 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 15:26:23.722686 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 15:26:23.722874 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 15:26:23.725387 systemd[1]: Started update-engine.service - Update Engine. Feb 13 15:26:23.736802 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 15:26:23.748150 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (Power Button) Feb 13 15:26:23.748682 systemd-logind[1448]: New seat seat0. Feb 13 15:26:23.753534 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 15:26:23.783591 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Feb 13 15:26:23.785228 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 15:26:23.787775 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 15:26:23.803865 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 15:26:23.902976 containerd[1470]: time="2025-02-13T15:26:23.902877360Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 15:26:23.929260 containerd[1470]: time="2025-02-13T15:26:23.929196080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.930840 containerd[1470]: time="2025-02-13T15:26:23.930790000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:23.930840 containerd[1470]: time="2025-02-13T15:26:23.930828040Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 15:26:23.930885 containerd[1470]: time="2025-02-13T15:26:23.930847200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 15:26:23.931038 containerd[1470]: time="2025-02-13T15:26:23.931007320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 15:26:23.931038 containerd[1470]: time="2025-02-13T15:26:23.931030720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931099 containerd[1470]: time="2025-02-13T15:26:23.931084880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931120 containerd[1470]: time="2025-02-13T15:26:23.931101760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931290 containerd[1470]: time="2025-02-13T15:26:23.931262240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931290 containerd[1470]: time="2025-02-13T15:26:23.931282400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931337 containerd[1470]: time="2025-02-13T15:26:23.931295600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931337 containerd[1470]: time="2025-02-13T15:26:23.931304760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931387 containerd[1470]: time="2025-02-13T15:26:23.931373080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931621 containerd[1470]: time="2025-02-13T15:26:23.931593280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931719 containerd[1470]: time="2025-02-13T15:26:23.931696440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 15:26:23.931719 containerd[1470]: time="2025-02-13T15:26:23.931714000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 15:26:23.931799 containerd[1470]: time="2025-02-13T15:26:23.931786880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 15:26:23.931843 containerd[1470]: time="2025-02-13T15:26:23.931831400Z" level=info msg="metadata content store policy set" policy=shared Feb 13 15:26:23.936005 containerd[1470]: time="2025-02-13T15:26:23.935977760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 15:26:23.936080 containerd[1470]: time="2025-02-13T15:26:23.936021640Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 15:26:23.936080 containerd[1470]: time="2025-02-13T15:26:23.936037240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 15:26:23.936080 containerd[1470]: time="2025-02-13T15:26:23.936052920Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 15:26:23.936080 containerd[1470]: time="2025-02-13T15:26:23.936068320Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 15:26:23.936219 containerd[1470]: time="2025-02-13T15:26:23.936200240Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936485640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936643560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936660960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936676080Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936691600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936704720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936717360Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936730560Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936744680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936760360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936773320Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936784280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936804360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937290 containerd[1470]: time="2025-02-13T15:26:23.936817480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936829560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936841640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936853000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936867640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936878880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936890720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936904320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936917800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936929240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936944080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936955560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936978080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.936998560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.937011680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.937592 containerd[1470]: time="2025-02-13T15:26:23.937027360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 15:26:23.938303 containerd[1470]: time="2025-02-13T15:26:23.938268840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 15:26:23.938392 containerd[1470]: time="2025-02-13T15:26:23.938377160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 15:26:23.938444 containerd[1470]: time="2025-02-13T15:26:23.938432040Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 15:26:23.938531 containerd[1470]: time="2025-02-13T15:26:23.938514120Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 15:26:23.938584 containerd[1470]: time="2025-02-13T15:26:23.938572040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 15:26:23.938656 containerd[1470]: time="2025-02-13T15:26:23.938643040Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 15:26:23.938706 containerd[1470]: time="2025-02-13T15:26:23.938696360Z" level=info msg="NRI interface is disabled by configuration." Feb 13 15:26:23.938783 containerd[1470]: time="2025-02-13T15:26:23.938770520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 15:26:23.939187 containerd[1470]: time="2025-02-13T15:26:23.939137520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 15:26:23.939343 containerd[1470]: time="2025-02-13T15:26:23.939327080Z" level=info msg="Connect containerd service" Feb 13 15:26:23.939437 containerd[1470]: time="2025-02-13T15:26:23.939422760Z" level=info msg="using legacy CRI server" Feb 13 15:26:23.939516 containerd[1470]: time="2025-02-13T15:26:23.939488720Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 15:26:23.939978 containerd[1470]: time="2025-02-13T15:26:23.939777280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 15:26:23.940877 containerd[1470]: time="2025-02-13T15:26:23.940845080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:26:23.941241 
containerd[1470]: time="2025-02-13T15:26:23.941145040Z" level=info msg="Start subscribing containerd event" Feb 13 15:26:23.941241 containerd[1470]: time="2025-02-13T15:26:23.941216280Z" level=info msg="Start recovering state" Feb 13 15:26:23.941365 containerd[1470]: time="2025-02-13T15:26:23.941296560Z" level=info msg="Start event monitor" Feb 13 15:26:23.941365 containerd[1470]: time="2025-02-13T15:26:23.941337560Z" level=info msg="Start snapshots syncer" Feb 13 15:26:23.941365 containerd[1470]: time="2025-02-13T15:26:23.941348560Z" level=info msg="Start cni network conf syncer for default" Feb 13 15:26:23.941365 containerd[1470]: time="2025-02-13T15:26:23.941356120Z" level=info msg="Start streaming server" Feb 13 15:26:23.941708 containerd[1470]: time="2025-02-13T15:26:23.941685000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 15:26:23.941820 containerd[1470]: time="2025-02-13T15:26:23.941805480Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 15:26:23.941924 containerd[1470]: time="2025-02-13T15:26:23.941911440Z" level=info msg="containerd successfully booted in 0.041549s" Feb 13 15:26:23.941997 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 15:26:25.037634 sshd_keygen[1468]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 15:26:25.056566 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 15:26:25.065811 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 15:26:25.070752 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 15:26:25.071574 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 15:26:25.074456 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 15:26:25.084303 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 15:26:25.087220 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 15:26:25.089096 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Feb 13 15:26:25.090142 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 15:26:25.226628 systemd-networkd[1390]: eth0: Gained IPv6LL Feb 13 15:26:25.229238 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 15:26:25.230797 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 15:26:25.241853 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:26:25.244303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:25.246370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 15:26:25.264579 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:26:25.264791 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:26:25.266399 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 15:26:25.272630 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 15:26:25.721292 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:25.722628 systemd[1]: Reached target multi-user.target - Multi-User System. 
Feb 13 15:26:25.726121 (kubelet)[1545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 15:26:25.728074 systemd[1]: Startup finished in 536ms (kernel) + 4.234s (initrd) + 3.825s (userspace) = 8.596s. Feb 13 15:26:25.740425 agetty[1521]: failed to open credentials directory Feb 13 15:26:25.740456 agetty[1522]: failed to open credentials directory Feb 13 15:26:26.252420 kubelet[1545]: E0213 15:26:26.252334 1545 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 15:26:26.255035 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 15:26:26.255191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 15:26:30.268478 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 15:26:30.270034 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:51170.service - OpenSSH per-connection server daemon (10.0.0.1:51170). Feb 13 15:26:30.352768 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 51170 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:30.355140 sshd-session[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:30.386952 systemd-logind[1448]: New session 1 of user core. Feb 13 15:26:30.388047 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 15:26:30.399846 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 15:26:30.414856 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 15:26:30.421097 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 15:26:30.428349 (systemd)[1564]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 15:26:30.513657 systemd[1564]: Queued start job for default target default.target. Feb 13 15:26:30.525559 systemd[1564]: Created slice app.slice - User Application Slice. Feb 13 15:26:30.525608 systemd[1564]: Reached target paths.target - Paths. Feb 13 15:26:30.525621 systemd[1564]: Reached target timers.target - Timers. Feb 13 15:26:30.526998 systemd[1564]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 15:26:30.538269 systemd[1564]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 15:26:30.538403 systemd[1564]: Reached target sockets.target - Sockets. Feb 13 15:26:30.538418 systemd[1564]: Reached target basic.target - Basic System. Feb 13 15:26:30.538462 systemd[1564]: Reached target default.target - Main User Target. Feb 13 15:26:30.538535 systemd[1564]: Startup finished in 102ms. Feb 13 15:26:30.538778 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 15:26:30.540434 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 15:26:30.619112 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:51186.service - OpenSSH per-connection server daemon (10.0.0.1:51186). 
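The kubelet failure above is the expected first-boot state: /var/lib/kubelet/config.yaml has not been provisioned yet (it is normally written by kubeadm or another installer during cluster join), so kubelet.service exits with status 1 until the file appears. A minimal Go sketch of the same existence check, assuming nothing beyond the path taken from the error message:

```go
// check_kubelet_config.go — illustrative only: reproduces the failing check
// reported above by stat-ing the config path the kubelet is started with.
// The path comes from the log; everything else is an assumption.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path from the kubelet error above

	if _, err := os.Stat(path); err != nil {
		// On this first boot the file does not exist yet, so the kubelet
		// exits until a provisioner writes it.
		fmt.Fprintf(os.Stderr, "kubelet config missing: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("kubelet config present:", path)
}
```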
Feb 13 15:26:30.664019 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 51186 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:30.665412 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:30.673373 systemd-logind[1448]: New session 2 of user core. Feb 13 15:26:30.682694 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 15:26:30.735540 sshd[1577]: Connection closed by 10.0.0.1 port 51186 Feb 13 15:26:30.736240 sshd-session[1575]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:30.748175 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:51186.service: Deactivated successfully. Feb 13 15:26:30.750980 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 15:26:30.752626 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Feb 13 15:26:30.754086 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:51190.service - OpenSSH per-connection server daemon (10.0.0.1:51190). Feb 13 15:26:30.755883 systemd-logind[1448]: Removed session 2. Feb 13 15:26:30.807972 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 51190 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:30.809455 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:30.813575 systemd-logind[1448]: New session 3 of user core. Feb 13 15:26:30.827670 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 15:26:30.875429 sshd[1584]: Connection closed by 10.0.0.1 port 51190 Feb 13 15:26:30.875879 sshd-session[1582]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:30.890954 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:51190.service: Deactivated successfully. Feb 13 15:26:30.892576 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 15:26:30.894756 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Feb 13 15:26:30.910828 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:51206.service - OpenSSH per-connection server daemon (10.0.0.1:51206). Feb 13 15:26:30.915298 systemd-logind[1448]: Removed session 3. Feb 13 15:26:30.972640 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 51206 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:30.974032 sshd-session[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:30.977850 systemd-logind[1448]: New session 4 of user core. Feb 13 15:26:30.984689 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 15:26:31.037160 sshd[1591]: Connection closed by 10.0.0.1 port 51206 Feb 13 15:26:31.038341 sshd-session[1589]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:31.050309 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:51206.service: Deactivated successfully. Feb 13 15:26:31.052066 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 15:26:31.054698 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Feb 13 15:26:31.056238 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:51208.service - OpenSSH per-connection server daemon (10.0.0.1:51208). Feb 13 15:26:31.057068 systemd-logind[1448]: Removed session 4. 
Feb 13 15:26:31.103503 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 51208 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:31.104925 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:31.109413 systemd-logind[1448]: New session 5 of user core. Feb 13 15:26:31.121718 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 15:26:31.180228 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 15:26:31.180543 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:31.194588 sudo[1599]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:31.196193 sshd[1598]: Connection closed by 10.0.0.1 port 51208 Feb 13 15:26:31.197011 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:31.209996 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:51208.service: Deactivated successfully. Feb 13 15:26:31.211598 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 15:26:31.213785 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Feb 13 15:26:31.222802 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:51214.service - OpenSSH per-connection server daemon (10.0.0.1:51214). Feb 13 15:26:31.223677 systemd-logind[1448]: Removed session 5. Feb 13 15:26:31.268743 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 51214 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:31.270858 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:31.274700 systemd-logind[1448]: New session 6 of user core. Feb 13 15:26:31.290668 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 15:26:31.342670 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 15:26:31.342953 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:31.346098 sudo[1608]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:31.351002 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 15:26:31.351272 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:31.374173 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 15:26:31.397353 augenrules[1630]: No rules Feb 13 15:26:31.397983 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 15:26:31.398301 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 15:26:31.399662 sudo[1607]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:31.401523 sshd[1606]: Connection closed by 10.0.0.1 port 51214 Feb 13 15:26:31.401325 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:31.411022 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:51214.service: Deactivated successfully. Feb 13 15:26:31.412659 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 15:26:31.413999 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Feb 13 15:26:31.421862 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:51216.service - OpenSSH per-connection server daemon (10.0.0.1:51216). Feb 13 15:26:31.422791 systemd-logind[1448]: Removed session 6. 
Feb 13 15:26:31.465332 sshd[1638]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY Feb 13 15:26:31.466697 sshd-session[1638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 15:26:31.471184 systemd-logind[1448]: New session 7 of user core. Feb 13 15:26:31.487678 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 15:26:31.539210 sudo[1641]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 15:26:31.539533 sudo[1641]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 15:26:31.560850 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 15:26:31.578333 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 15:26:31.578554 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 15:26:32.189650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:32.202760 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:32.222342 systemd[1]: Reloading requested from client PID 1690 ('systemctl') (unit session-7.scope)... Feb 13 15:26:32.222362 systemd[1]: Reloading... Feb 13 15:26:32.290522 zram_generator::config[1734]: No configuration found. Feb 13 15:26:32.448654 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 15:26:32.502482 systemd[1]: Reloading finished in 279 ms. Feb 13 15:26:32.547953 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 15:26:32.548027 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 15:26:32.548251 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:32.550167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 15:26:32.655719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 15:26:32.660427 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 15:26:32.700946 kubelet[1773]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 15:26:32.700946 kubelet[1773]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 15:26:32.700946 kubelet[1773]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 15:26:32.700946 kubelet[1773]: I0213 15:26:32.700911 1773 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 15:26:34.143072 kubelet[1773]: I0213 15:26:34.143034 1773 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Feb 13 15:26:34.143072 kubelet[1773]: I0213 15:26:34.143069 1773 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 15:26:34.143555 kubelet[1773]: I0213 15:26:34.143282 1773 server.go:919] "Client rotation is on, will bootstrap in background" Feb 13 15:26:34.178107 kubelet[1773]: I0213 15:26:34.178001 1773 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 15:26:34.184714 kubelet[1773]: I0213 15:26:34.184691 1773 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 15:26:34.185591 kubelet[1773]: I0213 15:26:34.185565 1773 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 15:26:34.185811 kubelet[1773]: I0213 15:26:34.185758 1773 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 15:26:34.185811 kubelet[1773]: I0213 15:26:34.185779 1773 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 15:26:34.185811 kubelet[1773]: I0213 15:26:34.185788 1773 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 15:26:34.185970 kubelet[1773]: I0213 15:26:34.185893 1773 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:34.187983 kubelet[1773]: I0213 15:26:34.187953 1773 kubelet.go:396] "Attempting to sync node with API server" Feb 13 15:26:34.187983 kubelet[1773]: I0213 15:26:34.187981 1773 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 15:26:34.188051 kubelet[1773]: I0213 15:26:34.188002 1773 kubelet.go:312] "Adding apiserver pod source" Feb 13 15:26:34.188051 kubelet[1773]: I0213 15:26:34.188017 1773 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 15:26:34.188431 kubelet[1773]: E0213 
15:26:34.188098 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:34.188431 kubelet[1773]: E0213 15:26:34.188150 1773 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:34.190829 kubelet[1773]: I0213 15:26:34.190806 1773 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 15:26:34.191478 kubelet[1773]: I0213 15:26:34.191350 1773 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 15:26:34.191700 kubelet[1773]: W0213 15:26:34.191525 1773 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 15:26:34.192540 kubelet[1773]: I0213 15:26:34.192519 1773 server.go:1256] "Started kubelet" Feb 13 15:26:34.192668 kubelet[1773]: I0213 15:26:34.192629 1773 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 15:26:34.194445 kubelet[1773]: I0213 15:26:34.193299 1773 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 15:26:34.194445 kubelet[1773]: I0213 15:26:34.193565 1773 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 15:26:34.194445 kubelet[1773]: I0213 15:26:34.194089 1773 server.go:461] "Adding debug handlers to kubelet server" Feb 13 15:26:34.196481 kubelet[1773]: I0213 15:26:34.196451 1773 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 15:26:34.197296 kubelet[1773]: I0213 15:26:34.196902 1773 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 15:26:34.197296 kubelet[1773]: I0213 15:26:34.197024 1773 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 13 15:26:34.197296 kubelet[1773]: I0213 15:26:34.197092 1773 reconciler_new.go:29] "Reconciler: start to sync state" Feb 13 15:26:34.199854 kubelet[1773]: W0213 15:26:34.199764 1773 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:26:34.199854 kubelet[1773]: E0213 15:26:34.199800 1773 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 15:26:34.200997 kubelet[1773]: E0213 15:26:34.200965 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.201708 kubelet[1773]: E0213 15:26:34.201684 1773 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 15:26:34.202264 kubelet[1773]: I0213 15:26:34.202239 1773 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 15:26:34.203394 kubelet[1773]: I0213 15:26:34.203357 1773 factory.go:221] Registration of the containerd container factory successfully Feb 13 15:26:34.203394 kubelet[1773]: I0213 15:26:34.203378 1773 factory.go:221] Registration of the systemd container factory successfully Feb 13 15:26:34.214594 kubelet[1773]: W0213 15:26:34.213445 1773 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:26:34.214594 kubelet[1773]: E0213 15:26:34.213476 1773 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 15:26:34.214594 kubelet[1773]: E0213 15:26:34.213565 1773 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.59\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 15:26:34.214594 kubelet[1773]: W0213 15:26:34.213600 1773 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.59" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:26:34.214594 kubelet[1773]: E0213 15:26:34.213609 1773 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.59" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 15:26:34.215187 kubelet[1773]: I0213 15:26:34.215166 1773 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 15:26:34.215187 kubelet[1773]: I0213 15:26:34.215188 1773 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 15:26:34.215269 kubelet[1773]: I0213 15:26:34.215207 1773 state_mem.go:36] "Initialized new in-memory state store" Feb 13 15:26:34.290972 kubelet[1773]: I0213 15:26:34.290936 1773 policy_none.go:49] "None policy: Start" Feb 13 15:26:34.291724 kubelet[1773]: I0213 15:26:34.291696 1773 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 15:26:34.291793 kubelet[1773]: I0213 15:26:34.291752 1773 state_mem.go:35] "Initializing new in-memory state store" Feb 13 15:26:34.292412 kubelet[1773]: E0213 15:26:34.292379 1773 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.59.1823ce0330b421a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.59,UID:10.0.0.59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.59,},FirstTimestamp:2025-02-13 15:26:34.192478626 +0000 UTC m=+1.528597745,LastTimestamp:2025-02-13 
15:26:34.192478626 +0000 UTC m=+1.528597745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.59,}" Feb 13 15:26:34.293944 kubelet[1773]: E0213 15:26:34.293798 1773 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.59.1823ce033140691b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.59,UID:10.0.0.59,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.59,},FirstTimestamp:2025-02-13 15:26:34.201671963 +0000 UTC m=+1.537791082,LastTimestamp:2025-02-13 15:26:34.201671963 +0000 UTC m=+1.537791082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.59,}" Feb 13 15:26:34.302545 kubelet[1773]: I0213 15:26:34.302410 1773 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.59" Feb 13 15:26:34.303209 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 15:26:34.307093 kubelet[1773]: I0213 15:26:34.307053 1773 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.59" Feb 13 15:26:34.315304 kubelet[1773]: E0213 15:26:34.315256 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.316474 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 15:26:34.319860 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 15:26:34.325773 kubelet[1773]: I0213 15:26:34.325732 1773 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 15:26:34.326943 kubelet[1773]: I0213 15:26:34.326921 1773 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 15:26:34.328367 kubelet[1773]: I0213 15:26:34.327039 1773 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 15:26:34.328367 kubelet[1773]: I0213 15:26:34.327063 1773 kubelet.go:2329] "Starting kubelet main sync loop" Feb 13 15:26:34.328367 kubelet[1773]: E0213 15:26:34.327182 1773 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 15:26:34.328927 kubelet[1773]: I0213 15:26:34.328731 1773 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 15:26:34.329035 kubelet[1773]: I0213 15:26:34.329012 1773 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 15:26:34.332731 kubelet[1773]: E0213 15:26:34.332688 1773 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.59\" not found" Feb 13 15:26:34.416110 kubelet[1773]: E0213 15:26:34.416003 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.516901 kubelet[1773]: E0213 15:26:34.516855 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.617413 kubelet[1773]: E0213 15:26:34.617381 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.718162 kubelet[1773]: E0213 15:26:34.718061 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.818581 kubelet[1773]: E0213 15:26:34.818540 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:34.919057 kubelet[1773]: E0213 15:26:34.919020 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.019617 kubelet[1773]: E0213 15:26:35.019532 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.120098 kubelet[1773]: E0213 15:26:35.120058 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.145216 kubelet[1773]: I0213 15:26:35.145168 1773 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 15:26:35.145580 kubelet[1773]: W0213 15:26:35.145351 1773 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Feb 13 15:26:35.185964 sudo[1641]: pam_unix(sudo:session): session closed for user root Feb 13 15:26:35.187533 sshd[1640]: Connection closed by 10.0.0.1 port 51216 Feb 13 15:26:35.187995 sshd-session[1638]: pam_unix(sshd:session): session closed for user core Feb 13 15:26:35.188300 kubelet[1773]: E0213 15:26:35.188184 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:35.191447 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:51216.service: Deactivated successfully. Feb 13 15:26:35.193181 systemd[1]: session-7.scope: Deactivated successfully. 
Feb 13 15:26:35.196154 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Feb 13 15:26:35.197250 systemd-logind[1448]: Removed session 7. Feb 13 15:26:35.220907 kubelet[1773]: E0213 15:26:35.220853 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.321815 kubelet[1773]: E0213 15:26:35.321788 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.422569 kubelet[1773]: E0213 15:26:35.422515 1773 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.59\" not found" Feb 13 15:26:35.524111 kubelet[1773]: I0213 15:26:35.524063 1773 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 15:26:35.524412 containerd[1470]: time="2025-02-13T15:26:35.524377927Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 15:26:35.524901 kubelet[1773]: I0213 15:26:35.524876 1773 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 15:26:36.189268 kubelet[1773]: I0213 15:26:36.189185 1773 apiserver.go:52] "Watching apiserver" Feb 13 15:26:36.189644 kubelet[1773]: E0213 15:26:36.189477 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:36.194643 kubelet[1773]: I0213 15:26:36.194594 1773 topology_manager.go:215] "Topology Admit Handler" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" podNamespace="kube-system" podName="cilium-mzpnd" Feb 13 15:26:36.194740 kubelet[1773]: I0213 15:26:36.194704 1773 topology_manager.go:215] "Topology Admit Handler" podUID="a7f2e887-f1f6-43a8-b27b-c5cc8045bf67" podNamespace="kube-system" podName="kube-proxy-wf8gt" Feb 13 15:26:36.197946 kubelet[1773]: I0213 15:26:36.197907 1773 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 13 15:26:36.204418 systemd[1]: Created slice kubepods-besteffort-poda7f2e887_f1f6_43a8_b27b_c5cc8045bf67.slice - libcontainer container kubepods-besteffort-poda7f2e887_f1f6_43a8_b27b_c5cc8045bf67.slice. 
Feb 13 15:26:36.210142 kubelet[1773]: I0213 15:26:36.210103 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7f2e887-f1f6-43a8-b27b-c5cc8045bf67-xtables-lock\") pod \"kube-proxy-wf8gt\" (UID: \"a7f2e887-f1f6-43a8-b27b-c5cc8045bf67\") " pod="kube-system/kube-proxy-wf8gt" Feb 13 15:26:36.210142 kubelet[1773]: I0213 15:26:36.210149 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-etc-cni-netd\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210287 kubelet[1773]: I0213 15:26:36.210174 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstxb\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-kube-api-access-wstxb\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210287 kubelet[1773]: I0213 15:26:36.210193 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-net\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210287 kubelet[1773]: I0213 15:26:36.210215 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-hubble-tls\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210287 kubelet[1773]: I0213 15:26:36.210236 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48t8\" (UniqueName: \"kubernetes.io/projected/a7f2e887-f1f6-43a8-b27b-c5cc8045bf67-kube-api-access-d48t8\") pod \"kube-proxy-wf8gt\" (UID: \"a7f2e887-f1f6-43a8-b27b-c5cc8045bf67\") " pod="kube-system/kube-proxy-wf8gt" Feb 13 15:26:36.210287 kubelet[1773]: I0213 15:26:36.210259 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-bpf-maps\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210278 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-xtables-lock\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210299 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-kernel\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210318 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7f2e887-f1f6-43a8-b27b-c5cc8045bf67-kube-proxy\") pod \"kube-proxy-wf8gt\" (UID: \"a7f2e887-f1f6-43a8-b27b-c5cc8045bf67\") " pod="kube-system/kube-proxy-wf8gt" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210336 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-run\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210357 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-config-path\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210390 kubelet[1773]: I0213 15:26:36.210376 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cni-path\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210647 kubelet[1773]: I0213 15:26:36.210394 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-lib-modules\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210647 kubelet[1773]: I0213 15:26:36.210433 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18ee5808-fa7d-4db2-acc6-381cd203a724-clustermesh-secrets\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210647 kubelet[1773]: I0213 15:26:36.210457 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7f2e887-f1f6-43a8-b27b-c5cc8045bf67-lib-modules\") pod \"kube-proxy-wf8gt\" (UID: \"a7f2e887-f1f6-43a8-b27b-c5cc8045bf67\") " pod="kube-system/kube-proxy-wf8gt" Feb 13 15:26:36.210647 kubelet[1773]: I0213 15:26:36.210481 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-hostproc\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.210647 kubelet[1773]: I0213 15:26:36.210518 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-cgroup\") pod \"cilium-mzpnd\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " pod="kube-system/cilium-mzpnd" Feb 13 15:26:36.229168 systemd[1]: Created slice kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice - libcontainer container kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice. 
Feb 13 15:26:36.534069 kubelet[1773]: E0213 15:26:36.533795 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:36.535586 containerd[1470]: time="2025-02-13T15:26:36.535539923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wf8gt,Uid:a7f2e887-f1f6-43a8-b27b-c5cc8045bf67,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:36.541568 kubelet[1773]: E0213 15:26:36.540980 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:36.541762 containerd[1470]: time="2025-02-13T15:26:36.541385930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzpnd,Uid:18ee5808-fa7d-4db2-acc6-381cd203a724,Namespace:kube-system,Attempt:0,}" Feb 13 15:26:37.114343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1141159769.mount: Deactivated successfully. Feb 13 15:26:37.122277 containerd[1470]: time="2025-02-13T15:26:37.121838746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:37.123061 containerd[1470]: time="2025-02-13T15:26:37.122905119Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Feb 13 15:26:37.125185 containerd[1470]: time="2025-02-13T15:26:37.125111009Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:37.129074 containerd[1470]: time="2025-02-13T15:26:37.128986835Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:37.129748 containerd[1470]: time="2025-02-13T15:26:37.129690752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 15:26:37.132560 containerd[1470]: time="2025-02-13T15:26:37.131511278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 15:26:37.133299 containerd[1470]: time="2025-02-13T15:26:37.133267943Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.647485ms" Feb 13 15:26:37.137868 containerd[1470]: time="2025-02-13T15:26:37.137822986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 596.367074ms" Feb 13 15:26:37.190628 kubelet[1773]: E0213 15:26:37.190587 1773 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:37.252635 containerd[1470]: time="2025-02-13T15:26:37.252433316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:37.252635 containerd[1470]: time="2025-02-13T15:26:37.252545184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:37.252977 containerd[1470]: time="2025-02-13T15:26:37.252883418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:37.253534 containerd[1470]: time="2025-02-13T15:26:37.253443516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:37.253577 containerd[1470]: time="2025-02-13T15:26:37.253526740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:37.253577 containerd[1470]: time="2025-02-13T15:26:37.253541480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:37.253737 containerd[1470]: time="2025-02-13T15:26:37.253608768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:37.253799 containerd[1470]: time="2025-02-13T15:26:37.253681196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:37.346750 systemd[1]: Started cri-containerd-2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5.scope - libcontainer container 2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5. Feb 13 15:26:37.348424 systemd[1]: Started cri-containerd-4ef6613f939d687c8b0bc9ecd91b0eb88e32cdb45f45b9f8bc3dc71bf62c2245.scope - libcontainer container 4ef6613f939d687c8b0bc9ecd91b0eb88e32cdb45f45b9f8bc3dc71bf62c2245. 
Feb 13 15:26:37.368591 containerd[1470]: time="2025-02-13T15:26:37.368420924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mzpnd,Uid:18ee5808-fa7d-4db2-acc6-381cd203a724,Namespace:kube-system,Attempt:0,} returns sandbox id \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\"" Feb 13 15:26:37.369873 kubelet[1773]: E0213 15:26:37.369844 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:37.372665 containerd[1470]: time="2025-02-13T15:26:37.372624944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wf8gt,Uid:a7f2e887-f1f6-43a8-b27b-c5cc8045bf67,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ef6613f939d687c8b0bc9ecd91b0eb88e32cdb45f45b9f8bc3dc71bf62c2245\"" Feb 13 15:26:37.373150 containerd[1470]: time="2025-02-13T15:26:37.372945529Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 13 15:26:37.373442 kubelet[1773]: E0213 15:26:37.373418 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:38.191406 kubelet[1773]: E0213 15:26:38.191359 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:39.191760 kubelet[1773]: E0213 15:26:39.191687 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:40.192591 kubelet[1773]: E0213 15:26:40.192557 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:40.534433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983064293.mount: Deactivated successfully. 
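Both RunPodSandbox requests above go through containerd in the k8s.io namespace: the pause:3.8 sandbox image is pulled for each pod, and the cilium image pull is then requested for the cilium-mzpnd pod. A minimal sketch of the equivalent pull with containerd's Go client; the socket path and image reference are taken from the log, the rest is illustrative:

```go
// pull_pause.go — a sketch of the same image pull the CRI plugin logs above,
// done directly with containerd's Go client. The "k8s.io" namespace is the
// one the CRI plugin uses for its images and containers.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```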
Feb 13 15:26:41.193071 kubelet[1773]: E0213 15:26:41.193036 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:41.785678 containerd[1470]: time="2025-02-13T15:26:41.785264997Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:41.786176 containerd[1470]: time="2025-02-13T15:26:41.786129862Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Feb 13 15:26:41.787435 containerd[1470]: time="2025-02-13T15:26:41.787403013Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:41.789037 containerd[1470]: time="2025-02-13T15:26:41.788907603Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.415920788s" Feb 13 15:26:41.789037 containerd[1470]: time="2025-02-13T15:26:41.788944767Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 13 15:26:41.790008 containerd[1470]: time="2025-02-13T15:26:41.789933206Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\"" Feb 13 15:26:41.791314 containerd[1470]: time="2025-02-13T15:26:41.791192800Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:26:41.805387 containerd[1470]: time="2025-02-13T15:26:41.805331607Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\"" Feb 13 15:26:41.806227 containerd[1470]: time="2025-02-13T15:26:41.806131792Z" level=info msg="StartContainer for \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\"" Feb 13 15:26:41.834691 systemd[1]: Started cri-containerd-e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf.scope - libcontainer container e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf. Feb 13 15:26:41.854730 containerd[1470]: time="2025-02-13T15:26:41.854688833Z" level=info msg="StartContainer for \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\" returns successfully" Feb 13 15:26:41.904343 systemd[1]: cri-containerd-e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf.scope: Deactivated successfully. 
Feb 13 15:26:42.035901 containerd[1470]: time="2025-02-13T15:26:42.035612406Z" level=info msg="shim disconnected" id=e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf namespace=k8s.io Feb 13 15:26:42.035901 containerd[1470]: time="2025-02-13T15:26:42.035717539Z" level=warning msg="cleaning up after shim disconnected" id=e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf namespace=k8s.io Feb 13 15:26:42.035901 containerd[1470]: time="2025-02-13T15:26:42.035726632Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:42.193812 kubelet[1773]: E0213 15:26:42.193759 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:42.347740 kubelet[1773]: E0213 15:26:42.347591 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:42.349347 containerd[1470]: time="2025-02-13T15:26:42.349300803Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:26:42.539924 containerd[1470]: time="2025-02-13T15:26:42.539819472Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\"" Feb 13 15:26:42.540715 containerd[1470]: time="2025-02-13T15:26:42.540685901Z" level=info msg="StartContainer for \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\"" Feb 13 15:26:42.567681 systemd[1]: Started cri-containerd-5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01.scope - libcontainer container 5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01. Feb 13 15:26:42.590722 containerd[1470]: time="2025-02-13T15:26:42.590622996Z" level=info msg="StartContainer for \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\" returns successfully" Feb 13 15:26:42.605004 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 15:26:42.605217 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:26:42.605282 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:26:42.610875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 15:26:42.611057 systemd[1]: cri-containerd-5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01.scope: Deactivated successfully. Feb 13 15:26:42.628818 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 15:26:42.630074 containerd[1470]: time="2025-02-13T15:26:42.629781225Z" level=info msg="shim disconnected" id=5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01 namespace=k8s.io Feb 13 15:26:42.630074 containerd[1470]: time="2025-02-13T15:26:42.629856844Z" level=warning msg="cleaning up after shim disconnected" id=5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01 namespace=k8s.io Feb 13 15:26:42.630074 containerd[1470]: time="2025-02-13T15:26:42.629865778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:42.801898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf-rootfs.mount: Deactivated successfully. 
Feb 13 15:26:43.194242 kubelet[1773]: E0213 15:26:43.194206 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:43.351867 kubelet[1773]: E0213 15:26:43.351830 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:43.353661 containerd[1470]: time="2025-02-13T15:26:43.353624166Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:26:43.393363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2349411908.mount: Deactivated successfully. Feb 13 15:26:43.431392 containerd[1470]: time="2025-02-13T15:26:43.431339930Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\"" Feb 13 15:26:43.432198 containerd[1470]: time="2025-02-13T15:26:43.432105234Z" level=info msg="StartContainer for \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\"" Feb 13 15:26:43.467711 systemd[1]: Started cri-containerd-51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9.scope - libcontainer container 51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9. Feb 13 15:26:43.502430 containerd[1470]: time="2025-02-13T15:26:43.502380614Z" level=info msg="StartContainer for \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\" returns successfully" Feb 13 15:26:43.514260 systemd[1]: cri-containerd-51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9.scope: Deactivated successfully. 
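The entries above, and the ones that follow for clean-cilium-state and cilium-agent, repeat one containerd pattern per step: "CreateContainer within sandbox ... returns container id", "StartContainer ... returns successfully", then a "shim disconnected" cleanup once the step exits. A minimal sketch of reconstructing that sequence from a saved copy of this journal read on stdin; the regexes are assumptions about the message format shown here, not a containerd API, and the script itself is not part of the log:

    # Map each container name to its id and note whether its shim has already
    # disconnected, using only the containerd entries shown in this journal.
    import re
    import sys

    name_by_id = {}   # 64-char container id -> ContainerMetadata name
    exited = set()    # ids whose "shim disconnected" entry has been seen

    for line in sys.stdin:
        m = re.search(r'ContainerMetadata\{Name:([^,]+),Attempt:\d+,\} '
                      r'returns container id \\?"([0-9a-f]{64})', line)
        if m:
            name_by_id[m.group(2)] = m.group(1)
            continue
        m = re.search(r'msg="shim disconnected" id=([0-9a-f]{64})', line)
        if m:
            exited.add(m.group(1))

    for cid, name in name_by_id.items():
        print(f"{name:<24} {cid[:12]}  {'exited' if cid in exited else 'running'}")

Run over this section it would list apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state, kube-proxy, cilium-agent and the later nginx, nfs-server-provisioner and test containers, each with its id prefix and whether its shim has already gone away.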
Feb 13 15:26:43.609222 containerd[1470]: time="2025-02-13T15:26:43.609077648Z" level=info msg="shim disconnected" id=51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9 namespace=k8s.io Feb 13 15:26:43.609222 containerd[1470]: time="2025-02-13T15:26:43.609129665Z" level=warning msg="cleaning up after shim disconnected" id=51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9 namespace=k8s.io Feb 13 15:26:43.609222 containerd[1470]: time="2025-02-13T15:26:43.609138122Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:43.619327 containerd[1470]: time="2025-02-13T15:26:43.619210418Z" level=warning msg="cleanup warnings time=\"2025-02-13T15:26:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 15:26:43.690711 containerd[1470]: time="2025-02-13T15:26:43.690664370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.14\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:43.691351 containerd[1470]: time="2025-02-13T15:26:43.691267159Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.14: active requests=0, bytes read=25273377" Feb 13 15:26:43.692166 containerd[1470]: time="2025-02-13T15:26:43.692135222Z" level=info msg="ImageCreate event name:\"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:43.694527 containerd[1470]: time="2025-02-13T15:26:43.694463645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:43.695566 containerd[1470]: time="2025-02-13T15:26:43.695401317Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.14\" with image id \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\", repo tag \"registry.k8s.io/kube-proxy:v1.29.14\", repo digest \"registry.k8s.io/kube-proxy@sha256:197988595a902751e4e570a5e4d74182f12d83c1d175c1e79aa020f358f6535b\", size \"25272394\" in 1.905436926s" Feb 13 15:26:43.695566 containerd[1470]: time="2025-02-13T15:26:43.695437977Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.14\" returns image reference \"sha256:8acaac6288aef2fbe5821a7539f95a6043513e648e6ffaf6a545a93fa77fe8c8\"" Feb 13 15:26:43.697307 containerd[1470]: time="2025-02-13T15:26:43.697181562Z" level=info msg="CreateContainer within sandbox \"4ef6613f939d687c8b0bc9ecd91b0eb88e32cdb45f45b9f8bc3dc71bf62c2245\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 15:26:43.707881 containerd[1470]: time="2025-02-13T15:26:43.707835704Z" level=info msg="CreateContainer within sandbox \"4ef6613f939d687c8b0bc9ecd91b0eb88e32cdb45f45b9f8bc3dc71bf62c2245\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"894f032839581a7bf21e39598a64c239413da3e4117236831b97301407dce83c\"" Feb 13 15:26:43.709523 containerd[1470]: time="2025-02-13T15:26:43.708209600Z" level=info msg="StartContainer for \"894f032839581a7bf21e39598a64c239413da3e4117236831b97301407dce83c\"" Feb 13 15:26:43.737682 systemd[1]: Started cri-containerd-894f032839581a7bf21e39598a64c239413da3e4117236831b97301407dce83c.scope - libcontainer container 894f032839581a7bf21e39598a64c239413da3e4117236831b97301407dce83c. 
Feb 13 15:26:43.762418 containerd[1470]: time="2025-02-13T15:26:43.762370791Z" level=info msg="StartContainer for \"894f032839581a7bf21e39598a64c239413da3e4117236831b97301407dce83c\" returns successfully" Feb 13 15:26:44.194347 kubelet[1773]: E0213 15:26:44.194293 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:44.354969 kubelet[1773]: E0213 15:26:44.354926 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:44.356786 kubelet[1773]: E0213 15:26:44.356714 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:44.357068 containerd[1470]: time="2025-02-13T15:26:44.357035515Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:26:44.372923 containerd[1470]: time="2025-02-13T15:26:44.372870107Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\"" Feb 13 15:26:44.373540 containerd[1470]: time="2025-02-13T15:26:44.373502563Z" level=info msg="StartContainer for \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\"" Feb 13 15:26:44.397219 kubelet[1773]: I0213 15:26:44.397173 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wf8gt" podStartSLOduration=4.075389383 podStartE2EDuration="10.39713035s" podCreationTimestamp="2025-02-13 15:26:34 +0000 UTC" firstStartedPulling="2025-02-13 15:26:37.373946088 +0000 UTC m=+4.710065207" lastFinishedPulling="2025-02-13 15:26:43.695687055 +0000 UTC m=+11.031806174" observedRunningTime="2025-02-13 15:26:44.397002438 +0000 UTC m=+11.733121557" watchObservedRunningTime="2025-02-13 15:26:44.39713035 +0000 UTC m=+11.733249469" Feb 13 15:26:44.400701 systemd[1]: Started cri-containerd-0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86.scope - libcontainer container 0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86. Feb 13 15:26:44.420131 systemd[1]: cri-containerd-0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86.scope: Deactivated successfully. 
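The kubelet pod_startup_latency_tracker entry above for kube-proxy-wf8gt reports podStartSLOduration=4.075389383 alongside podStartE2EDuration="10.39713035s" and the image-pull window, and in this log the SLO figure is exactly the E2E figure minus the time between firstStartedPulling and lastFinishedPulling, consistent with the SLO measurement excluding image pulls. A minimal sketch that checks this from the quoted fields; only the field values are copied from the log line, the regexes and the truncation to microseconds are assumptions:

    # Verify podStartSLOduration == podStartE2EDuration - image-pull window
    # for the kube-proxy-wf8gt "Observed pod startup duration" entry above.
    import re
    from datetime import datetime

    entry = ('podStartSLOduration=4.075389383 podStartE2EDuration="10.39713035s" '
             'firstStartedPulling="2025-02-13 15:26:37.373946088 +0000 UTC m=+4.710065207" '
             'lastFinishedPulling="2025-02-13 15:26:43.695687055 +0000 UTC m=+11.031806174"')

    def wallclock(field: str) -> datetime:
        # Keep the date, time and first six fractional digits of the quoted value.
        m = re.search(rf'{field}="(\d{{4}}-\d{{2}}-\d{{2}} \d{{2}}:\d{{2}}:\d{{2}}\.\d{{6}})', entry)
        return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")

    pull = (wallclock("lastFinishedPulling") - wallclock("firstStartedPulling")).total_seconds()
    e2e = float(re.search(r'podStartE2EDuration="([0-9.]+)s"', entry).group(1))
    slo = float(re.search(r'podStartSLOduration=([0-9.]+)', entry).group(1))
    print(f"pull={pull:.3f}s  e2e-pull={e2e - pull:.3f}s  slo={slo:.3f}s")
    # -> pull=6.322s  e2e-pull=4.075s  slo=4.075s

The later startup-duration entries in this log, for cilium-mzpnd, the nginx deployment, nfs-server-provisioner-0 and test-pod-1, follow the same relation.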
Feb 13 15:26:44.421848 containerd[1470]: time="2025-02-13T15:26:44.421808480Z" level=info msg="StartContainer for \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\" returns successfully" Feb 13 15:26:44.482488 containerd[1470]: time="2025-02-13T15:26:44.482355493Z" level=info msg="shim disconnected" id=0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86 namespace=k8s.io Feb 13 15:26:44.482488 containerd[1470]: time="2025-02-13T15:26:44.482413225Z" level=warning msg="cleaning up after shim disconnected" id=0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86 namespace=k8s.io Feb 13 15:26:44.482488 containerd[1470]: time="2025-02-13T15:26:44.482421443Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:26:44.801617 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86-rootfs.mount: Deactivated successfully. Feb 13 15:26:45.194776 kubelet[1773]: E0213 15:26:45.194726 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:45.360189 kubelet[1773]: E0213 15:26:45.359973 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:45.360189 kubelet[1773]: E0213 15:26:45.359980 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:45.362116 containerd[1470]: time="2025-02-13T15:26:45.362063666Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:26:45.376605 containerd[1470]: time="2025-02-13T15:26:45.376543744Z" level=info msg="CreateContainer within sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\"" Feb 13 15:26:45.377095 containerd[1470]: time="2025-02-13T15:26:45.377051163Z" level=info msg="StartContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\"" Feb 13 15:26:45.405700 systemd[1]: Started cri-containerd-f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e.scope - libcontainer container f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e. 
Feb 13 15:26:45.428802 containerd[1470]: time="2025-02-13T15:26:45.428759461Z" level=info msg="StartContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" returns successfully" Feb 13 15:26:45.532300 kubelet[1773]: I0213 15:26:45.532194 1773 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 15:26:45.956544 kernel: Initializing XFRM netlink socket Feb 13 15:26:46.195625 kubelet[1773]: E0213 15:26:46.195567 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:46.280378 kubelet[1773]: I0213 15:26:46.280270 1773 topology_manager.go:215] "Topology Admit Handler" podUID="1f24f628-8438-4ed2-9cce-bbbe0e7d4cdb" podNamespace="default" podName="nginx-deployment-6d5f899847-d27rl" Feb 13 15:26:46.285388 systemd[1]: Created slice kubepods-besteffort-pod1f24f628_8438_4ed2_9cce_bbbe0e7d4cdb.slice - libcontainer container kubepods-besteffort-pod1f24f628_8438_4ed2_9cce_bbbe0e7d4cdb.slice. Feb 13 15:26:46.364543 kubelet[1773]: E0213 15:26:46.364510 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:46.372322 kubelet[1773]: I0213 15:26:46.372278 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdl6m\" (UniqueName: \"kubernetes.io/projected/1f24f628-8438-4ed2-9cce-bbbe0e7d4cdb-kube-api-access-zdl6m\") pod \"nginx-deployment-6d5f899847-d27rl\" (UID: \"1f24f628-8438-4ed2-9cce-bbbe0e7d4cdb\") " pod="default/nginx-deployment-6d5f899847-d27rl" Feb 13 15:26:46.587862 containerd[1470]: time="2025-02-13T15:26:46.587817996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-d27rl,Uid:1f24f628-8438-4ed2-9cce-bbbe0e7d4cdb,Namespace:default,Attempt:0,}" Feb 13 15:26:47.195983 kubelet[1773]: E0213 15:26:47.195922 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:47.366316 kubelet[1773]: E0213 15:26:47.366274 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:47.590417 systemd-networkd[1390]: cilium_host: Link UP Feb 13 15:26:47.591326 systemd-networkd[1390]: cilium_net: Link UP Feb 13 15:26:47.592160 systemd-networkd[1390]: cilium_net: Gained carrier Feb 13 15:26:47.592330 systemd-networkd[1390]: cilium_host: Gained carrier Feb 13 15:26:47.592430 systemd-networkd[1390]: cilium_net: Gained IPv6LL Feb 13 15:26:47.592574 systemd-networkd[1390]: cilium_host: Gained IPv6LL Feb 13 15:26:47.674473 systemd-networkd[1390]: cilium_vxlan: Link UP Feb 13 15:26:47.674479 systemd-networkd[1390]: cilium_vxlan: Gained carrier Feb 13 15:26:47.968531 kernel: NET: Registered PF_ALG protocol family Feb 13 15:26:48.196139 kubelet[1773]: E0213 15:26:48.196077 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:48.368214 kubelet[1773]: E0213 15:26:48.368179 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:48.526391 systemd-networkd[1390]: lxc_health: Link UP Feb 13 15:26:48.536785 systemd-networkd[1390]: lxc_health: Gained 
carrier Feb 13 15:26:48.561281 kubelet[1773]: I0213 15:26:48.561229 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mzpnd" podStartSLOduration=10.1431306 podStartE2EDuration="14.561188067s" podCreationTimestamp="2025-02-13 15:26:34 +0000 UTC" firstStartedPulling="2025-02-13 15:26:37.371301808 +0000 UTC m=+4.707420927" lastFinishedPulling="2025-02-13 15:26:41.789359195 +0000 UTC m=+9.125478394" observedRunningTime="2025-02-13 15:26:46.383166905 +0000 UTC m=+13.719286024" watchObservedRunningTime="2025-02-13 15:26:48.561188067 +0000 UTC m=+15.897307186" Feb 13 15:26:48.643857 systemd-networkd[1390]: lxcdec7db376cfd: Link UP Feb 13 15:26:48.652528 kernel: eth0: renamed from tmp786fc Feb 13 15:26:48.664551 systemd-networkd[1390]: lxcdec7db376cfd: Gained carrier Feb 13 15:26:49.196302 kubelet[1773]: E0213 15:26:49.196248 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:49.370348 kubelet[1773]: E0213 15:26:49.370164 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:49.738628 systemd-networkd[1390]: cilium_vxlan: Gained IPv6LL Feb 13 15:26:49.866932 systemd-networkd[1390]: lxc_health: Gained IPv6LL Feb 13 15:26:49.930815 systemd-networkd[1390]: lxcdec7db376cfd: Gained IPv6LL Feb 13 15:26:50.196742 kubelet[1773]: E0213 15:26:50.196699 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:50.371797 kubelet[1773]: E0213 15:26:50.371536 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:26:51.197192 kubelet[1773]: E0213 15:26:51.197140 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:52.184640 containerd[1470]: time="2025-02-13T15:26:52.184497010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:52.184640 containerd[1470]: time="2025-02-13T15:26:52.184625134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:52.184976 containerd[1470]: time="2025-02-13T15:26:52.184643426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:52.185178 containerd[1470]: time="2025-02-13T15:26:52.185131998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:52.196187 systemd[1]: run-containerd-runc-k8s.io-786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73-runc.FfcMvx.mount: Deactivated successfully. Feb 13 15:26:52.197329 kubelet[1773]: E0213 15:26:52.197288 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:52.209731 systemd[1]: Started cri-containerd-786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73.scope - libcontainer container 786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73. 
Feb 13 15:26:52.219133 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:26:52.236283 containerd[1470]: time="2025-02-13T15:26:52.236238234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-d27rl,Uid:1f24f628-8438-4ed2-9cce-bbbe0e7d4cdb,Namespace:default,Attempt:0,} returns sandbox id \"786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73\"" Feb 13 15:26:52.237605 containerd[1470]: time="2025-02-13T15:26:52.237577425Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:26:53.197860 kubelet[1773]: E0213 15:26:53.197812 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:54.189003 kubelet[1773]: E0213 15:26:54.188960 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:54.198222 kubelet[1773]: E0213 15:26:54.198179 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:54.892044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3052522107.mount: Deactivated successfully. Feb 13 15:26:55.198722 kubelet[1773]: E0213 15:26:55.198558 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:55.672442 containerd[1470]: time="2025-02-13T15:26:55.672389097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.672883 containerd[1470]: time="2025-02-13T15:26:55.672841407Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086" Feb 13 15:26:55.673584 containerd[1470]: time="2025-02-13T15:26:55.673554029Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.676073 containerd[1470]: time="2025-02-13T15:26:55.676042532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:26:55.677929 containerd[1470]: time="2025-02-13T15:26:55.677899511Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 3.440291331s" Feb 13 15:26:55.677977 containerd[1470]: time="2025-02-13T15:26:55.677934188Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:26:55.679424 containerd[1470]: time="2025-02-13T15:26:55.679382003Z" level=info msg="CreateContainer within sandbox \"786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 15:26:55.688190 containerd[1470]: time="2025-02-13T15:26:55.688135648Z" level=info msg="CreateContainer within sandbox \"786fc7a77c7eee6c83ec0a17b16e8f9081edbca4d38e72495b70f494a6b08e73\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns 
container id \"0fdbe740d1220487082442b686daebcdb8321abcb86c30640f631ec199755761\"" Feb 13 15:26:55.688689 containerd[1470]: time="2025-02-13T15:26:55.688666939Z" level=info msg="StartContainer for \"0fdbe740d1220487082442b686daebcdb8321abcb86c30640f631ec199755761\"" Feb 13 15:26:55.714691 systemd[1]: Started cri-containerd-0fdbe740d1220487082442b686daebcdb8321abcb86c30640f631ec199755761.scope - libcontainer container 0fdbe740d1220487082442b686daebcdb8321abcb86c30640f631ec199755761. Feb 13 15:26:55.737301 containerd[1470]: time="2025-02-13T15:26:55.736533681Z" level=info msg="StartContainer for \"0fdbe740d1220487082442b686daebcdb8321abcb86c30640f631ec199755761\" returns successfully" Feb 13 15:26:56.199323 kubelet[1773]: E0213 15:26:56.199277 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:56.395361 kubelet[1773]: I0213 15:26:56.395317 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-d27rl" podStartSLOduration=6.954464193 podStartE2EDuration="10.39527951s" podCreationTimestamp="2025-02-13 15:26:46 +0000 UTC" firstStartedPulling="2025-02-13 15:26:52.237325091 +0000 UTC m=+19.573444250" lastFinishedPulling="2025-02-13 15:26:55.678140448 +0000 UTC m=+23.014259567" observedRunningTime="2025-02-13 15:26:56.394812941 +0000 UTC m=+23.730932060" watchObservedRunningTime="2025-02-13 15:26:56.39527951 +0000 UTC m=+23.731398629" Feb 13 15:26:57.200172 kubelet[1773]: E0213 15:26:57.200123 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:58.200921 kubelet[1773]: E0213 15:26:58.200867 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:58.716871 kubelet[1773]: I0213 15:26:58.716836 1773 topology_manager.go:215] "Topology Admit Handler" podUID="be4fe8cf-0b55-4e2d-a212-c886d23af315" podNamespace="default" podName="nfs-server-provisioner-0" Feb 13 15:26:58.722084 systemd[1]: Created slice kubepods-besteffort-podbe4fe8cf_0b55_4e2d_a212_c886d23af315.slice - libcontainer container kubepods-besteffort-podbe4fe8cf_0b55_4e2d_a212_c886d23af315.slice. 
Feb 13 15:26:58.737162 kubelet[1773]: I0213 15:26:58.737121 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/be4fe8cf-0b55-4e2d-a212-c886d23af315-data\") pod \"nfs-server-provisioner-0\" (UID: \"be4fe8cf-0b55-4e2d-a212-c886d23af315\") " pod="default/nfs-server-provisioner-0" Feb 13 15:26:58.737305 kubelet[1773]: I0213 15:26:58.737179 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px9h\" (UniqueName: \"kubernetes.io/projected/be4fe8cf-0b55-4e2d-a212-c886d23af315-kube-api-access-4px9h\") pod \"nfs-server-provisioner-0\" (UID: \"be4fe8cf-0b55-4e2d-a212-c886d23af315\") " pod="default/nfs-server-provisioner-0" Feb 13 15:26:59.026024 containerd[1470]: time="2025-02-13T15:26:59.025900725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be4fe8cf-0b55-4e2d-a212-c886d23af315,Namespace:default,Attempt:0,}" Feb 13 15:26:59.053929 systemd-networkd[1390]: lxca1396cc932ae: Link UP Feb 13 15:26:59.065586 kernel: eth0: renamed from tmp6e02f Feb 13 15:26:59.073550 systemd-networkd[1390]: lxca1396cc932ae: Gained carrier Feb 13 15:26:59.201320 kubelet[1773]: E0213 15:26:59.201269 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:26:59.275970 containerd[1470]: time="2025-02-13T15:26:59.275856766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:26:59.275970 containerd[1470]: time="2025-02-13T15:26:59.275947118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:26:59.276851 containerd[1470]: time="2025-02-13T15:26:59.276580862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:59.277103 containerd[1470]: time="2025-02-13T15:26:59.276710895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:26:59.302737 systemd[1]: Started cri-containerd-6e02f7d2bd8499f8e9877a62fb099a0a3ea1077397f00a0555bb9b66c5adca77.scope - libcontainer container 6e02f7d2bd8499f8e9877a62fb099a0a3ea1077397f00a0555bb9b66c5adca77. Feb 13 15:26:59.313093 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:26:59.328887 containerd[1470]: time="2025-02-13T15:26:59.328824845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:be4fe8cf-0b55-4e2d-a212-c886d23af315,Namespace:default,Attempt:0,} returns sandbox id \"6e02f7d2bd8499f8e9877a62fb099a0a3ea1077397f00a0555bb9b66c5adca77\"" Feb 13 15:26:59.330488 containerd[1470]: time="2025-02-13T15:26:59.330400631Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 15:27:00.202016 kubelet[1773]: E0213 15:27:00.201963 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:00.362794 systemd-networkd[1390]: lxca1396cc932ae: Gained IPv6LL Feb 13 15:27:01.095323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882849051.mount: Deactivated successfully. 
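Each sandbox start in this log pairs a kernel "eth0: renamed from tmpXXXXX" entry with a containerd "returns sandbox id" entry whose id begins with the same characters: tmp786fc with 786fc7a77c7e... and tmp6e02f with 6e02f7d2bd84... above, and tmp15bfd with 15bfd2439813... further down. A minimal sketch that matches the two kinds of entries by that shared prefix when reading the journal from stdin; the prefix rule is an observation about this particular log, not a documented containerd or CNI contract:

    # Pair "eth0: renamed from tmpXXXXX" kernel entries with the containerd
    # "returns sandbox id" entries whose 64-character id starts with the same
    # hex prefix. Reads journal text on stdin; the heuristic mirrors what this
    # log happens to show and nothing more.
    import re
    import sys

    pending = []   # tmp-name prefixes waiting for a matching sandbox id

    for line in sys.stdin:
        m = re.search(r"renamed from tmp([0-9a-f]+)", line)
        if m:
            pending.append(m.group(1))
            continue
        m = re.search(r'returns sandbox id \\?"([0-9a-f]{64})', line)
        if m:
            sandbox = m.group(1)
            for prefix in pending:
                if sandbox.startswith(prefix):
                    print(f"tmp{prefix} -> {sandbox[:12]}...")
                    pending.remove(prefix)
                    break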
Feb 13 15:27:01.202678 kubelet[1773]: E0213 15:27:01.202634 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:02.202926 kubelet[1773]: E0213 15:27:02.202878 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:02.416943 containerd[1470]: time="2025-02-13T15:27:02.416886437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:02.417878 containerd[1470]: time="2025-02-13T15:27:02.417434997Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Feb 13 15:27:02.425271 containerd[1470]: time="2025-02-13T15:27:02.425227908Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:02.428084 containerd[1470]: time="2025-02-13T15:27:02.428034497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:02.429195 containerd[1470]: time="2025-02-13T15:27:02.429168308Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.098739463s" Feb 13 15:27:02.429249 containerd[1470]: time="2025-02-13T15:27:02.429201841Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Feb 13 15:27:02.431241 containerd[1470]: time="2025-02-13T15:27:02.431122381Z" level=info msg="CreateContainer within sandbox \"6e02f7d2bd8499f8e9877a62fb099a0a3ea1077397f00a0555bb9b66c5adca77\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 15:27:02.445485 containerd[1470]: time="2025-02-13T15:27:02.445443296Z" level=info msg="CreateContainer within sandbox \"6e02f7d2bd8499f8e9877a62fb099a0a3ea1077397f00a0555bb9b66c5adca77\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"22f3edade5ab5ed98151b880442b75261c74a029a33e0b84ce3682b7f8fb2750\"" Feb 13 15:27:02.446195 containerd[1470]: time="2025-02-13T15:27:02.446168794Z" level=info msg="StartContainer for \"22f3edade5ab5ed98151b880442b75261c74a029a33e0b84ce3682b7f8fb2750\"" Feb 13 15:27:02.532742 systemd[1]: Started cri-containerd-22f3edade5ab5ed98151b880442b75261c74a029a33e0b84ce3682b7f8fb2750.scope - libcontainer container 22f3edade5ab5ed98151b880442b75261c74a029a33e0b84ce3682b7f8fb2750. 
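The pull summary above reports both the repo-digest size and the wall-clock duration ("size \"87371201\" in 3.098739463s"), so the effective pull rate for the nfs-provisioner image can be read straight off the entry. A minimal sketch of that arithmetic using only the two numbers quoted in the log; the variable names and rounding are illustrative:

    # Effective throughput of the registry.k8s.io/sig-storage/nfs-provisioner
    # pull, using the size and duration reported in the entry above.
    size_bytes = 87_371_201        # repo-digest size from the "Pulled image" entry
    pull_seconds = 3.098739463     # wall-clock pull duration from the same entry

    mib_per_s = size_bytes / pull_seconds / (1024 * 1024)
    print(f"~{mib_per_s:.1f} MiB/s")   # roughly 26.9 MiB/s for this pull

The same calculation on the kube-proxy pull earlier in this log (25272394 bytes in 1.905436926s) gives roughly 12.6 MiB/s, and on the first nginx pull (69692964 bytes in 3.440291331s) roughly 19.3 MiB/s.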
Feb 13 15:27:02.573727 containerd[1470]: time="2025-02-13T15:27:02.573661750Z" level=info msg="StartContainer for \"22f3edade5ab5ed98151b880442b75261c74a029a33e0b84ce3682b7f8fb2750\" returns successfully" Feb 13 15:27:03.203347 kubelet[1773]: E0213 15:27:03.203292 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:03.414389 kubelet[1773]: I0213 15:27:03.414327 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.3148941499999998 podStartE2EDuration="5.414283861s" podCreationTimestamp="2025-02-13 15:26:58 +0000 UTC" firstStartedPulling="2025-02-13 15:26:59.330010651 +0000 UTC m=+26.666129730" lastFinishedPulling="2025-02-13 15:27:02.429400322 +0000 UTC m=+29.765519441" observedRunningTime="2025-02-13 15:27:03.410193855 +0000 UTC m=+30.746312974" watchObservedRunningTime="2025-02-13 15:27:03.414283861 +0000 UTC m=+30.750402979" Feb 13 15:27:04.203427 kubelet[1773]: E0213 15:27:04.203381 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:05.204216 kubelet[1773]: E0213 15:27:05.204165 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:06.205323 kubelet[1773]: E0213 15:27:06.205279 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:07.206298 kubelet[1773]: E0213 15:27:07.206240 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:08.207389 kubelet[1773]: E0213 15:27:08.207342 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:09.138530 update_engine[1450]: I20250213 15:27:09.138311 1450 update_attempter.cc:509] Updating boot flags... Feb 13 15:27:09.162595 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3173) Feb 13 15:27:09.199522 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3171) Feb 13 15:27:09.212311 kubelet[1773]: E0213 15:27:09.212273 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:10.212935 kubelet[1773]: E0213 15:27:10.212889 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:11.213870 kubelet[1773]: E0213 15:27:11.213829 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:12.214273 kubelet[1773]: E0213 15:27:12.214228 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:12.889211 kubelet[1773]: I0213 15:27:12.889173 1773 topology_manager.go:215] "Topology Admit Handler" podUID="8290d5a2-a2b1-4ab0-9c94-f5176857c8f3" podNamespace="default" podName="test-pod-1" Feb 13 15:27:12.894581 systemd[1]: Created slice kubepods-besteffort-pod8290d5a2_a2b1_4ab0_9c94_f5176857c8f3.slice - libcontainer container kubepods-besteffort-pod8290d5a2_a2b1_4ab0_9c94_f5176857c8f3.slice. 
Feb 13 15:27:12.910877 kubelet[1773]: I0213 15:27:12.910824 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bb406ec6-4743-49fa-83bd-8b68d2336202\" (UniqueName: \"kubernetes.io/nfs/8290d5a2-a2b1-4ab0-9c94-f5176857c8f3-pvc-bb406ec6-4743-49fa-83bd-8b68d2336202\") pod \"test-pod-1\" (UID: \"8290d5a2-a2b1-4ab0-9c94-f5176857c8f3\") " pod="default/test-pod-1" Feb 13 15:27:12.911124 kubelet[1773]: I0213 15:27:12.911092 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bdns\" (UniqueName: \"kubernetes.io/projected/8290d5a2-a2b1-4ab0-9c94-f5176857c8f3-kube-api-access-6bdns\") pod \"test-pod-1\" (UID: \"8290d5a2-a2b1-4ab0-9c94-f5176857c8f3\") " pod="default/test-pod-1" Feb 13 15:27:13.033593 kernel: FS-Cache: Loaded Feb 13 15:27:13.057747 kernel: RPC: Registered named UNIX socket transport module. Feb 13 15:27:13.057826 kernel: RPC: Registered udp transport module. Feb 13 15:27:13.057845 kernel: RPC: Registered tcp transport module. Feb 13 15:27:13.058669 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 15:27:13.058735 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 15:27:13.215059 kubelet[1773]: E0213 15:27:13.214934 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:13.216652 kernel: NFS: Registering the id_resolver key type Feb 13 15:27:13.216746 kernel: Key type id_resolver registered Feb 13 15:27:13.216764 kernel: Key type id_legacy registered Feb 13 15:27:13.242319 nfsidmap[3196]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:27:13.246672 nfsidmap[3199]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 15:27:13.497947 containerd[1470]: time="2025-02-13T15:27:13.497825297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8290d5a2-a2b1-4ab0-9c94-f5176857c8f3,Namespace:default,Attempt:0,}" Feb 13 15:27:13.523602 systemd-networkd[1390]: lxc0265ec27d11e: Link UP Feb 13 15:27:13.532538 kernel: eth0: renamed from tmp15bfd Feb 13 15:27:13.539167 systemd-networkd[1390]: lxc0265ec27d11e: Gained carrier Feb 13 15:27:13.720203 containerd[1470]: time="2025-02-13T15:27:13.719962782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:13.720203 containerd[1470]: time="2025-02-13T15:27:13.720024398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:13.720203 containerd[1470]: time="2025-02-13T15:27:13.720035154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:13.720203 containerd[1470]: time="2025-02-13T15:27:13.720121840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:13.739776 systemd[1]: Started cri-containerd-15bfd243981338a1600df469450798a2e657aa5b8714edd235eea5b3fa66256f.scope - libcontainer container 15bfd243981338a1600df469450798a2e657aa5b8714edd235eea5b3fa66256f. 
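The two nfsidmap warnings above fire because the NFSv4 owner string root@nfs-server-provisioner.default.svc.cluster.local names a domain that is not the node's configured NFSv4 domain, 'localdomain', so the name cannot be mapped to a local uid or gid. A rough sketch of the comparison behind that message, assuming a plain split on '@'; the real check lives in libnfsidmap's nsswitch plugin and is more involved than this:

    # Simplified illustration of the domain check behind the nfsidmap warnings
    # above: the part after '@' must match the locally configured NFSv4 domain,
    # otherwise the name "does not map" and id mapping typically falls back to
    # the nobody user. This is not libnfsidmap's code, only the shape of the check.
    def maps_into_domain(owner: str, local_domain: str) -> bool:
        name, _, domain = owner.partition("@")
        return bool(name) and domain.lower() == local_domain.lower()

    owner = "root@nfs-server-provisioner.default.svc.cluster.local"
    print(maps_into_domain(owner, "localdomain"))   # False, hence the warning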
Feb 13 15:27:13.752186 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 15:27:13.779259 containerd[1470]: time="2025-02-13T15:27:13.779033420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8290d5a2-a2b1-4ab0-9c94-f5176857c8f3,Namespace:default,Attempt:0,} returns sandbox id \"15bfd243981338a1600df469450798a2e657aa5b8714edd235eea5b3fa66256f\"" Feb 13 15:27:13.780913 containerd[1470]: time="2025-02-13T15:27:13.780872735Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 15:27:14.143797 containerd[1470]: time="2025-02-13T15:27:14.143743388Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:14.144248 containerd[1470]: time="2025-02-13T15:27:14.144197700Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 15:27:14.147341 containerd[1470]: time="2025-02-13T15:27:14.147296155Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 366.375719ms" Feb 13 15:27:14.147341 containerd[1470]: time="2025-02-13T15:27:14.147333422Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"" Feb 13 15:27:14.148997 containerd[1470]: time="2025-02-13T15:27:14.148944466Z" level=info msg="CreateContainer within sandbox \"15bfd243981338a1600df469450798a2e657aa5b8714edd235eea5b3fa66256f\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 15:27:14.159375 containerd[1470]: time="2025-02-13T15:27:14.159328709Z" level=info msg="CreateContainer within sandbox \"15bfd243981338a1600df469450798a2e657aa5b8714edd235eea5b3fa66256f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1b4ff5d14096315c56945ed6cb917b997b1430915e2f6a38aeb383a9317eb48f\"" Feb 13 15:27:14.159842 containerd[1470]: time="2025-02-13T15:27:14.159819568Z" level=info msg="StartContainer for \"1b4ff5d14096315c56945ed6cb917b997b1430915e2f6a38aeb383a9317eb48f\"" Feb 13 15:27:14.185648 systemd[1]: Started cri-containerd-1b4ff5d14096315c56945ed6cb917b997b1430915e2f6a38aeb383a9317eb48f.scope - libcontainer container 1b4ff5d14096315c56945ed6cb917b997b1430915e2f6a38aeb383a9317eb48f. 
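The nginx pull for test-pod-1 above returns in roughly 366 ms with only 61 bytes read and an ImageUpdate (rather than ImageCreate) event: the image content is already in containerd's store from the pull at 15:26:55, so very little has to be fetched again. A minimal sketch contrasting the two pulls with the byte counts and durations quoted in the log; the labels are illustrative:

    # Contrast the first nginx pull with the re-pull for test-pod-1, using the
    # "bytes read" and pull durations reported by containerd in this journal.
    first_pull = {"bytes_read": 69_693_086, "seconds": 3.440291331}   # cold pull
    second_pull = {"bytes_read": 61, "seconds": 0.366375719}          # content cached

    for label, p in (("cold", first_pull), ("cached", second_pull)):
        print(f"{label:>6}: {p['bytes_read']:>11,} bytes read in {p['seconds'] * 1000:.0f} ms")
    # ->   cold:  69,693,086 bytes read in 3440 ms
    # -> cached:          61 bytes read in 366 ms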
Feb 13 15:27:14.188275 kubelet[1773]: E0213 15:27:14.188247 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:14.206520 containerd[1470]: time="2025-02-13T15:27:14.206443860Z" level=info msg="StartContainer for \"1b4ff5d14096315c56945ed6cb917b997b1430915e2f6a38aeb383a9317eb48f\" returns successfully" Feb 13 15:27:14.216118 kubelet[1773]: E0213 15:27:14.216061 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:15.018646 systemd-networkd[1390]: lxc0265ec27d11e: Gained IPv6LL Feb 13 15:27:15.216549 kubelet[1773]: E0213 15:27:15.216486 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:16.216846 kubelet[1773]: E0213 15:27:16.216799 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:17.187027 kubelet[1773]: I0213 15:27:17.186605 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.81926682 podStartE2EDuration="19.186565742s" podCreationTimestamp="2025-02-13 15:26:58 +0000 UTC" firstStartedPulling="2025-02-13 15:27:13.780267893 +0000 UTC m=+41.116387012" lastFinishedPulling="2025-02-13 15:27:14.147566855 +0000 UTC m=+41.483685934" observedRunningTime="2025-02-13 15:27:14.42560276 +0000 UTC m=+41.761721879" watchObservedRunningTime="2025-02-13 15:27:17.186565742 +0000 UTC m=+44.522684861" Feb 13 15:27:17.217127 kubelet[1773]: E0213 15:27:17.217077 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:17.267266 containerd[1470]: time="2025-02-13T15:27:17.266530157Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 15:27:17.271980 containerd[1470]: time="2025-02-13T15:27:17.271941270Z" level=info msg="StopContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" with timeout 2 (s)" Feb 13 15:27:17.275710 containerd[1470]: time="2025-02-13T15:27:17.275656539Z" level=info msg="Stop container \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" with signal terminated" Feb 13 15:27:17.281393 systemd-networkd[1390]: lxc_health: Link DOWN Feb 13 15:27:17.281401 systemd-networkd[1390]: lxc_health: Lost carrier Feb 13 15:27:17.310175 systemd[1]: cri-containerd-f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e.scope: Deactivated successfully. Feb 13 15:27:17.310447 systemd[1]: cri-containerd-f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e.scope: Consumed 6.499s CPU time. Feb 13 15:27:17.339567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e-rootfs.mount: Deactivated successfully. 
Feb 13 15:27:17.349618 containerd[1470]: time="2025-02-13T15:27:17.349533687Z" level=info msg="shim disconnected" id=f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e namespace=k8s.io Feb 13 15:27:17.349618 containerd[1470]: time="2025-02-13T15:27:17.349614703Z" level=warning msg="cleaning up after shim disconnected" id=f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e namespace=k8s.io Feb 13 15:27:17.349618 containerd[1470]: time="2025-02-13T15:27:17.349625899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:17.370368 containerd[1470]: time="2025-02-13T15:27:17.370322359Z" level=info msg="StopContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" returns successfully" Feb 13 15:27:17.371159 containerd[1470]: time="2025-02-13T15:27:17.371115237Z" level=info msg="StopPodSandbox for \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\"" Feb 13 15:27:17.374546 containerd[1470]: time="2025-02-13T15:27:17.374500207Z" level=info msg="Container to stop \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:27:17.374546 containerd[1470]: time="2025-02-13T15:27:17.374545033Z" level=info msg="Container to stop \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:27:17.374680 containerd[1470]: time="2025-02-13T15:27:17.374556789Z" level=info msg="Container to stop \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:27:17.374680 containerd[1470]: time="2025-02-13T15:27:17.374566426Z" level=info msg="Container to stop \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:27:17.374680 containerd[1470]: time="2025-02-13T15:27:17.374574584Z" level=info msg="Container to stop \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 13 15:27:17.376477 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5-shm.mount: Deactivated successfully. Feb 13 15:27:17.379792 systemd[1]: cri-containerd-2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5.scope: Deactivated successfully. Feb 13 15:27:17.396217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5-rootfs.mount: Deactivated successfully. 
Feb 13 15:27:17.401179 containerd[1470]: time="2025-02-13T15:27:17.401120622Z" level=info msg="shim disconnected" id=2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5 namespace=k8s.io Feb 13 15:27:17.401554 containerd[1470]: time="2025-02-13T15:27:17.401368947Z" level=warning msg="cleaning up after shim disconnected" id=2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5 namespace=k8s.io Feb 13 15:27:17.401554 containerd[1470]: time="2025-02-13T15:27:17.401384622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:17.414380 containerd[1470]: time="2025-02-13T15:27:17.414220394Z" level=info msg="TearDown network for sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" successfully" Feb 13 15:27:17.414380 containerd[1470]: time="2025-02-13T15:27:17.414258503Z" level=info msg="StopPodSandbox for \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" returns successfully" Feb 13 15:27:17.423870 kubelet[1773]: I0213 15:27:17.423820 1773 scope.go:117] "RemoveContainer" containerID="f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e" Feb 13 15:27:17.425804 containerd[1470]: time="2025-02-13T15:27:17.425763840Z" level=info msg="RemoveContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\"" Feb 13 15:27:17.428760 containerd[1470]: time="2025-02-13T15:27:17.428699186Z" level=info msg="RemoveContainer for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" returns successfully" Feb 13 15:27:17.429085 kubelet[1773]: I0213 15:27:17.429050 1773 scope.go:117] "RemoveContainer" containerID="0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86" Feb 13 15:27:17.430193 containerd[1470]: time="2025-02-13T15:27:17.430164100Z" level=info msg="RemoveContainer for \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\"" Feb 13 15:27:17.444241 containerd[1470]: time="2025-02-13T15:27:17.444127649Z" level=info msg="RemoveContainer for \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\" returns successfully" Feb 13 15:27:17.444472 kubelet[1773]: I0213 15:27:17.444436 1773 scope.go:117] "RemoveContainer" containerID="51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9" Feb 13 15:27:17.446041 containerd[1470]: time="2025-02-13T15:27:17.445756513Z" level=info msg="RemoveContainer for \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\"" Feb 13 15:27:17.448206 containerd[1470]: time="2025-02-13T15:27:17.448132150Z" level=info msg="RemoveContainer for \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\" returns successfully" Feb 13 15:27:17.448556 kubelet[1773]: I0213 15:27:17.448532 1773 scope.go:117] "RemoveContainer" containerID="5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01" Feb 13 15:27:17.449813 containerd[1470]: time="2025-02-13T15:27:17.449780688Z" level=info msg="RemoveContainer for \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\"" Feb 13 15:27:17.452155 containerd[1470]: time="2025-02-13T15:27:17.452115337Z" level=info msg="RemoveContainer for \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\" returns successfully" Feb 13 15:27:17.452369 kubelet[1773]: I0213 15:27:17.452329 1773 scope.go:117] "RemoveContainer" containerID="e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf" Feb 13 15:27:17.453418 containerd[1470]: time="2025-02-13T15:27:17.453388230Z" level=info msg="RemoveContainer for 
\"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\"" Feb 13 15:27:17.455683 containerd[1470]: time="2025-02-13T15:27:17.455645023Z" level=info msg="RemoveContainer for \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\" returns successfully" Feb 13 15:27:17.455994 kubelet[1773]: I0213 15:27:17.455868 1773 scope.go:117] "RemoveContainer" containerID="f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e" Feb 13 15:27:17.456261 containerd[1470]: time="2025-02-13T15:27:17.456222567Z" level=error msg="ContainerStatus for \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\": not found" Feb 13 15:27:17.456414 kubelet[1773]: E0213 15:27:17.456392 1773 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\": not found" containerID="f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e" Feb 13 15:27:17.456515 kubelet[1773]: I0213 15:27:17.456484 1773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e"} err="failed to get container status \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8d1f4e9a60066547cb3dfa9d21debf035c10dc255e4347ec5eeff9e173b730e\": not found" Feb 13 15:27:17.456543 kubelet[1773]: I0213 15:27:17.456519 1773 scope.go:117] "RemoveContainer" containerID="0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86" Feb 13 15:27:17.456737 containerd[1470]: time="2025-02-13T15:27:17.456697022Z" level=error msg="ContainerStatus for \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\": not found" Feb 13 15:27:17.456999 kubelet[1773]: E0213 15:27:17.456873 1773 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\": not found" containerID="0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86" Feb 13 15:27:17.456999 kubelet[1773]: I0213 15:27:17.456907 1773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86"} err="failed to get container status \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\": rpc error: code = NotFound desc = an error occurred when try to find container \"0b1a64c7f7fbca5c06605d51f71a9eafbe4962a9ed8adbfd98c0463ae689ec86\": not found" Feb 13 15:27:17.456999 kubelet[1773]: I0213 15:27:17.456921 1773 scope.go:117] "RemoveContainer" containerID="51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9" Feb 13 15:27:17.457309 containerd[1470]: time="2025-02-13T15:27:17.457243176Z" level=error msg="ContainerStatus for \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\": not found" Feb 13 15:27:17.457402 kubelet[1773]: E0213 15:27:17.457378 1773 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\": not found" containerID="51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9" Feb 13 15:27:17.457442 kubelet[1773]: I0213 15:27:17.457413 1773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9"} err="failed to get container status \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"51facd003bf550e73af64571370134ac488f7fbaf3e0b4042834bc9078ec92e9\": not found" Feb 13 15:27:17.457442 kubelet[1773]: I0213 15:27:17.457425 1773 scope.go:117] "RemoveContainer" containerID="5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01" Feb 13 15:27:17.457637 containerd[1470]: time="2025-02-13T15:27:17.457606346Z" level=error msg="ContainerStatus for \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\": not found" Feb 13 15:27:17.457851 kubelet[1773]: E0213 15:27:17.457742 1773 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\": not found" containerID="5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01" Feb 13 15:27:17.457851 kubelet[1773]: I0213 15:27:17.457774 1773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01"} err="failed to get container status \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b315e23467c2c7a8ccd1784a45d381cdfca4c99dc7a22131aa28869ba9ace01\": not found" Feb 13 15:27:17.457851 kubelet[1773]: I0213 15:27:17.457784 1773 scope.go:117] "RemoveContainer" containerID="e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf" Feb 13 15:27:17.458031 containerd[1470]: time="2025-02-13T15:27:17.457946722Z" level=error msg="ContainerStatus for \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\": not found" Feb 13 15:27:17.458153 kubelet[1773]: E0213 15:27:17.458133 1773 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\": not found" containerID="e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf" Feb 13 15:27:17.458186 kubelet[1773]: I0213 15:27:17.458170 1773 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf"} err="failed to get 
container status \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e89246ea7da920697ee75d74417d4f9422835a7e569d8e855df380ff62f16adf\": not found" Feb 13 15:27:17.535519 kubelet[1773]: I0213 15:27:17.535427 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-run\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.535519 kubelet[1773]: I0213 15:27:17.535475 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-etc-cni-netd\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.535519 kubelet[1773]: I0213 15:27:17.535517 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wstxb\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-kube-api-access-wstxb\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536046 kubelet[1773]: I0213 15:27:17.535540 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-hubble-tls\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536046 kubelet[1773]: I0213 15:27:17.535545 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.536046 kubelet[1773]: I0213 15:27:17.535558 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-xtables-lock\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536046 kubelet[1773]: I0213 15:27:17.535586 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.536046 kubelet[1773]: I0213 15:27:17.535610 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535618 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-config-path\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535649 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18ee5808-fa7d-4db2-acc6-381cd203a724-clustermesh-secrets\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535670 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-cgroup\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535689 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-net\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535707 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-kernel\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536150 kubelet[1773]: I0213 15:27:17.535724 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cni-path\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535741 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-lib-modules\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535759 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-hostproc\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535776 1773 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-bpf-maps\") pod \"18ee5808-fa7d-4db2-acc6-381cd203a724\" (UID: \"18ee5808-fa7d-4db2-acc6-381cd203a724\") " Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535806 1773 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-xtables-lock\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535816 1773 
reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-etc-cni-netd\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535826 1773 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-run\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.536265 kubelet[1773]: I0213 15:27:17.535847 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538252 kubelet[1773]: I0213 15:27:17.537846 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538252 kubelet[1773]: I0213 15:27:17.537877 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538252 kubelet[1773]: I0213 15:27:17.537847 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cni-path" (OuterVolumeSpecName: "cni-path") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538252 kubelet[1773]: I0213 15:27:17.537907 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538252 kubelet[1773]: I0213 15:27:17.537911 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538435 kubelet[1773]: I0213 15:27:17.537948 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-hostproc" (OuterVolumeSpecName: "hostproc") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 13 15:27:17.538435 kubelet[1773]: I0213 15:27:17.538184 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 13 15:27:17.541384 kubelet[1773]: I0213 15:27:17.541335 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:27:17.542703 systemd[1]: var-lib-kubelet-pods-18ee5808\x2dfa7d\x2d4db2\x2dacc6\x2d381cd203a724-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwstxb.mount: Deactivated successfully. Feb 13 15:27:17.543002 kubelet[1773]: I0213 15:27:17.542953 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-kube-api-access-wstxb" (OuterVolumeSpecName: "kube-api-access-wstxb") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "kube-api-access-wstxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 13 15:27:17.543131 systemd[1]: var-lib-kubelet-pods-18ee5808\x2dfa7d\x2d4db2\x2dacc6\x2d381cd203a724-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 13 15:27:17.543432 kubelet[1773]: I0213 15:27:17.543396 1773 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18ee5808-fa7d-4db2-acc6-381cd203a724-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "18ee5808-fa7d-4db2-acc6-381cd203a724" (UID: "18ee5808-fa7d-4db2-acc6-381cd203a724"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636667 1773 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-wstxb\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-kube-api-access-wstxb\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636704 1773 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/18ee5808-fa7d-4db2-acc6-381cd203a724-hubble-tls\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636716 1773 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-config-path\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636726 1773 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/18ee5808-fa7d-4db2-acc6-381cd203a724-clustermesh-secrets\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636735 1773 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cilium-cgroup\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636745 1773 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-kernel\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636754 1773 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-host-proc-sys-net\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.636740 kubelet[1773]: I0213 15:27:17.636763 1773 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-cni-path\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.637025 kubelet[1773]: I0213 15:27:17.636776 1773 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-lib-modules\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.637025 kubelet[1773]: I0213 15:27:17.636785 1773 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-hostproc\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:17.637025 kubelet[1773]: I0213 15:27:17.636793 1773 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/18ee5808-fa7d-4db2-acc6-381cd203a724-bpf-maps\") on node \"10.0.0.59\" DevicePath \"\"" Feb 13 15:27:18.218190 kubelet[1773]: E0213 15:27:18.218119 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:18.240984 systemd[1]: var-lib-kubelet-pods-18ee5808\x2dfa7d\x2d4db2\x2dacc6\x2d381cd203a724-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 13 15:27:18.334266 systemd[1]: Removed slice kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice - libcontainer container kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice. Feb 13 15:27:18.334360 systemd[1]: kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice: Consumed 6.643s CPU time. Feb 13 15:27:19.218549 kubelet[1773]: E0213 15:27:19.218463 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:19.335999 kubelet[1773]: E0213 15:27:19.335954 1773 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:27:19.959973 kubelet[1773]: I0213 15:27:19.959921 1773 topology_manager.go:215] "Topology Admit Handler" podUID="49df992f-d8c0-4353-95f8-91366ff3d921" podNamespace="kube-system" podName="cilium-operator-5cc964979-ccft5" Feb 13 15:27:19.959973 kubelet[1773]: E0213 15:27:19.959979 1773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="mount-cgroup" Feb 13 15:27:19.959973 kubelet[1773]: E0213 15:27:19.959990 1773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="mount-bpf-fs" Feb 13 15:27:19.960160 kubelet[1773]: E0213 15:27:19.959998 1773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="cilium-agent" Feb 13 15:27:19.960160 kubelet[1773]: E0213 15:27:19.960005 1773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="apply-sysctl-overwrites" Feb 13 15:27:19.960160 kubelet[1773]: E0213 15:27:19.960012 1773 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="clean-cilium-state" Feb 13 15:27:19.960160 kubelet[1773]: I0213 15:27:19.960029 1773 memory_manager.go:354] "RemoveStaleState removing state" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" containerName="cilium-agent" Feb 13 15:27:19.965450 kubelet[1773]: I0213 15:27:19.965418 1773 topology_manager.go:215] "Topology Admit Handler" podUID="0c02d2f0-8506-4034-9fba-1ffa37e406ec" podNamespace="kube-system" podName="cilium-4td9w" Feb 13 15:27:19.967659 systemd[1]: Created slice kubepods-besteffort-pod49df992f_d8c0_4353_95f8_91366ff3d921.slice - libcontainer container kubepods-besteffort-pod49df992f_d8c0_4353_95f8_91366ff3d921.slice. Feb 13 15:27:19.972939 systemd[1]: Created slice kubepods-burstable-pod0c02d2f0_8506_4034_9fba_1ffa37e406ec.slice - libcontainer container kubepods-burstable-pod0c02d2f0_8506_4034_9fba_1ffa37e406ec.slice. 
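Note on the "Removed slice" / "Created slice" entries: with the systemd cgroup driver each pod gets a transient slice whose name embeds its QoS class and UID, with dashes in the UID escaped to underscores, and systemd reports the CPU time the slice consumed when it is removed (6.643s here). A small sketch, assuming only the naming pattern visible in the log; the kubelet's real cgroup manager handles more cases.

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the slice naming visible in the log: the QoS
    // class is embedded and dashes in the pod UID become underscores.
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        // UIDs taken from the log entries above.
        fmt.Println(podSliceName("burstable", "18ee5808-fa7d-4db2-acc6-381cd203a724"))
        // kubepods-burstable-pod18ee5808_fa7d_4db2_acc6_381cd203a724.slice
        fmt.Println(podSliceName("besteffort", "49df992f-d8c0-4353-95f8-91366ff3d921"))
        // kubepods-besteffort-pod49df992f_d8c0_4353_95f8_91366ff3d921.slice
    }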
Feb 13 15:27:20.049329 kubelet[1773]: I0213 15:27:20.049272 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5zwb\" (UniqueName: \"kubernetes.io/projected/49df992f-d8c0-4353-95f8-91366ff3d921-kube-api-access-v5zwb\") pod \"cilium-operator-5cc964979-ccft5\" (UID: \"49df992f-d8c0-4353-95f8-91366ff3d921\") " pod="kube-system/cilium-operator-5cc964979-ccft5" Feb 13 15:27:20.049329 kubelet[1773]: I0213 15:27:20.049319 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-lib-modules\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049516 kubelet[1773]: I0213 15:27:20.049373 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7xfw\" (UniqueName: \"kubernetes.io/projected/0c02d2f0-8506-4034-9fba-1ffa37e406ec-kube-api-access-r7xfw\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049516 kubelet[1773]: I0213 15:27:20.049431 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c02d2f0-8506-4034-9fba-1ffa37e406ec-clustermesh-secrets\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049516 kubelet[1773]: I0213 15:27:20.049478 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c02d2f0-8506-4034-9fba-1ffa37e406ec-cilium-config-path\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049591 kubelet[1773]: I0213 15:27:20.049534 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-cilium-run\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049591 kubelet[1773]: I0213 15:27:20.049586 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-cni-path\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049636 kubelet[1773]: I0213 15:27:20.049609 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-etc-cni-netd\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049636 kubelet[1773]: I0213 15:27:20.049629 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-host-proc-sys-net\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049673 kubelet[1773]: I0213 15:27:20.049660 1773 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-host-proc-sys-kernel\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049695 kubelet[1773]: I0213 15:27:20.049677 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-bpf-maps\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049717 kubelet[1773]: I0213 15:27:20.049696 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c02d2f0-8506-4034-9fba-1ffa37e406ec-cilium-ipsec-secrets\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049739 kubelet[1773]: I0213 15:27:20.049724 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c02d2f0-8506-4034-9fba-1ffa37e406ec-hubble-tls\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049758 kubelet[1773]: I0213 15:27:20.049745 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-hostproc\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049781 kubelet[1773]: I0213 15:27:20.049772 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-cilium-cgroup\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049805 kubelet[1773]: I0213 15:27:20.049793 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c02d2f0-8506-4034-9fba-1ffa37e406ec-xtables-lock\") pod \"cilium-4td9w\" (UID: \"0c02d2f0-8506-4034-9fba-1ffa37e406ec\") " pod="kube-system/cilium-4td9w" Feb 13 15:27:20.049825 kubelet[1773]: I0213 15:27:20.049813 1773 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49df992f-d8c0-4353-95f8-91366ff3d921-cilium-config-path\") pod \"cilium-operator-5cc964979-ccft5\" (UID: \"49df992f-d8c0-4353-95f8-91366ff3d921\") " pod="kube-system/cilium-operator-5cc964979-ccft5" Feb 13 15:27:20.219340 kubelet[1773]: E0213 15:27:20.219223 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:20.271433 kubelet[1773]: E0213 15:27:20.270894 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.271867 containerd[1470]: time="2025-02-13T15:27:20.271799176Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-5cc964979-ccft5,Uid:49df992f-d8c0-4353-95f8-91366ff3d921,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:20.287650 kubelet[1773]: E0213 15:27:20.287596 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.288196 containerd[1470]: time="2025-02-13T15:27:20.288125760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4td9w,Uid:0c02d2f0-8506-4034-9fba-1ffa37e406ec,Namespace:kube-system,Attempt:0,}" Feb 13 15:27:20.292173 containerd[1470]: time="2025-02-13T15:27:20.291557859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:20.292173 containerd[1470]: time="2025-02-13T15:27:20.291624763Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:20.292173 containerd[1470]: time="2025-02-13T15:27:20.291640359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:20.292173 containerd[1470]: time="2025-02-13T15:27:20.291731296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:20.306886 containerd[1470]: time="2025-02-13T15:27:20.306731733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 15:27:20.306886 containerd[1470]: time="2025-02-13T15:27:20.306799156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 15:27:20.306886 containerd[1470]: time="2025-02-13T15:27:20.306810153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:20.307155 containerd[1470]: time="2025-02-13T15:27:20.306891973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 15:27:20.312677 systemd[1]: Started cri-containerd-d426247bf693e2aad573f9c7bc54c173b80b1e4f45b6b7736cf3373fc97854bd.scope - libcontainer container d426247bf693e2aad573f9c7bc54c173b80b1e4f45b6b7736cf3373fc97854bd. Feb 13 15:27:20.319293 systemd[1]: Started cri-containerd-ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713.scope - libcontainer container ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713. 
Feb 13 15:27:20.331366 kubelet[1773]: I0213 15:27:20.331336 1773 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="18ee5808-fa7d-4db2-acc6-381cd203a724" path="/var/lib/kubelet/pods/18ee5808-fa7d-4db2-acc6-381cd203a724/volumes" Feb 13 15:27:20.342642 containerd[1470]: time="2025-02-13T15:27:20.342337442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4td9w,Uid:0c02d2f0-8506-4034-9fba-1ffa37e406ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\"" Feb 13 15:27:20.343799 kubelet[1773]: E0213 15:27:20.343767 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.351448 containerd[1470]: time="2025-02-13T15:27:20.351389171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-ccft5,Uid:49df992f-d8c0-4353-95f8-91366ff3d921,Namespace:kube-system,Attempt:0,} returns sandbox id \"d426247bf693e2aad573f9c7bc54c173b80b1e4f45b6b7736cf3373fc97854bd\"" Feb 13 15:27:20.352236 kubelet[1773]: E0213 15:27:20.352209 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.353163 containerd[1470]: time="2025-02-13T15:27:20.353120297Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 13 15:27:20.361958 containerd[1470]: time="2025-02-13T15:27:20.361826593Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 13 15:27:20.371770 containerd[1470]: time="2025-02-13T15:27:20.371714633Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e\"" Feb 13 15:27:20.372536 containerd[1470]: time="2025-02-13T15:27:20.372462965Z" level=info msg="StartContainer for \"4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e\"" Feb 13 15:27:20.400860 systemd[1]: Started cri-containerd-4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e.scope - libcontainer container 4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e. Feb 13 15:27:20.423300 containerd[1470]: time="2025-02-13T15:27:20.423098904Z" level=info msg="StartContainer for \"4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e\" returns successfully" Feb 13 15:27:20.432116 kubelet[1773]: E0213 15:27:20.431658 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:20.494331 systemd[1]: cri-containerd-4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e.scope: Deactivated successfully. 
Feb 13 15:27:20.527320 containerd[1470]: time="2025-02-13T15:27:20.527253538Z" level=info msg="shim disconnected" id=4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e namespace=k8s.io Feb 13 15:27:20.527320 containerd[1470]: time="2025-02-13T15:27:20.527312043Z" level=warning msg="cleaning up after shim disconnected" id=4a624402c1427823f68fe32297709df41d7568dff9c0c645cc22c726dd81dd1e namespace=k8s.io Feb 13 15:27:20.527320 containerd[1470]: time="2025-02-13T15:27:20.527323480Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:21.218559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2390388303.mount: Deactivated successfully. Feb 13 15:27:21.220286 kubelet[1773]: E0213 15:27:21.220214 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:21.437718 kubelet[1773]: E0213 15:27:21.437679 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:21.440218 containerd[1470]: time="2025-02-13T15:27:21.440049889Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 13 15:27:21.471685 containerd[1470]: time="2025-02-13T15:27:21.471484857Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:21.477325 containerd[1470]: time="2025-02-13T15:27:21.477200753Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Feb 13 15:27:21.477384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3023307270.mount: Deactivated successfully. 
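Note on the recurring "Nameserver limits exceeded" errors: when building a pod's resolv.conf the kubelet applies at most three nameservers (the classic glibc resolver limit) and drops the rest, which is why the applied line is trimmed to 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation; the fourth server below is hypothetical.

    package main

    import "fmt"

    // maxNameservers mirrors the three-nameserver limit the kubelet enforces
    // when writing a pod's resolv.conf; extra entries are dropped and the
    // warning seen in the log is emitted.
    const maxNameservers = 3

    func applyNameserverLimit(servers []string) (applied, omitted []string) {
        if len(servers) <= maxNameservers {
            return servers, nil
        }
        return servers[:maxNameservers], servers[maxNameservers:]
    }

    func main() {
        // The applied line from the log plus a hypothetical extra upstream.
        applied, omitted := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
        fmt.Println("applied:", applied, "omitted:", omitted)
    }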
Feb 13 15:27:21.479256 containerd[1470]: time="2025-02-13T15:27:21.479194244Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 15:27:21.481299 containerd[1470]: time="2025-02-13T15:27:21.481166940Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.127993017s" Feb 13 15:27:21.481299 containerd[1470]: time="2025-02-13T15:27:21.481221048Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Feb 13 15:27:21.481782 containerd[1470]: time="2025-02-13T15:27:21.481734007Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46\"" Feb 13 15:27:21.482195 containerd[1470]: time="2025-02-13T15:27:21.482174583Z" level=info msg="StartContainer for \"b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46\"" Feb 13 15:27:21.483305 containerd[1470]: time="2025-02-13T15:27:21.483262688Z" level=info msg="CreateContainer within sandbox \"d426247bf693e2aad573f9c7bc54c173b80b1e4f45b6b7736cf3373fc97854bd\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 13 15:27:21.496187 containerd[1470]: time="2025-02-13T15:27:21.496091671Z" level=info msg="CreateContainer within sandbox \"d426247bf693e2aad573f9c7bc54c173b80b1e4f45b6b7736cf3373fc97854bd\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"58404171d4a7fdbdc7adc55548ff51562423e54bcd9a48de9e327908dadeed22\"" Feb 13 15:27:21.497121 containerd[1470]: time="2025-02-13T15:27:21.497033969Z" level=info msg="StartContainer for \"58404171d4a7fdbdc7adc55548ff51562423e54bcd9a48de9e327908dadeed22\"" Feb 13 15:27:21.507709 systemd[1]: Started cri-containerd-b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46.scope - libcontainer container b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46. Feb 13 15:27:21.531746 systemd[1]: Started cri-containerd-58404171d4a7fdbdc7adc55548ff51562423e54bcd9a48de9e327908dadeed22.scope - libcontainer container 58404171d4a7fdbdc7adc55548ff51562423e54bcd9a48de9e327908dadeed22. Feb 13 15:27:21.577789 systemd[1]: cri-containerd-b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46.scope: Deactivated successfully. 
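Note on the PullImage entries: the operator image is pinned by both tag and digest (quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f...), and the "Pulled image" entry shows the runtime resolving it to a local image ID with an empty repo tag; when both are present, the digest is what selects the content. A rough illustration of splitting such a reference, hand-rolled on purpose; real code would use a proper reference-parsing library.

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef breaks an image reference of the form repo[:tag][@digest]
    // into its parts. This is a simplification for illustration; it ignores
    // registry ports and other corner cases a real parser handles.
    func splitRef(ref string) (repo, tag, digest string) {
        if i := strings.Index(ref, "@"); i >= 0 {
            ref, digest = ref[:i], ref[i+1:]
        }
        if i := strings.LastIndex(ref, ":"); i >= 0 && !strings.Contains(ref[i+1:], "/") {
            ref, tag = ref[:i], ref[i+1:]
        }
        return ref, tag, digest
    }

    func main() {
        repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
        fmt.Println("repo:  ", repo)   // quay.io/cilium/operator-generic
        fmt.Println("tag:   ", tag)    // v1.12.5
        fmt.Println("digest:", digest) // sha256:b296eb7f...
    }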
Feb 13 15:27:21.636673 containerd[1470]: time="2025-02-13T15:27:21.636606508Z" level=info msg="StartContainer for \"58404171d4a7fdbdc7adc55548ff51562423e54bcd9a48de9e327908dadeed22\" returns successfully" Feb 13 15:27:21.636673 containerd[1470]: time="2025-02-13T15:27:21.636617785Z" level=info msg="StartContainer for \"b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46\" returns successfully" Feb 13 15:27:21.663460 containerd[1470]: time="2025-02-13T15:27:21.663390449Z" level=info msg="shim disconnected" id=b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46 namespace=k8s.io Feb 13 15:27:21.663460 containerd[1470]: time="2025-02-13T15:27:21.663448156Z" level=warning msg="cleaning up after shim disconnected" id=b63dfe410608bf9e7754398b55d3414f1ca61edc9c0d4eaecb592ce5c166ee46 namespace=k8s.io Feb 13 15:27:21.663460 containerd[1470]: time="2025-02-13T15:27:21.663457953Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:22.221303 kubelet[1773]: E0213 15:27:22.221253 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:22.441168 kubelet[1773]: E0213 15:27:22.441119 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:22.443480 containerd[1470]: time="2025-02-13T15:27:22.443429103Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 13 15:27:22.444362 kubelet[1773]: E0213 15:27:22.444006 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:22.462133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2686227051.mount: Deactivated successfully. Feb 13 15:27:22.467548 containerd[1470]: time="2025-02-13T15:27:22.467483240Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e\"" Feb 13 15:27:22.473064 containerd[1470]: time="2025-02-13T15:27:22.472591154Z" level=info msg="StartContainer for \"3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e\"" Feb 13 15:27:22.473182 kubelet[1773]: I0213 15:27:22.473056 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-ccft5" podStartSLOduration=2.344242674 podStartE2EDuration="3.473010782s" podCreationTimestamp="2025-02-13 15:27:19 +0000 UTC" firstStartedPulling="2025-02-13 15:27:20.352781462 +0000 UTC m=+47.688900581" lastFinishedPulling="2025-02-13 15:27:21.48154957 +0000 UTC m=+48.817668689" observedRunningTime="2025-02-13 15:27:22.472992266 +0000 UTC m=+49.809111385" watchObservedRunningTime="2025-02-13 15:27:22.473010782 +0000 UTC m=+49.809129901" Feb 13 15:27:22.499735 systemd[1]: Started cri-containerd-3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e.scope - libcontainer container 3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e. 
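Note on the pod_startup_latency_tracker entry for cilium-operator-5cc964979-ccft5: podStartE2EDuration (3.473010782s) is the time from pod creation to the pod being observed running, while podStartSLOduration excludes image-pull time. Subtracting the pull window (lastFinishedPulling minus firstStartedPulling, about 1.128768108s) from the E2E duration reproduces the logged 2.344242674s. A quick check in Go using the timestamps quoted in the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        mustParse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Values copied from the pod_startup_latency_tracker entry above.
        firstStartedPulling := mustParse("2025-02-13 15:27:20.352781462 +0000 UTC")
        lastFinishedPulling := mustParse("2025-02-13 15:27:21.48154957 +0000 UTC")
        e2e := 3473010782 * time.Nanosecond // podStartE2EDuration = 3.473010782s

        pull := lastFinishedPulling.Sub(firstStartedPulling)
        fmt.Println("image pull:  ", pull)       // ~1.128768108s
        fmt.Println("SLO duration:", e2e-pull)   // ~2.344242674s, matching the log
    }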
Feb 13 15:27:22.526959 containerd[1470]: time="2025-02-13T15:27:22.526878066Z" level=info msg="StartContainer for \"3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e\" returns successfully" Feb 13 15:27:22.526910 systemd[1]: cri-containerd-3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e.scope: Deactivated successfully. Feb 13 15:27:22.550414 containerd[1470]: time="2025-02-13T15:27:22.550222080Z" level=info msg="shim disconnected" id=3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e namespace=k8s.io Feb 13 15:27:22.550414 containerd[1470]: time="2025-02-13T15:27:22.550281867Z" level=warning msg="cleaning up after shim disconnected" id=3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e namespace=k8s.io Feb 13 15:27:22.550414 containerd[1470]: time="2025-02-13T15:27:22.550292185Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:23.157040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc34d35cb6f5e63632db09662fa8239621fbdba16e18fd38896dfb32d3bd48e-rootfs.mount: Deactivated successfully. Feb 13 15:27:23.221933 kubelet[1773]: E0213 15:27:23.221885 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:23.450948 kubelet[1773]: E0213 15:27:23.450847 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:23.451054 kubelet[1773]: E0213 15:27:23.450989 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:23.455867 containerd[1470]: time="2025-02-13T15:27:23.455811423Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 13 15:27:23.468740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845673175.mount: Deactivated successfully. Feb 13 15:27:23.472884 containerd[1470]: time="2025-02-13T15:27:23.472835264Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c\"" Feb 13 15:27:23.473483 containerd[1470]: time="2025-02-13T15:27:23.473457896Z" level=info msg="StartContainer for \"0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c\"" Feb 13 15:27:23.506717 systemd[1]: Started cri-containerd-0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c.scope - libcontainer container 0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c. Feb 13 15:27:23.526747 systemd[1]: cri-containerd-0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c.scope: Deactivated successfully. 
Feb 13 15:27:23.528194 containerd[1470]: time="2025-02-13T15:27:23.528109121Z" level=info msg="StartContainer for \"0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c\" returns successfully" Feb 13 15:27:23.549371 containerd[1470]: time="2025-02-13T15:27:23.549304220Z" level=info msg="shim disconnected" id=0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c namespace=k8s.io Feb 13 15:27:23.549371 containerd[1470]: time="2025-02-13T15:27:23.549360329Z" level=warning msg="cleaning up after shim disconnected" id=0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c namespace=k8s.io Feb 13 15:27:23.549371 containerd[1470]: time="2025-02-13T15:27:23.549370047Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 15:27:24.157137 systemd[1]: run-containerd-runc-k8s.io-0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c-runc.2VisW0.mount: Deactivated successfully. Feb 13 15:27:24.157237 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e1c995983706b5327515abb054bc194b08d9ea6d312a93fee5ecbc7ec3fc49c-rootfs.mount: Deactivated successfully. Feb 13 15:27:24.222341 kubelet[1773]: E0213 15:27:24.222299 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:24.337620 kubelet[1773]: E0213 15:27:24.337548 1773 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 13 15:27:24.460006 kubelet[1773]: E0213 15:27:24.459878 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:24.462395 containerd[1470]: time="2025-02-13T15:27:24.462312377Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 13 15:27:24.482901 containerd[1470]: time="2025-02-13T15:27:24.482847359Z" level=info msg="CreateContainer within sandbox \"ea46ceb86a2b067edea8df30251f026bc8354d97b203fb799ad282898310e713\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97\"" Feb 13 15:27:24.483439 containerd[1470]: time="2025-02-13T15:27:24.483415009Z" level=info msg="StartContainer for \"c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97\"" Feb 13 15:27:24.508686 systemd[1]: Started cri-containerd-c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97.scope - libcontainer container c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97. 
Feb 13 15:27:24.535006 containerd[1470]: time="2025-02-13T15:27:24.534954262Z" level=info msg="StartContainer for \"c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97\" returns successfully" Feb 13 15:27:24.810519 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Feb 13 15:27:25.175727 kubelet[1773]: I0213 15:27:25.175622 1773 setters.go:568] "Node became not ready" node="10.0.0.59" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:27:25Z","lastTransitionTime":"2025-02-13T15:27:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 13 15:27:25.222645 kubelet[1773]: E0213 15:27:25.222597 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:25.466023 kubelet[1773]: E0213 15:27:25.465913 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:25.482265 kubelet[1773]: I0213 15:27:25.482193 1773 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-4td9w" podStartSLOduration=6.482155226 podStartE2EDuration="6.482155226s" podCreationTimestamp="2025-02-13 15:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:27:25.481834684 +0000 UTC m=+52.817953843" watchObservedRunningTime="2025-02-13 15:27:25.482155226 +0000 UTC m=+52.818274345" Feb 13 15:27:26.223717 kubelet[1773]: E0213 15:27:26.223663 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:26.469225 kubelet[1773]: E0213 15:27:26.469167 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:26.500514 systemd[1]: run-containerd-runc-k8s.io-c4bed943f16408d2351a425c294d94c8cca8f3ed009bee3e8ba272778e29bd97-runc.4v9W9O.mount: Deactivated successfully. 
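Note on the sequence from 15:27:20.42 through 15:27:24.53: the cilium-4td9w containers start strictly one after another (each init container's scope is deactivated and its shim cleaned up before the next is created), and only then does the long-running cilium-agent container come up. The order as recorded in this log is summarized below; the one-line purposes are descriptive glosses, not something the log states.

    package main

    import "fmt"

    // Start order of the cilium-4td9w containers with their (approximate)
    // StartContainer timestamps from the log above.
    var ciliumStartOrder = []struct {
        name, started, purpose string
    }{
        {"mount-cgroup", "15:27:20.423", "init: make the cgroup mount available"},
        {"apply-sysctl-overwrites", "15:27:21.636", "init: adjust node sysctls"},
        {"mount-bpf-fs", "15:27:22.526", "init: mount the BPF filesystem"},
        {"clean-cilium-state", "15:27:23.528", "init: clear stale agent state"},
        {"cilium-agent", "15:27:24.534", "main container"},
    }

    func main() {
        for i, c := range ciliumStartOrder {
            fmt.Printf("%d. %-24s started %s  (%s)\n", i+1, c.name, c.started, c.purpose)
        }
    }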
Feb 13 15:27:27.224299 kubelet[1773]: E0213 15:27:27.224242 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:27.822806 systemd-networkd[1390]: lxc_health: Link UP Feb 13 15:27:27.832946 systemd-networkd[1390]: lxc_health: Gained carrier Feb 13 15:27:28.224485 kubelet[1773]: E0213 15:27:28.224357 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:28.292840 kubelet[1773]: E0213 15:27:28.292806 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:28.471951 kubelet[1773]: E0213 15:27:28.471912 1773 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 15:27:29.224961 kubelet[1773]: E0213 15:27:29.224908 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:29.866711 systemd-networkd[1390]: lxc_health: Gained IPv6LL Feb 13 15:27:30.225643 kubelet[1773]: E0213 15:27:30.225510 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:31.226141 kubelet[1773]: E0213 15:27:31.226089 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:32.226990 kubelet[1773]: E0213 15:27:32.226937 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:33.227598 kubelet[1773]: E0213 15:27:33.227545 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:34.188427 kubelet[1773]: E0213 15:27:34.188385 1773 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 15:27:34.203508 containerd[1470]: time="2025-02-13T15:27:34.203442709Z" level=info msg="StopPodSandbox for \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\"" Feb 13 15:27:34.204028 containerd[1470]: time="2025-02-13T15:27:34.203550105Z" level=info msg="TearDown network for sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" successfully" Feb 13 15:27:34.204028 containerd[1470]: time="2025-02-13T15:27:34.203561985Z" level=info msg="StopPodSandbox for \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" returns successfully" Feb 13 15:27:34.207712 containerd[1470]: time="2025-02-13T15:27:34.207676487Z" level=info msg="RemovePodSandbox for \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\"" Feb 13 15:27:34.207770 containerd[1470]: time="2025-02-13T15:27:34.207722965Z" level=info msg="Forcibly stopping sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\"" Feb 13 15:27:34.207796 containerd[1470]: time="2025-02-13T15:27:34.207780563Z" level=info msg="TearDown network for sandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" successfully" Feb 13 15:27:34.218306 containerd[1470]: time="2025-02-13T15:27:34.218244332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\": an error 
occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 15:27:34.218306 containerd[1470]: time="2025-02-13T15:27:34.218311890Z" level=info msg="RemovePodSandbox \"2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5\" returns successfully" Feb 13 15:27:34.227972 kubelet[1773]: E0213 15:27:34.227931 1773 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
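Note on the closing entries at 15:27:34: this is the kubelet's periodic sandbox garbage collection. The forcible StopPodSandbox on the already torn-down sandbox only produces the "not found ... nil podSandboxStatus" warning, and RemovePodSandbox still returns successfully. A compact sketch of the same pair of CRI calls, under the same assumptions as the earlier ContainerStatus example (default containerd socket, sandbox ID copied from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Sandbox ID copied from the log entries above.
        const sandboxID = "2895570939c231670880d0759b4b7a9e5fe1b6c28e9f7a7093ada78b133970b5"

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        client := runtimeapi.NewRuntimeServiceClient(conn)

        // Stop first (tears down the sandbox network), then remove. The
        // kubelet issues the same pair during sandbox GC; in the log above
        // the forcible stop only warns and removal still succeeds.
        if _, err := client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            fmt.Println("stop:", err)
        }
        if _, err := client.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
            fmt.Println("remove:", err)
        }
        fmt.Println("sandbox removal requested")
    }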