Sep 6 00:06:53.685072 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 6 00:06:53.685091 kernel: Linux version 5.15.190-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Sep 5 23:00:12 -00 2025 Sep 6 00:06:53.685099 kernel: efi: EFI v2.70 by EDK II Sep 6 00:06:53.685104 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Sep 6 00:06:53.685109 kernel: random: crng init done Sep 6 00:06:53.685115 kernel: ACPI: Early table checksum verification disabled Sep 6 00:06:53.685121 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Sep 6 00:06:53.685128 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 6 00:06:53.685133 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685139 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685144 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685149 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685155 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685160 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685168 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685174 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685180 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 6 00:06:53.685185 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 6 00:06:53.685191 kernel: NUMA: Failed to initialise from firmware Sep 6 00:06:53.685197 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:06:53.685203 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Sep 6 00:06:53.685209 kernel: Zone ranges: Sep 6 00:06:53.685215 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:06:53.685222 kernel: DMA32 empty Sep 6 00:06:53.685230 kernel: Normal empty Sep 6 00:06:53.685236 kernel: Movable zone start for each node Sep 6 00:06:53.685241 kernel: Early memory node ranges Sep 6 00:06:53.685247 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Sep 6 00:06:53.685253 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Sep 6 00:06:53.685260 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Sep 6 00:06:53.685266 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Sep 6 00:06:53.685274 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Sep 6 00:06:53.685279 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Sep 6 00:06:53.685285 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Sep 6 00:06:53.685291 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 6 00:06:53.685298 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 6 00:06:53.685304 kernel: psci: probing for conduit method from ACPI. Sep 6 00:06:53.685310 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 6 00:06:53.685316 kernel: psci: Using standard PSCI v0.2 function IDs Sep 6 00:06:53.685322 kernel: psci: Trusted OS migration not required Sep 6 00:06:53.685330 kernel: psci: SMC Calling Convention v1.1 Sep 6 00:06:53.685336 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 6 00:06:53.685344 kernel: ACPI: SRAT not present Sep 6 00:06:53.685351 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Sep 6 00:06:53.685357 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Sep 6 00:06:53.685364 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 6 00:06:53.685370 kernel: Detected PIPT I-cache on CPU0 Sep 6 00:06:53.685377 kernel: CPU features: detected: GIC system register CPU interface Sep 6 00:06:53.685383 kernel: CPU features: detected: Hardware dirty bit management Sep 6 00:06:53.685389 kernel: CPU features: detected: Spectre-v4 Sep 6 00:06:53.685395 kernel: CPU features: detected: Spectre-BHB Sep 6 00:06:53.685402 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 6 00:06:53.685408 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 6 00:06:53.685414 kernel: CPU features: detected: ARM erratum 1418040 Sep 6 00:06:53.685420 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 6 00:06:53.685427 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 6 00:06:53.685432 kernel: Policy zone: DMA Sep 6 00:06:53.685440 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:06:53.685446 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 6 00:06:53.685452 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 6 00:06:53.685458 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 6 00:06:53.685465 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 6 00:06:53.685472 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Sep 6 00:06:53.685478 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 6 00:06:53.685485 kernel: trace event string verifier disabled Sep 6 00:06:53.685491 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 6 00:06:53.685498 kernel: rcu: RCU event tracing is enabled. Sep 6 00:06:53.685504 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 6 00:06:53.685510 kernel: Trampoline variant of Tasks RCU enabled. Sep 6 00:06:53.685517 kernel: Tracing variant of Tasks RCU enabled. Sep 6 00:06:53.685523 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 6 00:06:53.685529 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 6 00:06:53.685536 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 6 00:06:53.685543 kernel: GICv3: 256 SPIs implemented Sep 6 00:06:53.685554 kernel: GICv3: 0 Extended SPIs implemented Sep 6 00:06:53.685561 kernel: GICv3: Distributor has no Range Selector support Sep 6 00:06:53.685567 kernel: Root IRQ handler: gic_handle_irq Sep 6 00:06:53.685574 kernel: GICv3: 16 PPIs implemented Sep 6 00:06:53.685580 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 6 00:06:53.685586 kernel: ACPI: SRAT not present Sep 6 00:06:53.685592 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 6 00:06:53.685599 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Sep 6 00:06:53.685605 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Sep 6 00:06:53.685612 kernel: GICv3: using LPI property table @0x00000000400d0000 Sep 6 00:06:53.685618 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Sep 6 00:06:53.685625 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:06:53.685632 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 6 00:06:53.685639 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 6 00:06:53.685645 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 6 00:06:53.685651 kernel: arm-pv: using stolen time PV Sep 6 00:06:53.685665 kernel: Console: colour dummy device 80x25 Sep 6 00:06:53.685671 kernel: ACPI: Core revision 20210730 Sep 6 00:06:53.685677 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 6 00:06:53.685684 kernel: pid_max: default: 32768 minimum: 301 Sep 6 00:06:53.685690 kernel: LSM: Security Framework initializing Sep 6 00:06:53.685697 kernel: SELinux: Initializing. Sep 6 00:06:53.685704 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:06:53.685710 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 6 00:06:53.685716 kernel: rcu: Hierarchical SRCU implementation. Sep 6 00:06:53.685723 kernel: Platform MSI: ITS@0x8080000 domain created Sep 6 00:06:53.685729 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 6 00:06:53.685735 kernel: Remapping and enabling EFI services. Sep 6 00:06:53.685741 kernel: smp: Bringing up secondary CPUs ... 
Sep 6 00:06:53.685747 kernel: Detected PIPT I-cache on CPU1 Sep 6 00:06:53.685766 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 6 00:06:53.685773 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Sep 6 00:06:53.685780 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:06:53.685786 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 6 00:06:53.685792 kernel: Detected PIPT I-cache on CPU2 Sep 6 00:06:53.685799 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 6 00:06:53.685805 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Sep 6 00:06:53.685817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:06:53.685823 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 6 00:06:53.685829 kernel: Detected PIPT I-cache on CPU3 Sep 6 00:06:53.685837 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 6 00:06:53.685843 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Sep 6 00:06:53.685850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 6 00:06:53.685856 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 6 00:06:53.685867 kernel: smp: Brought up 1 node, 4 CPUs Sep 6 00:06:53.685875 kernel: SMP: Total of 4 processors activated. Sep 6 00:06:53.685881 kernel: CPU features: detected: 32-bit EL0 Support Sep 6 00:06:53.685888 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 6 00:06:53.685894 kernel: CPU features: detected: Common not Private translations Sep 6 00:06:53.685901 kernel: CPU features: detected: CRC32 instructions Sep 6 00:06:53.685908 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 6 00:06:53.685914 kernel: CPU features: detected: LSE atomic instructions Sep 6 00:06:53.685922 kernel: CPU features: detected: Privileged Access Never Sep 6 00:06:53.685930 kernel: CPU features: detected: RAS Extension Support Sep 6 00:06:53.685937 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 6 00:06:53.685944 kernel: CPU: All CPU(s) started at EL1 Sep 6 00:06:53.685950 kernel: alternatives: patching kernel code Sep 6 00:06:53.685958 kernel: devtmpfs: initialized Sep 6 00:06:53.685964 kernel: KASLR enabled Sep 6 00:06:53.685971 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 6 00:06:53.685978 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 6 00:06:53.685985 kernel: pinctrl core: initialized pinctrl subsystem Sep 6 00:06:53.685991 kernel: SMBIOS 3.0.0 present. 
Sep 6 00:06:53.685998 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Sep 6 00:06:53.686004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 6 00:06:53.686011 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 6 00:06:53.686019 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 6 00:06:53.686026 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 6 00:06:53.686032 kernel: audit: initializing netlink subsys (disabled) Sep 6 00:06:53.686040 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 Sep 6 00:06:53.686046 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 6 00:06:53.686053 kernel: cpuidle: using governor menu Sep 6 00:06:53.686059 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 6 00:06:53.686066 kernel: ASID allocator initialised with 32768 entries Sep 6 00:06:53.686072 kernel: ACPI: bus type PCI registered Sep 6 00:06:53.686080 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 6 00:06:53.686087 kernel: Serial: AMBA PL011 UART driver Sep 6 00:06:53.686094 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Sep 6 00:06:53.686101 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Sep 6 00:06:53.686107 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Sep 6 00:06:53.686114 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Sep 6 00:06:53.686121 kernel: cryptd: max_cpu_qlen set to 1000 Sep 6 00:06:53.686127 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 6 00:06:53.686134 kernel: ACPI: Added _OSI(Module Device) Sep 6 00:06:53.686142 kernel: ACPI: Added _OSI(Processor Device) Sep 6 00:06:53.686149 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 6 00:06:53.686155 kernel: ACPI: Added _OSI(Linux-Dell-Video) Sep 6 00:06:53.686161 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Sep 6 00:06:53.686168 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Sep 6 00:06:53.686175 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 6 00:06:53.686196 kernel: ACPI: Interpreter enabled Sep 6 00:06:53.686202 kernel: ACPI: Using GIC for interrupt routing Sep 6 00:06:53.686209 kernel: ACPI: MCFG table detected, 1 entries Sep 6 00:06:53.686217 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 6 00:06:53.686224 kernel: printk: console [ttyAMA0] enabled Sep 6 00:06:53.686230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 6 00:06:53.686359 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 6 00:06:53.686423 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 6 00:06:53.686480 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 6 00:06:53.686537 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 6 00:06:53.686595 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 6 00:06:53.686604 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 6 00:06:53.686619 kernel: PCI host bridge to bus 0000:00 Sep 6 00:06:53.686689 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 6 00:06:53.686742 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 6 00:06:53.687020 kernel: pci_bus 0000:00: root bus 
resource [mem 0x8000000000-0xffffffffff window] Sep 6 00:06:53.687087 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 6 00:06:53.687177 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Sep 6 00:06:53.687247 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 6 00:06:53.687307 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 6 00:06:53.687366 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 6 00:06:53.687427 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:06:53.687487 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 6 00:06:53.687561 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 6 00:06:53.687626 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 6 00:06:53.687679 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 6 00:06:53.687783 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 6 00:06:53.687889 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 6 00:06:53.687901 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 6 00:06:53.687908 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 6 00:06:53.687915 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 6 00:06:53.687922 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 6 00:06:53.687932 kernel: iommu: Default domain type: Translated Sep 6 00:06:53.687939 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 6 00:06:53.687946 kernel: vgaarb: loaded Sep 6 00:06:53.687952 kernel: pps_core: LinuxPPS API ver. 1 registered Sep 6 00:06:53.687959 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Sep 6 00:06:53.687966 kernel: PTP clock support registered Sep 6 00:06:53.687972 kernel: Registered efivars operations Sep 6 00:06:53.687980 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 6 00:06:53.687986 kernel: VFS: Disk quotas dquot_6.6.0 Sep 6 00:06:53.687995 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 6 00:06:53.688001 kernel: pnp: PnP ACPI init Sep 6 00:06:53.688076 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 6 00:06:53.688087 kernel: pnp: PnP ACPI: found 1 devices Sep 6 00:06:53.688094 kernel: NET: Registered PF_INET protocol family Sep 6 00:06:53.688101 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 6 00:06:53.688107 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 6 00:06:53.688114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 6 00:06:53.688123 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 6 00:06:53.688130 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Sep 6 00:06:53.688137 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 6 00:06:53.688143 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:06:53.688150 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 6 00:06:53.688157 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 6 00:06:53.688165 kernel: PCI: CLS 0 bytes, default 64 Sep 6 00:06:53.688172 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 6 00:06:53.688179 kernel: kvm [1]: HYP mode not available Sep 
6 00:06:53.688186 kernel: Initialise system trusted keyrings Sep 6 00:06:53.688193 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 6 00:06:53.688200 kernel: Key type asymmetric registered Sep 6 00:06:53.688207 kernel: Asymmetric key parser 'x509' registered Sep 6 00:06:53.688214 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 6 00:06:53.688220 kernel: io scheduler mq-deadline registered Sep 6 00:06:53.688227 kernel: io scheduler kyber registered Sep 6 00:06:53.688234 kernel: io scheduler bfq registered Sep 6 00:06:53.688240 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 6 00:06:53.688248 kernel: ACPI: button: Power Button [PWRB] Sep 6 00:06:53.688255 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 6 00:06:53.688318 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 6 00:06:53.688327 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 6 00:06:53.688333 kernel: thunder_xcv, ver 1.0 Sep 6 00:06:53.688340 kernel: thunder_bgx, ver 1.0 Sep 6 00:06:53.688346 kernel: nicpf, ver 1.0 Sep 6 00:06:53.688353 kernel: nicvf, ver 1.0 Sep 6 00:06:53.688429 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 6 00:06:53.688486 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:06:53 UTC (1757117213) Sep 6 00:06:53.688495 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 6 00:06:53.688502 kernel: NET: Registered PF_INET6 protocol family Sep 6 00:06:53.688508 kernel: Segment Routing with IPv6 Sep 6 00:06:53.688515 kernel: In-situ OAM (IOAM) with IPv6 Sep 6 00:06:53.688522 kernel: NET: Registered PF_PACKET protocol family Sep 6 00:06:53.688529 kernel: Key type dns_resolver registered Sep 6 00:06:53.688535 kernel: registered taskstats version 1 Sep 6 00:06:53.688543 kernel: Loading compiled-in X.509 certificates Sep 6 00:06:53.688550 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.190-flatcar: 72ab5ba99c2368429c7a4d04fccfc5a39dd84386' Sep 6 00:06:53.688558 kernel: Key type .fscrypt registered Sep 6 00:06:53.688564 kernel: Key type fscrypt-provisioning registered Sep 6 00:06:53.688571 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 6 00:06:53.688578 kernel: ima: Allocated hash algorithm: sha1 Sep 6 00:06:53.688584 kernel: ima: No architecture policies found Sep 6 00:06:53.688591 kernel: clk: Disabling unused clocks Sep 6 00:06:53.688597 kernel: Freeing unused kernel memory: 36416K Sep 6 00:06:53.688605 kernel: Run /init as init process Sep 6 00:06:53.688612 kernel: with arguments: Sep 6 00:06:53.688619 kernel: /init Sep 6 00:06:53.688625 kernel: with environment: Sep 6 00:06:53.688631 kernel: HOME=/ Sep 6 00:06:53.688638 kernel: TERM=linux Sep 6 00:06:53.688645 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 6 00:06:53.688653 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:06:53.688664 systemd[1]: Detected virtualization kvm. Sep 6 00:06:53.688671 systemd[1]: Detected architecture arm64. Sep 6 00:06:53.688678 systemd[1]: Running in initrd. Sep 6 00:06:53.688685 systemd[1]: No hostname configured, using default hostname. Sep 6 00:06:53.688692 systemd[1]: Hostname set to <localhost>. 
Sep 6 00:06:53.688700 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:06:53.688706 systemd[1]: Queued start job for default target initrd.target. Sep 6 00:06:53.688713 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:06:53.688722 systemd[1]: Reached target cryptsetup.target. Sep 6 00:06:53.688729 systemd[1]: Reached target paths.target. Sep 6 00:06:53.688735 systemd[1]: Reached target slices.target. Sep 6 00:06:53.688742 systemd[1]: Reached target swap.target. Sep 6 00:06:53.688749 systemd[1]: Reached target timers.target. Sep 6 00:06:53.688782 systemd[1]: Listening on iscsid.socket. Sep 6 00:06:53.688790 systemd[1]: Listening on iscsiuio.socket. Sep 6 00:06:53.688799 systemd[1]: Listening on systemd-journald-audit.socket. Sep 6 00:06:53.688894 systemd[1]: Listening on systemd-journald-dev-log.socket. Sep 6 00:06:53.688906 systemd[1]: Listening on systemd-journald.socket. Sep 6 00:06:53.688913 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:06:53.688921 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:06:53.688928 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:06:53.688935 systemd[1]: Reached target sockets.target. Sep 6 00:06:53.688942 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:06:53.688949 systemd[1]: Finished network-cleanup.service. Sep 6 00:06:53.688959 systemd[1]: Starting systemd-fsck-usr.service... Sep 6 00:06:53.688967 systemd[1]: Starting systemd-journald.service... Sep 6 00:06:53.688974 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:06:53.688981 systemd[1]: Starting systemd-resolved.service... Sep 6 00:06:53.688988 systemd[1]: Starting systemd-vconsole-setup.service... Sep 6 00:06:53.688996 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:06:53.689003 systemd[1]: Finished systemd-fsck-usr.service. Sep 6 00:06:53.689010 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Sep 6 00:06:53.689022 systemd-journald[290]: Journal started Sep 6 00:06:53.689077 systemd-journald[290]: Runtime Journal (/run/log/journal/52a5d3dece044b2eb9d91521a6fce5fd) is 6.0M, max 48.7M, 42.6M free. Sep 6 00:06:53.685638 systemd-modules-load[291]: Inserted module 'overlay' Sep 6 00:06:53.691172 systemd[1]: Started systemd-journald.service. Sep 6 00:06:53.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.693653 systemd[1]: Finished systemd-vconsole-setup.service. Sep 6 00:06:53.697146 kernel: audit: type=1130 audit(1757117213.691:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.697170 kernel: audit: type=1130 audit(1757117213.694:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.697924 systemd[1]: Starting dracut-cmdline-ask.service... Sep 6 00:06:53.704470 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Sep 6 00:06:53.713862 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Sep 6 00:06:53.713889 kernel: audit: type=1130 audit(1757117213.706:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.709255 systemd-resolved[292]: Positive Trust Anchors: Sep 6 00:06:53.715439 kernel: Bridge firewalling registered Sep 6 00:06:53.709263 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:06:53.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.709293 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:06:53.723543 kernel: audit: type=1130 audit(1757117213.715:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.713617 systemd-resolved[292]: Defaulting to hostname 'linux'. Sep 6 00:06:53.726825 kernel: audit: type=1130 audit(1757117213.724:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.714530 systemd-modules-load[291]: Inserted module 'br_netfilter' Sep 6 00:06:53.714659 systemd[1]: Started systemd-resolved.service. Sep 6 00:06:53.719217 systemd[1]: Finished dracut-cmdline-ask.service. Sep 6 00:06:53.730192 kernel: SCSI subsystem initialized Sep 6 00:06:53.726836 systemd[1]: Reached target nss-lookup.target. Sep 6 00:06:53.728372 systemd[1]: Starting dracut-cmdline.service... Sep 6 00:06:53.737386 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 6 00:06:53.737441 kernel: device-mapper: uevent: version 1.0.3 Sep 6 00:06:53.737452 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Sep 6 00:06:53.737487 dracut-cmdline[307]: dracut-dracut-053 Sep 6 00:06:53.739777 systemd-modules-load[291]: Inserted module 'dm_multipath' Sep 6 00:06:53.740587 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5cb382ab59aa1336098b36da02e2d4491706a6fda80ee56c4ff8582cce9206a4 Sep 6 00:06:53.740599 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:06:53.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.749886 kernel: audit: type=1130 audit(1757117213.744:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.746420 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:06:53.756449 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:06:53.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.760834 kernel: audit: type=1130 audit(1757117213.756:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.807781 kernel: Loading iSCSI transport class v2.0-870. Sep 6 00:06:53.820779 kernel: iscsi: registered transport (tcp) Sep 6 00:06:53.835835 kernel: iscsi: registered transport (qla4xxx) Sep 6 00:06:53.835901 kernel: QLogic iSCSI HBA Driver Sep 6 00:06:53.870344 systemd[1]: Finished dracut-cmdline.service. Sep 6 00:06:53.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.873829 kernel: audit: type=1130 audit(1757117213.871:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:53.872067 systemd[1]: Starting dracut-pre-udev.service... 
Sep 6 00:06:53.915784 kernel: raid6: neonx8 gen() 13571 MB/s Sep 6 00:06:53.932794 kernel: raid6: neonx8 xor() 10787 MB/s Sep 6 00:06:53.949791 kernel: raid6: neonx4 gen() 13520 MB/s Sep 6 00:06:53.966792 kernel: raid6: neonx4 xor() 11277 MB/s Sep 6 00:06:53.983790 kernel: raid6: neonx2 gen() 12952 MB/s Sep 6 00:06:54.000786 kernel: raid6: neonx2 xor() 10379 MB/s Sep 6 00:06:54.017774 kernel: raid6: neonx1 gen() 10557 MB/s Sep 6 00:06:54.034789 kernel: raid6: neonx1 xor() 8779 MB/s Sep 6 00:06:54.051781 kernel: raid6: int64x8 gen() 6262 MB/s Sep 6 00:06:54.068804 kernel: raid6: int64x8 xor() 3544 MB/s Sep 6 00:06:54.085773 kernel: raid6: int64x4 gen() 7230 MB/s Sep 6 00:06:54.102798 kernel: raid6: int64x4 xor() 3848 MB/s Sep 6 00:06:54.119791 kernel: raid6: int64x2 gen() 6150 MB/s Sep 6 00:06:54.136791 kernel: raid6: int64x2 xor() 3319 MB/s Sep 6 00:06:54.153773 kernel: raid6: int64x1 gen() 5043 MB/s Sep 6 00:06:54.171104 kernel: raid6: int64x1 xor() 2645 MB/s Sep 6 00:06:54.171147 kernel: raid6: using algorithm neonx8 gen() 13571 MB/s Sep 6 00:06:54.171156 kernel: raid6: .... xor() 10787 MB/s, rmw enabled Sep 6 00:06:54.171165 kernel: raid6: using neon recovery algorithm Sep 6 00:06:54.181926 kernel: xor: measuring software checksum speed Sep 6 00:06:54.181956 kernel: 8regs : 17220 MB/sec Sep 6 00:06:54.183018 kernel: 32regs : 20728 MB/sec Sep 6 00:06:54.183038 kernel: arm64_neon : 27832 MB/sec Sep 6 00:06:54.183047 kernel: xor: using function: arm64_neon (27832 MB/sec) Sep 6 00:06:54.238786 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Sep 6 00:06:54.248943 systemd[1]: Finished dracut-pre-udev.service. Sep 6 00:06:54.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:54.252000 audit: BPF prog-id=7 op=LOAD Sep 6 00:06:54.252000 audit: BPF prog-id=8 op=LOAD Sep 6 00:06:54.252784 kernel: audit: type=1130 audit(1757117214.248:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:54.252745 systemd[1]: Starting systemd-udevd.service... Sep 6 00:06:54.274877 systemd-udevd[491]: Using default interface naming scheme 'v252'. Sep 6 00:06:54.278502 systemd[1]: Started systemd-udevd.service. Sep 6 00:06:54.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:54.280215 systemd[1]: Starting dracut-pre-trigger.service... Sep 6 00:06:54.293265 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Sep 6 00:06:54.329469 systemd[1]: Finished dracut-pre-trigger.service. Sep 6 00:06:54.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:54.331155 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:06:54.364751 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:06:54.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:54.393927 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 6 00:06:54.398826 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 6 00:06:54.398843 kernel: GPT:9289727 != 19775487 Sep 6 00:06:54.398851 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 6 00:06:54.398866 kernel: GPT:9289727 != 19775487 Sep 6 00:06:54.398874 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 6 00:06:54.398883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:06:54.412789 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (553) Sep 6 00:06:54.419178 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 6 00:06:54.420029 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 6 00:06:54.423947 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 6 00:06:54.428043 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:06:54.431159 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 6 00:06:54.432822 systemd[1]: Starting disk-uuid.service... Sep 6 00:06:54.490802 disk-uuid[562]: Primary Header is updated. Sep 6 00:06:54.490802 disk-uuid[562]: Secondary Entries is updated. Sep 6 00:06:54.490802 disk-uuid[562]: Secondary Header is updated. Sep 6 00:06:54.494764 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:06:54.497777 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:06:55.498775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 6 00:06:55.498989 disk-uuid[563]: The operation has completed successfully. Sep 6 00:06:55.522735 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 6 00:06:55.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.522854 systemd[1]: Finished disk-uuid.service. Sep 6 00:06:55.529689 systemd[1]: Starting verity-setup.service... Sep 6 00:06:55.542805 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 6 00:06:55.562624 systemd[1]: Found device dev-mapper-usr.device. Sep 6 00:06:55.564917 systemd[1]: Mounting sysusr-usr.mount... Sep 6 00:06:55.566681 systemd[1]: Finished verity-setup.service. Sep 6 00:06:55.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.609781 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 6 00:06:55.610280 systemd[1]: Mounted sysusr-usr.mount. Sep 6 00:06:55.611063 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 6 00:06:55.612027 systemd[1]: Starting ignition-setup.service... Sep 6 00:06:55.613852 systemd[1]: Starting parse-ip-for-networkd.service... 
Sep 6 00:06:55.621144 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:06:55.621187 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:06:55.621198 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:06:55.629560 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 6 00:06:55.646271 systemd[1]: Finished ignition-setup.service. Sep 6 00:06:55.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.647930 systemd[1]: Starting ignition-fetch-offline.service... Sep 6 00:06:55.696357 ignition[665]: Ignition 2.14.0 Sep 6 00:06:55.696367 ignition[665]: Stage: fetch-offline Sep 6 00:06:55.696405 ignition[665]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:55.696415 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:55.696549 ignition[665]: parsed url from cmdline: "" Sep 6 00:06:55.696552 ignition[665]: no config URL provided Sep 6 00:06:55.696557 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Sep 6 00:06:55.696564 ignition[665]: no config at "/usr/lib/ignition/user.ign" Sep 6 00:06:55.696582 ignition[665]: op(1): [started] loading QEMU firmware config module Sep 6 00:06:55.696586 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 6 00:06:55.703402 systemd[1]: Finished parse-ip-for-networkd.service. Sep 6 00:06:55.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.702479 ignition[665]: op(1): [finished] loading QEMU firmware config module Sep 6 00:06:55.704000 audit: BPF prog-id=9 op=LOAD Sep 6 00:06:55.706338 systemd[1]: Starting systemd-networkd.service... Sep 6 00:06:55.712119 ignition[665]: parsing config with SHA512: 2f7fe5309cc5e5c09d55746beb42b982fb1c0b63bdfcbdcd64499d84420cfc5ac277f5148db55db1d71ef0c2255d05788b1d82660ed353fa1b255ec10f67d3af Sep 6 00:06:55.718073 unknown[665]: fetched base config from "system" Sep 6 00:06:55.718542 ignition[665]: fetch-offline: fetch-offline passed Sep 6 00:06:55.718097 unknown[665]: fetched user config from "qemu" Sep 6 00:06:55.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.718622 ignition[665]: Ignition finished successfully Sep 6 00:06:55.720156 systemd[1]: Finished ignition-fetch-offline.service. Sep 6 00:06:55.725774 systemd-networkd[741]: lo: Link UP Sep 6 00:06:55.725784 systemd-networkd[741]: lo: Gained carrier Sep 6 00:06:55.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.726183 systemd-networkd[741]: Enumeration completed Sep 6 00:06:55.726297 systemd[1]: Started systemd-networkd.service. Sep 6 00:06:55.726372 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:06:55.727419 systemd-networkd[741]: eth0: Link UP Sep 6 00:06:55.727424 systemd-networkd[741]: eth0: Gained carrier Sep 6 00:06:55.727916 systemd[1]: Reached target network.target. 
Sep 6 00:06:55.728945 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 6 00:06:55.729723 systemd[1]: Starting ignition-kargs.service... Sep 6 00:06:55.731510 systemd[1]: Starting iscsiuio.service... Sep 6 00:06:55.738162 systemd[1]: Started iscsiuio.service. Sep 6 00:06:55.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.739658 ignition[745]: Ignition 2.14.0 Sep 6 00:06:55.739664 ignition[745]: Stage: kargs Sep 6 00:06:55.740178 systemd[1]: Starting iscsid.service... Sep 6 00:06:55.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.739781 ignition[745]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:55.744292 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:06:55.744292 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 6 00:06:55.744292 iscsid[754]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Sep 6 00:06:55.744292 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 6 00:06:55.744292 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 6 00:06:55.744292 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 6 00:06:55.744292 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 6 00:06:55.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.742174 systemd[1]: Finished ignition-kargs.service. Sep 6 00:06:55.739791 ignition[745]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:55.743712 systemd[1]: Starting ignition-disks.service... Sep 6 00:06:55.740440 ignition[745]: kargs: kargs passed Sep 6 00:06:55.746231 systemd[1]: Started iscsid.service. Sep 6 00:06:55.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.740475 ignition[745]: Ignition finished successfully Sep 6 00:06:55.750107 systemd[1]: Starting dracut-initqueue.service... Sep 6 00:06:55.752934 ignition[755]: Ignition 2.14.0 Sep 6 00:06:55.756185 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:06:55.752940 ignition[755]: Stage: disks Sep 6 00:06:55.758474 systemd[1]: Finished ignition-disks.service. 
Sep 6 00:06:55.753038 ignition[755]: no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:55.759728 systemd[1]: Reached target initrd-root-device.target. Sep 6 00:06:55.753048 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:55.761412 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:06:55.757156 ignition[755]: disks: disks passed Sep 6 00:06:55.762517 systemd[1]: Reached target local-fs.target. Sep 6 00:06:55.757204 ignition[755]: Ignition finished successfully Sep 6 00:06:55.763766 systemd[1]: Reached target sysinit.target. Sep 6 00:06:55.764944 systemd[1]: Reached target basic.target. Sep 6 00:06:55.765919 systemd[1]: Finished dracut-initqueue.service. Sep 6 00:06:55.766921 systemd[1]: Reached target remote-fs-pre.target. Sep 6 00:06:55.767859 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:06:55.768912 systemd[1]: Reached target remote-fs.target. Sep 6 00:06:55.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.770669 systemd[1]: Starting dracut-pre-mount.service... Sep 6 00:06:55.779700 systemd[1]: Finished dracut-pre-mount.service. Sep 6 00:06:55.781585 systemd[1]: Starting systemd-fsck-root.service... Sep 6 00:06:55.792566 systemd-fsck[776]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 6 00:06:55.796008 systemd[1]: Finished systemd-fsck-root.service. Sep 6 00:06:55.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.797509 systemd[1]: Mounting sysroot.mount... Sep 6 00:06:55.803788 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 6 00:06:55.804463 systemd[1]: Mounted sysroot.mount. Sep 6 00:06:55.805230 systemd[1]: Reached target initrd-root-fs.target. Sep 6 00:06:55.808206 systemd[1]: Mounting sysroot-usr.mount... Sep 6 00:06:55.809090 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 6 00:06:55.809133 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 6 00:06:55.809158 systemd[1]: Reached target ignition-diskful.target. Sep 6 00:06:55.811190 systemd[1]: Mounted sysroot-usr.mount. Sep 6 00:06:55.813320 systemd[1]: Starting initrd-setup-root.service... Sep 6 00:06:55.817891 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Sep 6 00:06:55.822673 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Sep 6 00:06:55.826916 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Sep 6 00:06:55.830878 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Sep 6 00:06:55.859593 systemd[1]: Finished initrd-setup-root.service. Sep 6 00:06:55.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.861196 systemd[1]: Starting ignition-mount.service... Sep 6 00:06:55.862429 systemd[1]: Starting sysroot-boot.service... Sep 6 00:06:55.867459 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Sep 6 00:06:55.878790 ignition[829]: INFO : Ignition 2.14.0 Sep 6 00:06:55.878790 ignition[829]: INFO : Stage: mount Sep 6 00:06:55.880096 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:55.880096 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:55.880096 ignition[829]: INFO : mount: mount passed Sep 6 00:06:55.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:55.879565 systemd[1]: Finished sysroot-boot.service. Sep 6 00:06:55.884022 ignition[829]: INFO : Ignition finished successfully Sep 6 00:06:55.881405 systemd[1]: Finished ignition-mount.service. Sep 6 00:06:56.573484 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 6 00:06:56.580274 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (838) Sep 6 00:06:56.580320 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 6 00:06:56.580331 kernel: BTRFS info (device vda6): using free space tree Sep 6 00:06:56.581264 kernel: BTRFS info (device vda6): has skinny extents Sep 6 00:06:56.584297 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 6 00:06:56.585720 systemd[1]: Starting ignition-files.service... Sep 6 00:06:56.599899 ignition[858]: INFO : Ignition 2.14.0 Sep 6 00:06:56.599899 ignition[858]: INFO : Stage: files Sep 6 00:06:56.601369 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:56.601369 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:56.601369 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Sep 6 00:06:56.604106 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 6 00:06:56.604106 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 6 00:06:56.606231 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 6 00:06:56.607302 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 6 00:06:56.608513 unknown[858]: wrote ssh authorized keys file for user: core Sep 6 00:06:56.609466 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 6 00:06:56.609466 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 6 00:06:56.609466 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 6 00:06:56.609466 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:06:56.614918 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 6 00:06:56.614918 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:06:56.614918 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:06:56.614918 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:06:56.614918 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Sep 6 00:06:56.966222 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 6 00:06:57.244247 systemd-networkd[741]: eth0: Gained IPv6LL Sep 6 00:06:57.329863 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Sep 6 00:06:57.329863 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 6 00:06:57.332784 ignition[858]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:06:57.332784 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 6 00:06:57.332784 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 6 00:06:57.332784 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 6 00:06:57.332784 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:06:57.360412 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 6 00:06:57.361862 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 6 00:06:57.361862 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:06:57.361862 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 6 00:06:57.361862 ignition[858]: INFO : files: files passed Sep 6 00:06:57.361862 ignition[858]: INFO : Ignition finished successfully Sep 6 00:06:57.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.361880 systemd[1]: Finished ignition-files.service. Sep 6 00:06:57.363370 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 6 00:06:57.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.371546 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 6 00:06:57.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:57.364428 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 6 00:06:57.375181 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 6 00:06:57.365134 systemd[1]: Starting ignition-quench.service... Sep 6 00:06:57.369150 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 6 00:06:57.369234 systemd[1]: Finished ignition-quench.service. Sep 6 00:06:57.370899 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 6 00:06:57.372273 systemd[1]: Reached target ignition-complete.target. Sep 6 00:06:57.374395 systemd[1]: Starting initrd-parse-etc.service... Sep 6 00:06:57.386678 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 6 00:06:57.386777 systemd[1]: Finished initrd-parse-etc.service. Sep 6 00:06:57.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.388270 systemd[1]: Reached target initrd-fs.target. Sep 6 00:06:57.389230 systemd[1]: Reached target initrd.target. Sep 6 00:06:57.390339 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 6 00:06:57.391066 systemd[1]: Starting dracut-pre-pivot.service... Sep 6 00:06:57.401427 systemd[1]: Finished dracut-pre-pivot.service. Sep 6 00:06:57.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.402906 systemd[1]: Starting initrd-cleanup.service... Sep 6 00:06:57.411038 systemd[1]: Stopped target nss-lookup.target. Sep 6 00:06:57.411749 systemd[1]: Stopped target remote-cryptsetup.target. Sep 6 00:06:57.412959 systemd[1]: Stopped target timers.target. Sep 6 00:06:57.414089 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 6 00:06:57.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.414194 systemd[1]: Stopped dracut-pre-pivot.service. Sep 6 00:06:57.415241 systemd[1]: Stopped target initrd.target. Sep 6 00:06:57.416420 systemd[1]: Stopped target basic.target. Sep 6 00:06:57.417399 systemd[1]: Stopped target ignition-complete.target. Sep 6 00:06:57.418512 systemd[1]: Stopped target ignition-diskful.target. Sep 6 00:06:57.419561 systemd[1]: Stopped target initrd-root-device.target. Sep 6 00:06:57.420697 systemd[1]: Stopped target remote-fs.target. Sep 6 00:06:57.421912 systemd[1]: Stopped target remote-fs-pre.target. Sep 6 00:06:57.423030 systemd[1]: Stopped target sysinit.target. Sep 6 00:06:57.424058 systemd[1]: Stopped target local-fs.target. Sep 6 00:06:57.425092 systemd[1]: Stopped target local-fs-pre.target. Sep 6 00:06:57.426160 systemd[1]: Stopped target swap.target. Sep 6 00:06:57.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:06:57.427124 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 6 00:06:57.427229 systemd[1]: Stopped dracut-pre-mount.service. Sep 6 00:06:57.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.428353 systemd[1]: Stopped target cryptsetup.target. Sep 6 00:06:57.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.429313 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 6 00:06:57.429410 systemd[1]: Stopped dracut-initqueue.service. Sep 6 00:06:57.430565 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 6 00:06:57.430653 systemd[1]: Stopped ignition-fetch-offline.service. Sep 6 00:06:57.431707 systemd[1]: Stopped target paths.target. Sep 6 00:06:57.432646 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 6 00:06:57.437786 systemd[1]: Stopped systemd-ask-password-console.path. Sep 6 00:06:57.438692 systemd[1]: Stopped target slices.target. Sep 6 00:06:57.439958 systemd[1]: Stopped target sockets.target. Sep 6 00:06:57.441049 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 6 00:06:57.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.441158 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 6 00:06:57.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.442320 systemd[1]: ignition-files.service: Deactivated successfully. Sep 6 00:06:57.442406 systemd[1]: Stopped ignition-files.service. Sep 6 00:06:57.448385 iscsid[754]: iscsid shutting down. Sep 6 00:06:57.444638 systemd[1]: Stopping ignition-mount.service... Sep 6 00:06:57.445464 systemd[1]: Stopping iscsid.service... Sep 6 00:06:57.450325 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 6 00:06:57.450459 systemd[1]: Stopped kmod-static-nodes.service. Sep 6 00:06:57.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.452664 ignition[898]: INFO : Ignition 2.14.0 Sep 6 00:06:57.452664 ignition[898]: INFO : Stage: umount Sep 6 00:06:57.452664 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 6 00:06:57.452664 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 6 00:06:57.452664 ignition[898]: INFO : umount: umount passed Sep 6 00:06:57.452664 ignition[898]: INFO : Ignition finished successfully Sep 6 00:06:57.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:57.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.452232 systemd[1]: Stopping sysroot-boot.service... Sep 6 00:06:57.453201 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 6 00:06:57.453328 systemd[1]: Stopped systemd-udev-trigger.service. Sep 6 00:06:57.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.454493 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 6 00:06:57.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.454581 systemd[1]: Stopped dracut-pre-trigger.service. Sep 6 00:06:57.456999 systemd[1]: iscsid.service: Deactivated successfully. Sep 6 00:06:57.457087 systemd[1]: Stopped iscsid.service. Sep 6 00:06:57.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.458530 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 6 00:06:57.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.458606 systemd[1]: Stopped ignition-mount.service. Sep 6 00:06:57.459886 systemd[1]: iscsid.socket: Deactivated successfully. Sep 6 00:06:57.459952 systemd[1]: Closed iscsid.socket. Sep 6 00:06:57.461692 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 6 00:06:57.461737 systemd[1]: Stopped ignition-disks.service. Sep 6 00:06:57.463361 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 6 00:06:57.463402 systemd[1]: Stopped ignition-kargs.service. Sep 6 00:06:57.464520 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 6 00:06:57.464557 systemd[1]: Stopped ignition-setup.service. Sep 6 00:06:57.466588 systemd[1]: Stopping iscsiuio.service... Sep 6 00:06:57.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.468035 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
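The files stage above wrote /sysroot/etc/.ignition-result.json just before the umount stage ran; once the system switches to the real root (below), that file is visible at /etc/.ignition-result.json. A minimal, illustrative sketch for checking it from the booted host; the result file's schema varies by Ignition version, so this only pretty-prints whatever keys are present, and the script itself is not part of the boot flow shown here:

#!/usr/bin/env python3
# Sketch: inspect the Ignition result file recorded in the log above.
# Assumes it runs on the booted host with read access to /etc.
import json
from pathlib import Path

result = Path("/etc/.ignition-result.json")  # path taken from the files stage above
if result.exists():
    print(json.dumps(json.loads(result.read_text()), indent=2, sort_keys=True))
else:
    print("no Ignition result file; this boot was probably not Ignition-provisioned")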
Sep 6 00:06:57.468498 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 6 00:06:57.468588 systemd[1]: Stopped iscsiuio.service. Sep 6 00:06:57.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.470195 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 6 00:06:57.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.470269 systemd[1]: Finished initrd-cleanup.service. Sep 6 00:06:57.471726 systemd[1]: Stopped target network.target. Sep 6 00:06:57.472878 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 6 00:06:57.472908 systemd[1]: Closed iscsiuio.socket. Sep 6 00:06:57.474017 systemd[1]: Stopping systemd-networkd.service... Sep 6 00:06:57.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.475054 systemd[1]: Stopping systemd-resolved.service... Sep 6 00:06:57.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.480023 systemd-networkd[741]: eth0: DHCPv6 lease lost Sep 6 00:06:57.496000 audit: BPF prog-id=9 op=UNLOAD Sep 6 00:06:57.481021 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 6 00:06:57.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.481118 systemd[1]: Stopped systemd-networkd.service. Sep 6 00:06:57.499000 audit: BPF prog-id=6 op=UNLOAD Sep 6 00:06:57.482770 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 6 00:06:57.482811 systemd[1]: Closed systemd-networkd.socket. Sep 6 00:06:57.485739 systemd[1]: Stopping network-cleanup.service... Sep 6 00:06:57.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.486823 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 6 00:06:57.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.486882 systemd[1]: Stopped parse-ip-for-networkd.service. Sep 6 00:06:57.488371 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:06:57.488407 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:06:57.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.489860 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
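Most of the SERVICE_START/SERVICE_STOP records interleaved above are flat key=value audit events, with the unit name nested inside the quoted msg= field. A small, illustrative parser for pulling those fields out of a single record (not part of any shipped tooling; the sample string is copied from the entries above):

import shlex

def parse_audit_record(record: str) -> dict:
    """Split an audit record like the SERVICE_STOP lines above into a dict.
    shlex keeps quoted values (msg='...', comm="...") together as one token."""
    fields = {}
    for token in shlex.split(record):
        key, sep, value = token.partition("=")
        if sep:
            fields[key] = value
    return fields

sample = ("pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel "
          "msg='unit=ignition-files comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
          "hostname=? addr=? terminal=? res=success'")
print(parse_audit_record(sample)["msg"])  # -> unit=ignition-files comm="systemd" ...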
Sep 6 00:06:57.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.489924 systemd[1]: Stopped systemd-modules-load.service. Sep 6 00:06:57.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.492530 systemd[1]: Stopping systemd-udevd.service... Sep 6 00:06:57.494591 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 6 00:06:57.495114 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 6 00:06:57.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.495203 systemd[1]: Stopped systemd-resolved.service. Sep 6 00:06:57.496286 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:06:57.496360 systemd[1]: Stopped sysroot-boot.service. Sep 6 00:06:57.497716 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:06:57.497862 systemd[1]: Stopped initrd-setup-root.service. Sep 6 00:06:57.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.501213 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 6 00:06:57.501369 systemd[1]: Stopped network-cleanup.service. Sep 6 00:06:57.502705 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 6 00:06:57.502838 systemd[1]: Stopped systemd-udevd.service. Sep 6 00:06:57.503954 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:06:57.503985 systemd[1]: Closed systemd-udevd-control.socket. Sep 6 00:06:57.505330 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:06:57.505366 systemd[1]: Closed systemd-udevd-kernel.socket. Sep 6 00:06:57.506541 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:06:57.506576 systemd[1]: Stopped dracut-pre-udev.service. Sep 6 00:06:57.507905 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:06:57.507941 systemd[1]: Stopped dracut-cmdline.service. Sep 6 00:06:57.509017 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:06:57.509053 systemd[1]: Stopped dracut-cmdline-ask.service. Sep 6 00:06:57.510942 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Sep 6 00:06:57.511922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:06:57.511971 systemd[1]: Stopped systemd-vconsole-setup.service. Sep 6 00:06:57.516167 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:06:57.516246 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Sep 6 00:06:57.517525 systemd[1]: Reached target initrd-switch-root.target. Sep 6 00:06:57.519209 systemd[1]: Starting initrd-switch-root.service... Sep 6 00:06:57.525692 systemd[1]: Switching root. 
Sep 6 00:06:57.544041 systemd-journald[290]: Journal stopped Sep 6 00:06:59.491395 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Sep 6 00:06:59.491468 kernel: SELinux: Class mctp_socket not defined in policy. Sep 6 00:06:59.491488 kernel: SELinux: Class anon_inode not defined in policy. Sep 6 00:06:59.491498 kernel: SELinux: the above unknown classes and permissions will be allowed Sep 6 00:06:59.491508 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:06:59.491517 kernel: SELinux: policy capability open_perms=1 Sep 6 00:06:59.491528 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:06:59.491537 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:06:59.491547 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:06:59.491557 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:06:59.491566 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:06:59.491582 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:06:59.491593 systemd[1]: Successfully loaded SELinux policy in 32.911ms. Sep 6 00:06:59.491606 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.074ms. Sep 6 00:06:59.491617 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Sep 6 00:06:59.491628 systemd[1]: Detected virtualization kvm. Sep 6 00:06:59.491638 systemd[1]: Detected architecture arm64. Sep 6 00:06:59.491649 systemd[1]: Detected first boot. Sep 6 00:06:59.491660 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:06:59.491673 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Sep 6 00:06:59.491683 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:06:59.491694 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:06:59.491705 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:06:59.491717 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:06:59.491727 kernel: kauditd_printk_skb: 80 callbacks suppressed Sep 6 00:06:59.491738 kernel: audit: type=1334 audit(1757117219.377:84): prog-id=12 op=LOAD Sep 6 00:06:59.491748 kernel: audit: type=1334 audit(1757117219.377:85): prog-id=3 op=UNLOAD Sep 6 00:06:59.491770 kernel: audit: type=1334 audit(1757117219.378:86): prog-id=13 op=LOAD Sep 6 00:06:59.491789 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:06:59.491803 kernel: audit: type=1334 audit(1757117219.379:87): prog-id=14 op=LOAD Sep 6 00:06:59.491813 systemd[1]: Stopped initrd-switch-root.service. 
Sep 6 00:06:59.491823 kernel: audit: type=1334 audit(1757117219.379:88): prog-id=4 op=UNLOAD Sep 6 00:06:59.491833 kernel: audit: type=1334 audit(1757117219.379:89): prog-id=5 op=UNLOAD Sep 6 00:06:59.491848 kernel: audit: type=1131 audit(1757117219.379:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.491858 kernel: audit: type=1130 audit(1757117219.386:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.491869 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:06:59.491881 kernel: audit: type=1131 audit(1757117219.386:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.491891 systemd[1]: Created slice system-addon\x2dconfig.slice. Sep 6 00:06:59.491901 systemd[1]: Created slice system-addon\x2drun.slice. Sep 6 00:06:59.491911 systemd[1]: Created slice system-getty.slice. Sep 6 00:06:59.491924 systemd[1]: Created slice system-modprobe.slice. Sep 6 00:06:59.491935 kernel: audit: type=1334 audit(1757117219.397:93): prog-id=12 op=UNLOAD Sep 6 00:06:59.491949 systemd[1]: Created slice system-serial\x2dgetty.slice. Sep 6 00:06:59.491961 systemd[1]: Created slice system-system\x2dcloudinit.slice. Sep 6 00:06:59.491971 systemd[1]: Created slice system-systemd\x2dfsck.slice. Sep 6 00:06:59.491982 systemd[1]: Created slice user.slice. Sep 6 00:06:59.491992 systemd[1]: Started systemd-ask-password-console.path. Sep 6 00:06:59.492002 systemd[1]: Started systemd-ask-password-wall.path. Sep 6 00:06:59.492013 systemd[1]: Set up automount boot.automount. Sep 6 00:06:59.492023 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Sep 6 00:06:59.492034 systemd[1]: Stopped target initrd-switch-root.target. Sep 6 00:06:59.492045 systemd[1]: Stopped target initrd-fs.target. Sep 6 00:06:59.492056 systemd[1]: Stopped target initrd-root-fs.target. Sep 6 00:06:59.492066 systemd[1]: Reached target integritysetup.target. Sep 6 00:06:59.492076 systemd[1]: Reached target remote-cryptsetup.target. Sep 6 00:06:59.492087 systemd[1]: Reached target remote-fs.target. Sep 6 00:06:59.492097 systemd[1]: Reached target slices.target. Sep 6 00:06:59.492107 systemd[1]: Reached target swap.target. Sep 6 00:06:59.492119 systemd[1]: Reached target torcx.target. Sep 6 00:06:59.492130 systemd[1]: Reached target veritysetup.target. Sep 6 00:06:59.492140 systemd[1]: Listening on systemd-coredump.socket. Sep 6 00:06:59.492151 systemd[1]: Listening on systemd-initctl.socket. Sep 6 00:06:59.492161 systemd[1]: Listening on systemd-networkd.socket. Sep 6 00:06:59.492171 systemd[1]: Listening on systemd-udevd-control.socket. Sep 6 00:06:59.492182 systemd[1]: Listening on systemd-udevd-kernel.socket. Sep 6 00:06:59.492192 systemd[1]: Listening on systemd-userdbd.socket. Sep 6 00:06:59.492203 systemd[1]: Mounting dev-hugepages.mount... Sep 6 00:06:59.492215 systemd[1]: Mounting dev-mqueue.mount... Sep 6 00:06:59.492225 systemd[1]: Mounting media.mount... Sep 6 00:06:59.492236 systemd[1]: Mounting sys-kernel-debug.mount... Sep 6 00:06:59.492246 systemd[1]: Mounting sys-kernel-tracing.mount... 
Sep 6 00:06:59.492257 systemd[1]: Mounting tmp.mount... Sep 6 00:06:59.492267 systemd[1]: Starting flatcar-tmpfiles.service... Sep 6 00:06:59.492277 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:06:59.492288 systemd[1]: Starting kmod-static-nodes.service... Sep 6 00:06:59.492298 systemd[1]: Starting modprobe@configfs.service... Sep 6 00:06:59.492310 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:06:59.492321 systemd[1]: Starting modprobe@drm.service... Sep 6 00:06:59.492331 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:06:59.492342 systemd[1]: Starting modprobe@fuse.service... Sep 6 00:06:59.492352 systemd[1]: Starting modprobe@loop.service... Sep 6 00:06:59.492363 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:06:59.492374 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:06:59.492385 systemd[1]: Stopped systemd-fsck-root.service. Sep 6 00:06:59.492395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:06:59.492407 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:06:59.492417 kernel: loop: module loaded Sep 6 00:06:59.492427 systemd[1]: Stopped systemd-journald.service. Sep 6 00:06:59.492442 kernel: fuse: init (API version 7.34) Sep 6 00:06:59.492452 systemd[1]: Starting systemd-journald.service... Sep 6 00:06:59.492462 systemd[1]: Starting systemd-modules-load.service... Sep 6 00:06:59.492472 systemd[1]: Starting systemd-network-generator.service... Sep 6 00:06:59.492483 systemd[1]: Starting systemd-remount-fs.service... Sep 6 00:06:59.492493 systemd[1]: Starting systemd-udev-trigger.service... Sep 6 00:06:59.492505 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:06:59.492515 systemd[1]: Stopped verity-setup.service. Sep 6 00:06:59.492528 systemd-journald[1006]: Journal started Sep 6 00:06:59.492568 systemd-journald[1006]: Runtime Journal (/run/log/journal/52a5d3dece044b2eb9d91521a6fce5fd) is 6.0M, max 48.7M, 42.6M free. 
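systemd-journald is now running in the real root and writing to the runtime journal listed above (/run/log/journal/52a5d3dece044b2eb9d91521a6fce5fd); it is flushed to the persistent journal under /var/log/journal a little later in this boot. A quick, illustrative way to pull one unit's messages for the current boot, assuming journalctl is on PATH and the caller is allowed to read the journal:

import subprocess

# Sketch: fetch this boot's journal entries for one of the units seen above.
out = subprocess.run(
    ["journalctl", "-b", "-u", "systemd-journald.service", "--no-pager", "-o", "short-iso"],
    check=True, capture_output=True, text=True,
).stdout
print(out or "no entries yet for this unit")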
Sep 6 00:06:57.598000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:06:57.680000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:06:57.680000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Sep 6 00:06:57.680000 audit: BPF prog-id=10 op=LOAD Sep 6 00:06:57.680000 audit: BPF prog-id=10 op=UNLOAD Sep 6 00:06:57.680000 audit: BPF prog-id=11 op=LOAD Sep 6 00:06:57.680000 audit: BPF prog-id=11 op=UNLOAD Sep 6 00:06:57.720000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Sep 6 00:06:57.720000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:57.720000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:06:57.722000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Sep 6 00:06:57.722000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:57.722000 audit: CWD cwd="/" Sep 6 00:06:57.722000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:06:57.722000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Sep 6 00:06:57.722000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Sep 6 00:06:59.377000 audit: BPF prog-id=12 op=LOAD Sep 6 00:06:59.377000 audit: BPF prog-id=3 op=UNLOAD Sep 6 00:06:59.378000 audit: BPF prog-id=13 op=LOAD Sep 6 00:06:59.379000 audit: BPF prog-id=14 op=LOAD Sep 6 00:06:59.379000 audit: BPF prog-id=4 op=UNLOAD Sep 6 00:06:59.379000 audit: BPF prog-id=5 op=UNLOAD Sep 6 00:06:59.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.397000 audit: BPF prog-id=12 op=UNLOAD Sep 6 00:06:59.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.475000 audit: BPF prog-id=15 op=LOAD Sep 6 00:06:59.475000 audit: BPF prog-id=16 op=LOAD Sep 6 00:06:59.475000 audit: BPF prog-id=17 op=LOAD Sep 6 00:06:59.475000 audit: BPF prog-id=13 op=UNLOAD Sep 6 00:06:59.475000 audit: BPF prog-id=14 op=UNLOAD Sep 6 00:06:59.490000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Sep 6 00:06:59.490000 audit[1006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd8a91b00 a2=4000 a3=1 items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:06:59.490000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Sep 6 00:06:59.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.376681 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:06:57.719617 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:06:59.376693 systemd[1]: Unnecessary job was removed for dev-vda6.device. Sep 6 00:06:57.719887 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:06:59.380338 systemd[1]: systemd-journald.service: Deactivated successfully. 
Sep 6 00:06:57.720021 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:06:57.720053 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Sep 6 00:06:57.720063 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="skipped missing lower profile" missing profile=oem Sep 6 00:06:57.720094 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Sep 6 00:06:59.494469 systemd[1]: Started systemd-journald.service. Sep 6 00:06:57.720105 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Sep 6 00:06:57.720298 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Sep 6 00:06:59.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:57.720332 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Sep 6 00:06:57.720344 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Sep 6 00:06:57.721106 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Sep 6 00:06:57.721141 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Sep 6 00:06:59.494905 systemd[1]: Mounted dev-hugepages.mount. 
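The torcx-generator entries above walk a fixed list of store paths and pick up archives named <name>:<reference>.torcx.tgz (here docker:20.10.torcx.tgz and docker:com.coreos.cl.torcx.tgz); stores that do not exist are simply skipped, as the "store skipped" messages below show. A simplified sketch of that lookup order, using the paths exactly as logged; this is not the generator's actual code:

from pathlib import Path

# Store paths in the order the torcx-generator log above lists them.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.8",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.8",
    "/var/lib/torcx/store",
]

def find_archive(name: str, reference: str) -> Path | None:
    """Return the first store containing <name>:<reference>.torcx.tgz, if any."""
    for store in STORE_PATHS:
        candidate = Path(store) / f"{name}:{reference}.torcx.tgz"
        if candidate.exists():
            return candidate
    return None

print(find_archive("docker", "com.coreos.cl"))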
Sep 6 00:06:57.721159 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Sep 6 00:06:57.721173 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Sep 6 00:06:57.721191 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Sep 6 00:06:57.721204 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:57Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Sep 6 00:06:59.134988 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:06:59.135247 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:06:59.135345 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:06:59.135506 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Sep 6 00:06:59.135555 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Sep 6 00:06:59.135609 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-09-06T00:06:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Sep 6 00:06:59.495832 systemd[1]: Mounted dev-mqueue.mount. Sep 6 00:06:59.496540 systemd[1]: Mounted media.mount. Sep 6 00:06:59.497227 systemd[1]: Mounted sys-kernel-debug.mount. Sep 6 00:06:59.497996 systemd[1]: Mounted sys-kernel-tracing.mount. Sep 6 00:06:59.498743 systemd[1]: Mounted tmp.mount. Sep 6 00:06:59.501033 systemd[1]: Finished kmod-static-nodes.service. Sep 6 00:06:59.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:06:59.501968 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:06:59.502141 systemd[1]: Finished modprobe@configfs.service. Sep 6 00:06:59.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.503176 systemd[1]: Finished flatcar-tmpfiles.service. Sep 6 00:06:59.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.504133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:06:59.504295 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:06:59.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.505374 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:06:59.505524 systemd[1]: Finished modprobe@drm.service. Sep 6 00:06:59.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.506587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:06:59.506729 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:06:59.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.507777 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:06:59.507944 systemd[1]: Finished modprobe@fuse.service. Sep 6 00:06:59.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.508914 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 6 00:06:59.509039 systemd[1]: Finished modprobe@loop.service. Sep 6 00:06:59.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.510013 systemd[1]: Finished systemd-modules-load.service. Sep 6 00:06:59.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.511120 systemd[1]: Finished systemd-network-generator.service. Sep 6 00:06:59.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.512172 systemd[1]: Finished systemd-remount-fs.service. Sep 6 00:06:59.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.513465 systemd[1]: Reached target network-pre.target. Sep 6 00:06:59.515438 systemd[1]: Mounting sys-fs-fuse-connections.mount... Sep 6 00:06:59.517343 systemd[1]: Mounting sys-kernel-config.mount... Sep 6 00:06:59.517976 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 6 00:06:59.519354 systemd[1]: Starting systemd-hwdb-update.service... Sep 6 00:06:59.521131 systemd[1]: Starting systemd-journal-flush.service... Sep 6 00:06:59.521921 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:06:59.522885 systemd[1]: Starting systemd-random-seed.service... Sep 6 00:06:59.525892 systemd-journald[1006]: Time spent on flushing to /var/log/journal/52a5d3dece044b2eb9d91521a6fce5fd is 17.222ms for 969 entries. Sep 6 00:06:59.525892 systemd-journald[1006]: System Journal (/var/log/journal/52a5d3dece044b2eb9d91521a6fce5fd) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:06:59.557478 systemd-journald[1006]: Received client request to flush runtime journal. Sep 6 00:06:59.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Sep 6 00:06:59.523663 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:06:59.524639 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:06:59.527826 systemd[1]: Starting systemd-sysusers.service... Sep 6 00:06:59.530963 systemd[1]: Mounted sys-fs-fuse-connections.mount. Sep 6 00:06:59.558871 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:06:59.531837 systemd[1]: Mounted sys-kernel-config.mount. Sep 6 00:06:59.532708 systemd[1]: Finished systemd-random-seed.service. Sep 6 00:06:59.533637 systemd[1]: Reached target first-boot-complete.target. Sep 6 00:06:59.538490 systemd[1]: Finished systemd-udev-trigger.service. Sep 6 00:06:59.540368 systemd[1]: Starting systemd-udev-settle.service... Sep 6 00:06:59.541526 systemd[1]: Finished systemd-sysctl.service. Sep 6 00:06:59.548956 systemd[1]: Finished systemd-sysusers.service. Sep 6 00:06:59.559673 systemd[1]: Finished systemd-journal-flush.service. Sep 6 00:06:59.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.985659 systemd[1]: Finished systemd-hwdb-update.service. Sep 6 00:06:59.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:06:59.986000 audit: BPF prog-id=18 op=LOAD Sep 6 00:06:59.986000 audit: BPF prog-id=19 op=LOAD Sep 6 00:06:59.986000 audit: BPF prog-id=7 op=UNLOAD Sep 6 00:06:59.986000 audit: BPF prog-id=8 op=UNLOAD Sep 6 00:06:59.987893 systemd[1]: Starting systemd-udevd.service... Sep 6 00:07:00.003116 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Sep 6 00:07:00.016771 systemd[1]: Started systemd-udevd.service. Sep 6 00:07:00.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.017000 audit: BPF prog-id=20 op=LOAD Sep 6 00:07:00.019549 systemd[1]: Starting systemd-networkd.service... Sep 6 00:07:00.027000 audit: BPF prog-id=21 op=LOAD Sep 6 00:07:00.027000 audit: BPF prog-id=22 op=LOAD Sep 6 00:07:00.027000 audit: BPF prog-id=23 op=LOAD Sep 6 00:07:00.028905 systemd[1]: Starting systemd-userdbd.service... Sep 6 00:07:00.037669 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Sep 6 00:07:00.054672 systemd[1]: Started systemd-userdbd.service. Sep 6 00:07:00.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.101280 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 6 00:07:00.106166 systemd-networkd[1042]: lo: Link UP Sep 6 00:07:00.106178 systemd-networkd[1042]: lo: Gained carrier Sep 6 00:07:00.106567 systemd-networkd[1042]: Enumeration completed Sep 6 00:07:00.106670 systemd[1]: Started systemd-networkd.service. 
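systemd-networkd has just come up and starts configuring eth0 from /usr/lib/systemd/network/zz-default.network; a few entries below it acquires 10.0.0.73/16 via DHCPv4. One illustrative way to confirm that address from userspace afterwards, using iproute2's JSON output (assumes the ip tool is available on the host):

import json
import subprocess

# Sketch: confirm the DHCPv4 address that systemd-networkd logs for eth0.
links = json.loads(
    subprocess.run(["ip", "-j", "addr", "show", "dev", "eth0"],
                   check=True, capture_output=True, text=True).stdout
)
for link in links:
    for addr in link.get("addr_info", []):
        if addr.get("family") == "inet":
            print(f'{link["ifname"]}: {addr["local"]}/{addr["prefixlen"]}')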
Sep 6 00:07:00.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.107631 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 6 00:07:00.109286 systemd-networkd[1042]: eth0: Link UP Sep 6 00:07:00.109299 systemd-networkd[1042]: eth0: Gained carrier Sep 6 00:07:00.132258 systemd[1]: Finished systemd-udev-settle.service. Sep 6 00:07:00.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.133107 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:07:00.134291 systemd[1]: Starting lvm2-activation-early.service... Sep 6 00:07:00.143009 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:07:00.172667 systemd[1]: Finished lvm2-activation-early.service. Sep 6 00:07:00.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.173622 systemd[1]: Reached target cryptsetup.target. Sep 6 00:07:00.175591 systemd[1]: Starting lvm2-activation.service... Sep 6 00:07:00.179291 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:07:00.213704 systemd[1]: Finished lvm2-activation.service. Sep 6 00:07:00.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.214562 systemd[1]: Reached target local-fs-pre.target. Sep 6 00:07:00.215276 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:07:00.215306 systemd[1]: Reached target local-fs.target. Sep 6 00:07:00.215907 systemd[1]: Reached target machines.target. Sep 6 00:07:00.217708 systemd[1]: Starting ldconfig.service... Sep 6 00:07:00.218789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.218848 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.220057 systemd[1]: Starting systemd-boot-update.service... Sep 6 00:07:00.222108 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Sep 6 00:07:00.224166 systemd[1]: Starting systemd-machine-id-commit.service... Sep 6 00:07:00.226263 systemd[1]: Starting systemd-sysext.service... Sep 6 00:07:00.227633 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Sep 6 00:07:00.229314 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Sep 6 00:07:00.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:07:00.240066 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Sep 6 00:07:00.248530 systemd[1]: Unmounting usr-share-oem.mount... Sep 6 00:07:00.254905 systemd[1]: usr-share-oem.mount: Deactivated successfully. Sep 6 00:07:00.255121 systemd[1]: Unmounted usr-share-oem.mount. Sep 6 00:07:00.302904 kernel: loop0: detected capacity change from 0 to 203944 Sep 6 00:07:00.307566 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:07:00.308311 systemd[1]: Finished systemd-machine-id-commit.service. Sep 6 00:07:00.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.312124 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Sep 6 00:07:00.312124 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters Sep 6 00:07:00.314791 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:07:00.316006 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Sep 6 00:07:00.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.318915 systemd[1]: Mounting boot.mount... Sep 6 00:07:00.327242 systemd[1]: Mounted boot.mount. Sep 6 00:07:00.333787 kernel: loop1: detected capacity change from 0 to 203944 Sep 6 00:07:00.334993 systemd[1]: Finished systemd-boot-update.service. Sep 6 00:07:00.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.340013 (sd-sysext)[1085]: Using extensions 'kubernetes'. Sep 6 00:07:00.340339 (sd-sysext)[1085]: Merged extensions into '/usr'. Sep 6 00:07:00.357506 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.359061 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:07:00.361189 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:07:00.363392 systemd[1]: Starting modprobe@loop.service... Sep 6 00:07:00.364340 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.364530 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.365397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:07:00.365553 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:07:00.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.367004 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:07:00.367123 systemd[1]: Finished modprobe@efi_pstore.service. 
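At this point systemd-sysext has found the 'kubernetes' extension (the image Ignition placed under /opt/extensions and symlinked into /etc/extensions earlier) and merged it into /usr. A quick illustrative check of what is linked and currently merged on the running system; the listing logic is a sketch, only the systemd-sysext call is stock tooling:

import subprocess
from pathlib import Path

# Sketch: list the extension images enabled via /etc/extensions, then show
# what systemd-sysext currently has merged into /usr.
for link in sorted(Path("/etc/extensions").glob("*.raw")):
    print(f"{link} -> {link.resolve()}")

subprocess.run(["systemd-sysext", "status"], check=True)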
Sep 6 00:07:00.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.368521 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:07:00.368634 systemd[1]: Finished modprobe@loop.service. Sep 6 00:07:00.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.369998 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:07:00.370106 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.438768 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:07:00.442131 systemd[1]: Finished ldconfig.service. Sep 6 00:07:00.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.494999 systemd[1]: Mounting usr-share-oem.mount... Sep 6 00:07:00.502712 systemd[1]: Mounted usr-share-oem.mount. Sep 6 00:07:00.504948 systemd[1]: Finished systemd-sysext.service. Sep 6 00:07:00.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.509929 systemd[1]: Starting ensure-sysext.service... Sep 6 00:07:00.511955 systemd[1]: Starting systemd-tmpfiles-setup.service... Sep 6 00:07:00.521075 systemd[1]: Reloading. Sep 6 00:07:00.527099 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Sep 6 00:07:00.529468 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:07:00.532686 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:07:00.561053 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-09-06T00:07:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:07:00.561460 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-09-06T00:07:00Z" level=info msg="torcx already run" Sep 6 00:07:00.646059 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
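The "Duplicate line for path ..." warnings from systemd-tmpfiles above come from overlapping tmpfiles.d entries (legacy.conf, provision.conf, systemd.conf) and are harmless: the later duplicates are ignored, as the messages say. To see the merged configuration, the stock --cat-config output is enough; a tiny illustrative wrapper:

import subprocess

# Sketch: dump the merged tmpfiles.d configuration so overlapping entries like
# the ones warned about above can be traced back to their source files.
subprocess.run(["systemd-tmpfiles", "--cat-config"], check=True)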
Sep 6 00:07:00.646080 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:07:00.663361 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:07:00.709000 audit: BPF prog-id=24 op=LOAD Sep 6 00:07:00.709000 audit: BPF prog-id=15 op=UNLOAD Sep 6 00:07:00.709000 audit: BPF prog-id=25 op=LOAD Sep 6 00:07:00.709000 audit: BPF prog-id=26 op=LOAD Sep 6 00:07:00.709000 audit: BPF prog-id=16 op=UNLOAD Sep 6 00:07:00.709000 audit: BPF prog-id=17 op=UNLOAD Sep 6 00:07:00.710000 audit: BPF prog-id=27 op=LOAD Sep 6 00:07:00.710000 audit: BPF prog-id=28 op=LOAD Sep 6 00:07:00.710000 audit: BPF prog-id=18 op=UNLOAD Sep 6 00:07:00.710000 audit: BPF prog-id=19 op=UNLOAD Sep 6 00:07:00.711000 audit: BPF prog-id=29 op=LOAD Sep 6 00:07:00.711000 audit: BPF prog-id=20 op=UNLOAD Sep 6 00:07:00.713000 audit: BPF prog-id=30 op=LOAD Sep 6 00:07:00.713000 audit: BPF prog-id=21 op=UNLOAD Sep 6 00:07:00.713000 audit: BPF prog-id=31 op=LOAD Sep 6 00:07:00.714000 audit: BPF prog-id=32 op=LOAD Sep 6 00:07:00.714000 audit: BPF prog-id=22 op=UNLOAD Sep 6 00:07:00.714000 audit: BPF prog-id=23 op=UNLOAD Sep 6 00:07:00.716811 systemd[1]: Finished systemd-tmpfiles-setup.service. Sep 6 00:07:00.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.720706 systemd[1]: Starting audit-rules.service... Sep 6 00:07:00.722484 systemd[1]: Starting clean-ca-certificates.service... Sep 6 00:07:00.724572 systemd[1]: Starting systemd-journal-catalog-update.service... Sep 6 00:07:00.725000 audit: BPF prog-id=33 op=LOAD Sep 6 00:07:00.727459 systemd[1]: Starting systemd-resolved.service... Sep 6 00:07:00.731000 audit: BPF prog-id=34 op=LOAD Sep 6 00:07:00.732643 systemd[1]: Starting systemd-timesyncd.service... Sep 6 00:07:00.734582 systemd[1]: Starting systemd-update-utmp.service... Sep 6 00:07:00.736063 systemd[1]: Finished clean-ca-certificates.service. Sep 6 00:07:00.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.738625 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:07:00.738000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.744091 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.745281 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:07:00.747250 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:07:00.749022 systemd[1]: Starting modprobe@loop.service... Sep 6 00:07:00.749941 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Sep 6 00:07:00.750069 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.750171 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:07:00.751037 systemd[1]: Finished systemd-update-utmp.service. Sep 6 00:07:00.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.752070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:07:00.752204 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:07:00.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.753439 systemd[1]: Finished systemd-journal-catalog-update.service. Sep 6 00:07:00.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.754549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:07:00.754659 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:07:00.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.755728 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:07:00.755855 systemd[1]: Finished modprobe@loop.service. Sep 6 00:07:00.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.757531 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:07:00.757638 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.759135 systemd[1]: Starting systemd-update-done.service... Sep 6 00:07:00.761687 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.762899 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:07:00.764574 systemd[1]: Starting modprobe@efi_pstore.service... 
Sep 6 00:07:00.766591 systemd[1]: Starting modprobe@loop.service... Sep 6 00:07:00.767278 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.767414 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.767523 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:07:00.768370 systemd[1]: Finished systemd-update-done.service. Sep 6 00:07:00.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.769593 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:07:00.769704 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:07:00.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.770839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:07:00.770945 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:07:00.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.771989 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:07:00.772132 systemd[1]: Finished modprobe@loop.service. Sep 6 00:07:00.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.773346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:07:00.773440 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.775669 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.776892 systemd[1]: Starting modprobe@dm_mod.service... Sep 6 00:07:00.778716 systemd[1]: Starting modprobe@drm.service... Sep 6 00:07:00.780592 systemd[1]: Starting modprobe@efi_pstore.service... Sep 6 00:07:00.782587 systemd[1]: Starting modprobe@loop.service... 
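The recurring modprobe@dm_mod / modprobe@efi_pstore / modprobe@loop (and later modprobe@drm) entries are instances of one template unit: a oneshot that runs modprobe for its instance name and exits, which is why each is immediately reported as "Deactivated successfully". The "skipped" units interleaved with them are not failures either; their Condition*= checks simply did not hold. Generic ways to inspect both, nothing specific to this host:

    systemctl cat modprobe@loop.service                        # the template, with %i expanded to "loop"
    systemctl list-units 'modprobe@*'                          # all instances started this boot
    systemctl show -p ConditionResult systemd-pstore.service   # why a conditional unit was skipped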
Sep 6 00:07:00.783453 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.783575 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.784919 systemd[1]: Starting systemd-networkd-wait-online.service... Sep 6 00:07:00.785828 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:07:00.786878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:07:00.787002 systemd[1]: Finished modprobe@dm_mod.service. Sep 6 00:07:00.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.787994 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:07:00.788100 systemd[1]: Finished modprobe@drm.service. Sep 6 00:07:00.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.789122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:07:00.789237 systemd[1]: Finished modprobe@efi_pstore.service. Sep 6 00:07:00.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.790246 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:07:00.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 6 00:07:00.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 6 00:07:00.790000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Sep 6 00:07:00.790000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda473260 a2=420 a3=0 items=0 ppid=1150 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Sep 6 00:07:00.790000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Sep 6 00:07:00.790355 systemd[1]: Finished modprobe@loop.service. Sep 6 00:07:00.791503 augenrules[1181]: No rules Sep 6 00:07:00.791701 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:07:00.791824 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.792943 systemd[1]: Finished ensure-sysext.service. Sep 6 00:07:00.793832 systemd[1]: Finished audit-rules.service. Sep 6 00:07:00.797214 systemd[1]: Started systemd-timesyncd.service. Sep 6 00:07:00.798131 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:07:00.798188 systemd-timesyncd[1160]: Initial clock synchronization to Sat 2025-09-06 00:07:00.887643 UTC. Sep 6 00:07:00.798405 systemd-resolved[1154]: Positive Trust Anchors: Sep 6 00:07:00.798414 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:07:00.798441 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Sep 6 00:07:00.798679 systemd[1]: Reached target time-set.target. Sep 6 00:07:00.810541 systemd-resolved[1154]: Defaulting to hostname 'linux'. Sep 6 00:07:00.812042 systemd[1]: Started systemd-resolved.service. Sep 6 00:07:00.812769 systemd[1]: Reached target network.target. Sep 6 00:07:00.813370 systemd[1]: Reached target nss-lookup.target. Sep 6 00:07:00.814039 systemd[1]: Reached target sysinit.target. Sep 6 00:07:00.814704 systemd[1]: Started motdgen.path. Sep 6 00:07:00.815372 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Sep 6 00:07:00.816398 systemd[1]: Started logrotate.timer. Sep 6 00:07:00.817072 systemd[1]: Started mdadm.timer. Sep 6 00:07:00.817630 systemd[1]: Started systemd-tmpfiles-clean.timer. Sep 6 00:07:00.818343 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:07:00.818375 systemd[1]: Reached target paths.target. Sep 6 00:07:00.818975 systemd[1]: Reached target timers.target. Sep 6 00:07:00.819937 systemd[1]: Listening on dbus.socket. Sep 6 00:07:00.821647 systemd[1]: Starting docker.socket... Sep 6 00:07:00.825128 systemd[1]: Listening on sshd.socket. Sep 6 00:07:00.825824 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
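The audit-rules.service activity above boils down to /sbin/auditctl -R /etc/audit/audit.rules (that is what the PROCTITLE hex decodes to), and augenrules reports "No rules", so the audit subsystem is running with an empty rule set. If rules were wanted, the usual pattern is a fragment under /etc/audit/rules.d/ followed by a reload; the watch below is purely an example, not a rule present on this machine:

    # Hypothetical rule: record writes and attribute changes to /etc/passwd.
    cat >/etc/audit/rules.d/10-identity.rules <<'EOF'
    -w /etc/passwd -p wa -k identity
    EOF
    augenrules --load      # regenerates /etc/audit/audit.rules and loads it via auditctl -R
    auditctl -l            # confirm the active rule set is no longer empty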
Sep 6 00:07:00.826293 systemd[1]: Listening on docker.socket. Sep 6 00:07:00.826977 systemd[1]: Reached target sockets.target. Sep 6 00:07:00.827553 systemd[1]: Reached target basic.target. Sep 6 00:07:00.828378 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.828411 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Sep 6 00:07:00.829393 systemd[1]: Starting containerd.service... Sep 6 00:07:00.831018 systemd[1]: Starting dbus.service... Sep 6 00:07:00.832620 systemd[1]: Starting enable-oem-cloudinit.service... Sep 6 00:07:00.834530 systemd[1]: Starting extend-filesystems.service... Sep 6 00:07:00.835364 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Sep 6 00:07:00.836544 systemd[1]: Starting motdgen.service... Sep 6 00:07:00.838374 jq[1192]: false Sep 6 00:07:00.838878 systemd[1]: Starting ssh-key-proc-cmdline.service... Sep 6 00:07:00.840980 systemd[1]: Starting sshd-keygen.service... Sep 6 00:07:00.843601 systemd[1]: Starting systemd-logind.service... Sep 6 00:07:00.844928 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Sep 6 00:07:00.844998 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 6 00:07:00.845377 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 6 00:07:00.847439 systemd[1]: Starting update-engine.service... Sep 6 00:07:00.854277 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 6 00:07:00.857728 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 6 00:07:00.862448 jq[1206]: true Sep 6 00:07:00.857944 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 6 00:07:00.858247 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 6 00:07:00.858392 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 6 00:07:00.866789 jq[1211]: true Sep 6 00:07:00.873707 systemd[1]: motdgen.service: Deactivated successfully. Sep 6 00:07:00.873911 systemd[1]: Finished motdgen.service. Sep 6 00:07:00.877107 extend-filesystems[1193]: Found loop1 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda1 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda2 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda3 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found usr Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda4 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda6 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda7 Sep 6 00:07:00.878210 extend-filesystems[1193]: Found vda9 Sep 6 00:07:00.878210 extend-filesystems[1193]: Checking size of /dev/vda9 Sep 6 00:07:00.887856 dbus-daemon[1191]: [system] SELinux support is enabled Sep 6 00:07:00.888029 systemd[1]: Started dbus.service. Sep 6 00:07:00.890376 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 6 00:07:00.890401 systemd[1]: Reached target system-config.target. 
Sep 6 00:07:00.891156 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 6 00:07:00.891171 systemd[1]: Reached target user-config.target. Sep 6 00:07:00.893721 extend-filesystems[1193]: Resized partition /dev/vda9 Sep 6 00:07:00.901645 extend-filesystems[1238]: resize2fs 1.46.5 (30-Dec-2021) Sep 6 00:07:00.934509 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 6 00:07:00.934575 update_engine[1203]: I0906 00:07:00.920444 1203 main.cc:92] Flatcar Update Engine starting Sep 6 00:07:00.906959 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (Power Button) Sep 6 00:07:00.909889 systemd-logind[1200]: New seat seat0. Sep 6 00:07:00.918457 systemd[1]: Started systemd-logind.service. Sep 6 00:07:00.940100 env[1212]: time="2025-09-06T00:07:00.940032320Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 6 00:07:00.941741 systemd[1]: Started update-engine.service. Sep 6 00:07:00.943631 update_engine[1203]: I0906 00:07:00.941946 1203 update_check_scheduler.cc:74] Next update check in 11m58s Sep 6 00:07:00.944560 systemd[1]: Started locksmithd.service. Sep 6 00:07:00.950991 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 6 00:07:00.961473 env[1212]: time="2025-09-06T00:07:00.961423400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 6 00:07:00.973816 extend-filesystems[1238]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 6 00:07:00.973816 extend-filesystems[1238]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 6 00:07:00.973816 extend-filesystems[1238]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 6 00:07:00.977686 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Sep 6 00:07:00.978515 env[1212]: time="2025-09-06T00:07:00.974948560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:07:00.974600 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 6 00:07:00.974830 systemd[1]: Finished extend-filesystems.service. Sep 6 00:07:00.979027 env[1212]: time="2025-09-06T00:07:00.978979400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.190-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:07:00.979027 env[1212]: time="2025-09-06T00:07:00.979019800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:07:00.979259 env[1212]: time="2025-09-06T00:07:00.979239040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:07:00.979389 env[1212]: time="2025-09-06T00:07:00.979260240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Sep 6 00:07:00.979389 env[1212]: time="2025-09-06T00:07:00.979342480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 6 00:07:00.979389 env[1212]: time="2025-09-06T00:07:00.979355080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 6 00:07:00.980425 env[1212]: time="2025-09-06T00:07:00.980386440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:07:00.980890 env[1212]: time="2025-09-06T00:07:00.980869680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 6 00:07:00.981151 env[1212]: time="2025-09-06T00:07:00.981129160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 6 00:07:00.981188 env[1212]: time="2025-09-06T00:07:00.981152160Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 6 00:07:00.981492 bash[1237]: Updated "/home/core/.ssh/authorized_keys" Sep 6 00:07:00.981602 env[1212]: time="2025-09-06T00:07:00.981232600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 6 00:07:00.981602 env[1212]: time="2025-09-06T00:07:00.981364160Z" level=info msg="metadata content store policy set" policy=shared Sep 6 00:07:00.982128 systemd[1]: Finished update-ssh-keys-after-ignition.service. Sep 6 00:07:00.987587 env[1212]: time="2025-09-06T00:07:00.987552440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 6 00:07:00.987587 env[1212]: time="2025-09-06T00:07:00.987592120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 6 00:07:00.987681 env[1212]: time="2025-09-06T00:07:00.987607160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 6 00:07:00.987681 env[1212]: time="2025-09-06T00:07:00.987639480Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.987681 env[1212]: time="2025-09-06T00:07:00.987655760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.987681 env[1212]: time="2025-09-06T00:07:00.987670160Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.987918 env[1212]: time="2025-09-06T00:07:00.987684000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.988350 env[1212]: time="2025-09-06T00:07:00.988314840Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.988380 env[1212]: time="2025-09-06T00:07:00.988356920Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.988380 env[1212]: time="2025-09-06T00:07:00.988371080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Sep 6 00:07:00.988422 env[1212]: time="2025-09-06T00:07:00.988383360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.988422 env[1212]: time="2025-09-06T00:07:00.988395720Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 6 00:07:00.988537 env[1212]: time="2025-09-06T00:07:00.988520520Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 6 00:07:00.988613 env[1212]: time="2025-09-06T00:07:00.988599600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 6 00:07:00.988903 env[1212]: time="2025-09-06T00:07:00.988885520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 6 00:07:00.988931 env[1212]: time="2025-09-06T00:07:00.988916920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.988959 env[1212]: time="2025-09-06T00:07:00.988931160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 6 00:07:00.989067 env[1212]: time="2025-09-06T00:07:00.989053680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989096 env[1212]: time="2025-09-06T00:07:00.989071040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989096 env[1212]: time="2025-09-06T00:07:00.989084680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989142 env[1212]: time="2025-09-06T00:07:00.989096000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989142 env[1212]: time="2025-09-06T00:07:00.989108640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989178 env[1212]: time="2025-09-06T00:07:00.989143440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989178 env[1212]: time="2025-09-06T00:07:00.989156160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989178 env[1212]: time="2025-09-06T00:07:00.989167880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989239 env[1212]: time="2025-09-06T00:07:00.989180440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 6 00:07:00.989315 env[1212]: time="2025-09-06T00:07:00.989297840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989342 env[1212]: time="2025-09-06T00:07:00.989320280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989342 env[1212]: time="2025-09-06T00:07:00.989333920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989384 env[1212]: time="2025-09-06T00:07:00.989346800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Sep 6 00:07:00.989384 env[1212]: time="2025-09-06T00:07:00.989362920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 6 00:07:00.989384 env[1212]: time="2025-09-06T00:07:00.989373680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 6 00:07:00.989462 env[1212]: time="2025-09-06T00:07:00.989395920Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 6 00:07:00.989462 env[1212]: time="2025-09-06T00:07:00.989444280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 6 00:07:00.989693 env[1212]: time="2025-09-06T00:07:00.989644280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 6 00:07:00.990457 env[1212]: time="2025-09-06T00:07:00.989704160Z" level=info msg="Connect containerd service" Sep 6 00:07:00.990457 env[1212]: time="2025-09-06T00:07:00.989733240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 6 00:07:00.991076 env[1212]: time="2025-09-06T00:07:00.990923240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:07:00.991263 env[1212]: 
time="2025-09-06T00:07:00.991235160Z" level=info msg="Start subscribing containerd event" Sep 6 00:07:00.991297 env[1212]: time="2025-09-06T00:07:00.991286440Z" level=info msg="Start recovering state" Sep 6 00:07:00.991382 env[1212]: time="2025-09-06T00:07:00.991370080Z" level=info msg="Start event monitor" Sep 6 00:07:00.991414 env[1212]: time="2025-09-06T00:07:00.991393480Z" level=info msg="Start snapshots syncer" Sep 6 00:07:00.991436 env[1212]: time="2025-09-06T00:07:00.991417240Z" level=info msg="Start cni network conf syncer for default" Sep 6 00:07:00.991436 env[1212]: time="2025-09-06T00:07:00.991426320Z" level=info msg="Start streaming server" Sep 6 00:07:00.992441 env[1212]: time="2025-09-06T00:07:00.992343840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 6 00:07:00.992592 env[1212]: time="2025-09-06T00:07:00.992570200Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 6 00:07:00.992787 env[1212]: time="2025-09-06T00:07:00.992749240Z" level=info msg="containerd successfully booted in 0.080025s" Sep 6 00:07:00.992867 systemd[1]: Started containerd.service. Sep 6 00:07:01.002232 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 6 00:07:01.532573 systemd-networkd[1042]: eth0: Gained IPv6LL Sep 6 00:07:01.535176 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 6 00:07:01.536246 systemd[1]: Reached target network-online.target. Sep 6 00:07:01.540086 systemd[1]: Starting kubelet.service... Sep 6 00:07:02.259001 systemd[1]: Started kubelet.service. Sep 6 00:07:02.689304 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 6 00:07:02.707598 systemd[1]: Finished sshd-keygen.service. Sep 6 00:07:02.709882 systemd[1]: Starting issuegen.service... Sep 6 00:07:02.714641 systemd[1]: issuegen.service: Deactivated successfully. Sep 6 00:07:02.714831 systemd[1]: Finished issuegen.service. Sep 6 00:07:02.716832 systemd[1]: Starting systemd-user-sessions.service... Sep 6 00:07:02.722799 kubelet[1255]: E0906 00:07:02.721042 1255 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 6 00:07:02.723668 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 6 00:07:02.723795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 6 00:07:02.724572 systemd[1]: Finished systemd-user-sessions.service. Sep 6 00:07:02.726716 systemd[1]: Started getty@tty1.service. Sep 6 00:07:02.728658 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 6 00:07:02.729712 systemd[1]: Reached target getty.target. Sep 6 00:07:02.730478 systemd[1]: Reached target multi-user.target. Sep 6 00:07:02.732354 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 6 00:07:02.740517 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 6 00:07:02.740680 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 6 00:07:02.741629 systemd[1]: Startup finished in 564ms (kernel) + 4.010s (initrd) + 5.177s (userspace) = 9.752s. Sep 6 00:07:05.855317 systemd[1]: Created slice system-sshd.slice. Sep 6 00:07:05.856452 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:59932.service. 
Sep 6 00:07:05.904362 sshd[1277]: Accepted publickey for core from 10.0.0.1 port 59932 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:07:05.906747 sshd[1277]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:05.918587 systemd[1]: Created slice user-500.slice. Sep 6 00:07:05.921602 systemd[1]: Starting user-runtime-dir@500.service... Sep 6 00:07:05.923428 systemd-logind[1200]: New session 1 of user core. Sep 6 00:07:05.929471 systemd[1]: Finished user-runtime-dir@500.service. Sep 6 00:07:05.931165 systemd[1]: Starting user@500.service... Sep 6 00:07:05.933941 (systemd)[1280]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:05.995247 systemd[1280]: Queued start job for default target default.target. Sep 6 00:07:05.995703 systemd[1280]: Reached target paths.target. Sep 6 00:07:05.995735 systemd[1280]: Reached target sockets.target. Sep 6 00:07:05.995746 systemd[1280]: Reached target timers.target. Sep 6 00:07:05.995756 systemd[1280]: Reached target basic.target. Sep 6 00:07:05.995809 systemd[1280]: Reached target default.target. Sep 6 00:07:05.995833 systemd[1280]: Startup finished in 54ms. Sep 6 00:07:05.996004 systemd[1]: Started user@500.service. Sep 6 00:07:05.996995 systemd[1]: Started session-1.scope. Sep 6 00:07:06.048654 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:59948.service. Sep 6 00:07:06.097663 sshd[1289]: Accepted publickey for core from 10.0.0.1 port 59948 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:07:06.098924 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:06.103388 systemd[1]: Started session-2.scope. Sep 6 00:07:06.103547 systemd-logind[1200]: New session 2 of user core. Sep 6 00:07:06.156477 sshd[1289]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:06.159229 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:59948.service: Deactivated successfully. Sep 6 00:07:06.159754 systemd[1]: session-2.scope: Deactivated successfully. Sep 6 00:07:06.160300 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit. Sep 6 00:07:06.161244 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:59964.service. Sep 6 00:07:06.162132 systemd-logind[1200]: Removed session 2. Sep 6 00:07:06.201928 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 59964 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:07:06.202999 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:06.205683 systemd-logind[1200]: New session 3 of user core. Sep 6 00:07:06.206421 systemd[1]: Started session-3.scope. Sep 6 00:07:06.255812 sshd[1295]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:06.259353 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:59964.service: Deactivated successfully. Sep 6 00:07:06.259932 systemd[1]: session-3.scope: Deactivated successfully. Sep 6 00:07:06.260425 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit. Sep 6 00:07:06.261429 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:59978.service. Sep 6 00:07:06.262121 systemd-logind[1200]: Removed session 3. Sep 6 00:07:06.302814 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 59978 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:07:06.303930 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:06.307071 systemd-logind[1200]: New session 4 of user core. 
Sep 6 00:07:06.307839 systemd[1]: Started session-4.scope. Sep 6 00:07:06.361395 sshd[1302]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:06.364382 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:59978.service: Deactivated successfully. Sep 6 00:07:06.364908 systemd[1]: session-4.scope: Deactivated successfully. Sep 6 00:07:06.365397 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit. Sep 6 00:07:06.366376 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:59994.service. Sep 6 00:07:06.367031 systemd-logind[1200]: Removed session 4. Sep 6 00:07:06.407108 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 59994 ssh2: RSA SHA256:qG1+2xR4oE658eC9Fiw7rGB0rnUmEEqdEKfJgb+zaY4 Sep 6 00:07:06.408196 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 6 00:07:06.411522 systemd-logind[1200]: New session 5 of user core. Sep 6 00:07:06.411847 systemd[1]: Started session-5.scope. Sep 6 00:07:06.468122 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 6 00:07:06.468326 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 6 00:07:06.479198 systemd[1]: Starting coreos-metadata.service... Sep 6 00:07:06.485073 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 6 00:07:06.485214 systemd[1]: Finished coreos-metadata.service. Sep 6 00:07:06.888658 systemd[1]: Stopped kubelet.service. Sep 6 00:07:06.890627 systemd[1]: Starting kubelet.service... Sep 6 00:07:06.912853 systemd[1]: Reloading. Sep 6 00:07:06.982064 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-09-06T00:07:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 6 00:07:06.982370 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-09-06T00:07:06Z" level=info msg="torcx already run" Sep 6 00:07:07.171525 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 6 00:07:07.171742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 6 00:07:07.187554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:07:07.263444 systemd[1]: Started kubelet.service. Sep 6 00:07:07.264888 systemd[1]: Stopping kubelet.service... Sep 6 00:07:07.265122 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:07:07.265285 systemd[1]: Stopped kubelet.service. Sep 6 00:07:07.266723 systemd[1]: Starting kubelet.service... Sep 6 00:07:07.360945 systemd[1]: Started kubelet.service. Sep 6 00:07:07.392871 kubelet[1413]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:07:07.392871 kubelet[1413]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 6 00:07:07.392871 kubelet[1413]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:07:07.393266 kubelet[1413]: I0906 00:07:07.392907 1413 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 6 00:07:08.001137 kubelet[1413]: I0906 00:07:08.001083 1413 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 6 00:07:08.001137 kubelet[1413]: I0906 00:07:08.001126 1413 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 6 00:07:08.001470 kubelet[1413]: I0906 00:07:08.001442 1413 server.go:934] "Client rotation is on, will bootstrap in background" Sep 6 00:07:08.024742 kubelet[1413]: I0906 00:07:08.024694 1413 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 6 00:07:08.033851 kubelet[1413]: E0906 00:07:08.033812 1413 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 6 00:07:08.033851 kubelet[1413]: I0906 00:07:08.033844 1413 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 6 00:07:08.037854 kubelet[1413]: I0906 00:07:08.037815 1413 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 6 00:07:08.038430 kubelet[1413]: I0906 00:07:08.038411 1413 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 6 00:07:08.038595 kubelet[1413]: I0906 00:07:08.038565 1413 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 6 00:07:08.038755 kubelet[1413]: I0906 00:07:08.038595 1413 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.73","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 6 00:07:08.038837 kubelet[1413]: I0906 00:07:08.038832 1413 topology_manager.go:138] "Creating topology manager with none policy" Sep 6 00:07:08.038862 kubelet[1413]: I0906 00:07:08.038842 1413 container_manager_linux.go:300] "Creating device plugin manager" Sep 6 00:07:08.039099 kubelet[1413]: I0906 00:07:08.039085 1413 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:07:08.045683 kubelet[1413]: I0906 00:07:08.045647 1413 kubelet.go:408] "Attempting to sync node with API server" Sep 6 00:07:08.045683 kubelet[1413]: I0906 00:07:08.045692 1413 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 6 00:07:08.045825 kubelet[1413]: I0906 00:07:08.045731 1413 kubelet.go:314] "Adding apiserver pod source" Sep 6 00:07:08.045825 kubelet[1413]: I0906 00:07:08.045817 1413 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 6 00:07:08.045972 kubelet[1413]: E0906 00:07:08.045938 1413 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:08.046648 kubelet[1413]: E0906 00:07:08.046610 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:08.051247 kubelet[1413]: I0906 00:07:08.051225 1413 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 6 00:07:08.052130 kubelet[1413]: I0906 00:07:08.052110 1413 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 6 00:07:08.052387 kubelet[1413]: W0906 00:07:08.052377 1413 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 6 00:07:08.053618 kubelet[1413]: I0906 00:07:08.053596 1413 server.go:1274] "Started kubelet" Sep 6 00:07:08.054032 kubelet[1413]: I0906 00:07:08.053994 1413 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 6 00:07:08.055905 kubelet[1413]: I0906 00:07:08.055879 1413 server.go:449] "Adding debug handlers to kubelet server" Sep 6 00:07:08.056582 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 6 00:07:08.056720 kubelet[1413]: I0906 00:07:08.056695 1413 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 6 00:07:08.056720 kubelet[1413]: W0906 00:07:08.056710 1413 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.73" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Sep 6 00:07:08.056814 kubelet[1413]: E0906 00:07:08.056741 1413 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.73\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 00:07:08.057512 kubelet[1413]: I0906 00:07:08.057434 1413 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 6 00:07:08.057703 kubelet[1413]: W0906 00:07:08.057660 1413 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Sep 6 00:07:08.057703 kubelet[1413]: E0906 00:07:08.057688 1413 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Sep 6 00:07:08.058102 kubelet[1413]: I0906 00:07:08.058077 1413 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 6 00:07:08.058356 kubelet[1413]: I0906 00:07:08.058171 1413 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 6 00:07:08.059363 kubelet[1413]: I0906 00:07:08.059344 1413 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 6 00:07:08.059430 kubelet[1413]: I0906 00:07:08.059421 1413 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 6 00:07:08.059494 kubelet[1413]: I0906 00:07:08.059479 1413 reconciler.go:26] "Reconciler: start to sync state" Sep 6 00:07:08.059964 kubelet[1413]: E0906 00:07:08.059936 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.060107 kubelet[1413]: I0906 00:07:08.060060 1413 factory.go:221] Registration of the systemd container factory successfully Sep 6 00:07:08.060256 kubelet[1413]: I0906 00:07:08.060228 1413 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 6 00:07:08.060726 kubelet[1413]: E0906 00:07:08.060705 1413 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 6 00:07:08.063613 kubelet[1413]: I0906 00:07:08.063588 1413 factory.go:221] Registration of the containerd container factory successfully Sep 6 00:07:08.065434 kubelet[1413]: E0906 00:07:08.065402 1413 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.73\" not found" node="10.0.0.73" Sep 6 00:07:08.076356 kubelet[1413]: I0906 00:07:08.076331 1413 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 6 00:07:08.076356 kubelet[1413]: I0906 00:07:08.076351 1413 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 6 00:07:08.076483 kubelet[1413]: I0906 00:07:08.076375 1413 state_mem.go:36] "Initialized new in-memory state store" Sep 6 00:07:08.149386 kubelet[1413]: I0906 00:07:08.149347 1413 policy_none.go:49] "None policy: Start" Sep 6 00:07:08.150138 kubelet[1413]: I0906 00:07:08.150116 1413 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 6 00:07:08.150198 kubelet[1413]: I0906 00:07:08.150144 1413 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:07:08.157449 systemd[1]: Created slice kubepods.slice. Sep 6 00:07:08.160210 kubelet[1413]: E0906 00:07:08.160178 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.161897 systemd[1]: Created slice kubepods-besteffort.slice. Sep 6 00:07:08.175968 systemd[1]: Created slice kubepods-burstable.slice. Sep 6 00:07:08.177096 kubelet[1413]: I0906 00:07:08.177074 1413 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 6 00:07:08.177336 kubelet[1413]: I0906 00:07:08.177319 1413 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:07:08.177435 kubelet[1413]: I0906 00:07:08.177401 1413 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:07:08.177906 kubelet[1413]: I0906 00:07:08.177889 1413 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:07:08.179208 kubelet[1413]: E0906 00:07:08.179186 1413 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.73\" not found" Sep 6 00:07:08.218702 kubelet[1413]: I0906 00:07:08.218631 1413 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 6 00:07:08.221333 kubelet[1413]: I0906 00:07:08.221157 1413 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 6 00:07:08.221512 kubelet[1413]: I0906 00:07:08.221499 1413 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 6 00:07:08.221626 kubelet[1413]: I0906 00:07:08.221603 1413 kubelet.go:2321] "Starting kubelet main sync loop" Sep 6 00:07:08.221790 kubelet[1413]: E0906 00:07:08.221751 1413 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 6 00:07:08.278502 kubelet[1413]: I0906 00:07:08.278302 1413 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.73" Sep 6 00:07:08.297154 kubelet[1413]: I0906 00:07:08.297099 1413 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.73" Sep 6 00:07:08.297154 kubelet[1413]: E0906 00:07:08.297139 1413 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.73\": node \"10.0.0.73\" not found" Sep 6 00:07:08.330023 kubelet[1413]: I0906 00:07:08.329986 1413 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 6 00:07:08.330659 env[1212]: time="2025-09-06T00:07:08.330586496Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 6 00:07:08.330949 kubelet[1413]: I0906 00:07:08.330870 1413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 6 00:07:08.358487 kubelet[1413]: E0906 00:07:08.358447 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.392913 sudo[1311]: pam_unix(sudo:session): session closed for user root Sep 6 00:07:08.394800 sshd[1308]: pam_unix(sshd:session): session closed for user core Sep 6 00:07:08.397060 systemd[1]: session-5.scope: Deactivated successfully. Sep 6 00:07:08.397687 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit. Sep 6 00:07:08.397845 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:59994.service: Deactivated successfully. Sep 6 00:07:08.398823 systemd-logind[1200]: Removed session 5. 
Sep 6 00:07:08.458772 kubelet[1413]: E0906 00:07:08.458711 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.559749 kubelet[1413]: E0906 00:07:08.559647 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.660115 kubelet[1413]: E0906 00:07:08.660077 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.761093 kubelet[1413]: E0906 00:07:08.761011 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.862060 kubelet[1413]: E0906 00:07:08.861959 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:08.962621 kubelet[1413]: E0906 00:07:08.962560 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:09.003905 kubelet[1413]: I0906 00:07:09.003853 1413 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 6 00:07:09.004039 kubelet[1413]: W0906 00:07:09.003999 1413 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:07:09.004039 kubelet[1413]: W0906 00:07:09.004031 1413 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:07:09.004138 kubelet[1413]: W0906 00:07:09.004052 1413 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 6 00:07:09.047241 kubelet[1413]: E0906 00:07:09.047146 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:09.062878 kubelet[1413]: E0906 00:07:09.062824 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:09.163097 kubelet[1413]: E0906 00:07:09.162951 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:09.263742 kubelet[1413]: E0906 00:07:09.263688 1413 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.73\" not found" Sep 6 00:07:10.048282 kubelet[1413]: E0906 00:07:10.048244 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:10.048624 kubelet[1413]: I0906 00:07:10.048365 1413 apiserver.go:52] "Watching apiserver" Sep 6 00:07:10.058899 systemd[1]: Created slice kubepods-besteffort-pod9ec63823_6eb8_4b56_be25_273efe7c4de2.slice. 
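With the systemd cgroup driver, every pod gets its own slice: the kubepods-besteffort-pod9ec63823_6eb8_4b56_be25_273efe7c4de2.slice created above is pod UID 9ec63823-6eb8-4b56-be25-273efe7c4de2 (the kube-proxy-km2pb pod in the volume lines that follow) with its dashes flattened to underscores, parented under kubepods.slice and its BestEffort QoS slice. Two generic ways to inspect that hierarchy on any such node:

    systemd-cgls kubepods.slice     # walk the kubepods cgroup tree
    systemctl status 'kubepods-besteffort-pod9ec63823_6eb8_4b56_be25_273efe7c4de2.slice'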
Sep 6 00:07:10.060099 kubelet[1413]: I0906 00:07:10.060068 1413 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 6 00:07:10.069520 kubelet[1413]: I0906 00:07:10.069480 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-net\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069520 kubelet[1413]: I0906 00:07:10.069516 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-kernel\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069534 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ec63823-6eb8-4b56-be25-273efe7c4de2-kube-proxy\") pod \"kube-proxy-km2pb\" (UID: \"9ec63823-6eb8-4b56-be25-273efe7c4de2\") " pod="kube-system/kube-proxy-km2pb" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069552 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ec63823-6eb8-4b56-be25-273efe7c4de2-xtables-lock\") pod \"kube-proxy-km2pb\" (UID: \"9ec63823-6eb8-4b56-be25-273efe7c4de2\") " pod="kube-system/kube-proxy-km2pb" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069568 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-bpf-maps\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069583 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cni-path\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069598 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98e04579-0660-4831-8229-d314ae64eae9-clustermesh-secrets\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069620 kubelet[1413]: I0906 00:07:10.069613 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l2f8\" (UniqueName: \"kubernetes.io/projected/9ec63823-6eb8-4b56-be25-273efe7c4de2-kube-api-access-4l2f8\") pod \"kube-proxy-km2pb\" (UID: \"9ec63823-6eb8-4b56-be25-273efe7c4de2\") " pod="kube-system/kube-proxy-km2pb" Sep 6 00:07:10.069788 kubelet[1413]: I0906 00:07:10.069630 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-cgroup\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 
00:07:10.069788 kubelet[1413]: I0906 00:07:10.069649 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-etc-cni-netd\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069788 kubelet[1413]: I0906 00:07:10.069663 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z484c\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-kube-api-access-z484c\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069788 kubelet[1413]: I0906 00:07:10.069677 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-hostproc\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069788 kubelet[1413]: I0906 00:07:10.069692 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98e04579-0660-4831-8229-d314ae64eae9-cilium-config-path\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069788 kubelet[1413]: I0906 00:07:10.069706 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-hubble-tls\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069910 kubelet[1413]: I0906 00:07:10.069719 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ec63823-6eb8-4b56-be25-273efe7c4de2-lib-modules\") pod \"kube-proxy-km2pb\" (UID: \"9ec63823-6eb8-4b56-be25-273efe7c4de2\") " pod="kube-system/kube-proxy-km2pb" Sep 6 00:07:10.069910 kubelet[1413]: I0906 00:07:10.069740 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-run\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069910 kubelet[1413]: I0906 00:07:10.069791 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-lib-modules\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.069910 kubelet[1413]: I0906 00:07:10.069816 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-xtables-lock\") pod \"cilium-zs8nw\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " pod="kube-system/cilium-zs8nw" Sep 6 00:07:10.076860 systemd[1]: Created slice kubepods-burstable-pod98e04579_0660_4831_8229_d314ae64eae9.slice. 
Sep 6 00:07:10.172084 kubelet[1413]: I0906 00:07:10.172045 1413 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 6 00:07:10.376469 kubelet[1413]: E0906 00:07:10.376364 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:10.377728 env[1212]: time="2025-09-06T00:07:10.377685522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-km2pb,Uid:9ec63823-6eb8-4b56-be25-273efe7c4de2,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:10.386938 kubelet[1413]: E0906 00:07:10.386913 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:10.387333 env[1212]: time="2025-09-06T00:07:10.387303761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zs8nw,Uid:98e04579-0660-4831-8229-d314ae64eae9,Namespace:kube-system,Attempt:0,}" Sep 6 00:07:10.908958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440667793.mount: Deactivated successfully. Sep 6 00:07:10.913428 env[1212]: time="2025-09-06T00:07:10.913384744Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.915896 env[1212]: time="2025-09-06T00:07:10.915863062Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.917288 env[1212]: time="2025-09-06T00:07:10.917251120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.919524 env[1212]: time="2025-09-06T00:07:10.919465030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.921180 env[1212]: time="2025-09-06T00:07:10.921147276Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.923658 env[1212]: time="2025-09-06T00:07:10.923595974Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.925085 env[1212]: time="2025-09-06T00:07:10.925042870Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.929905 env[1212]: time="2025-09-06T00:07:10.929876211Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:10.949443 env[1212]: time="2025-09-06T00:07:10.949384323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949376697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949416350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949431401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949594829Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26 pid=1478 runtime=io.containerd.runc.v2 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949415427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:10.949631 env[1212]: time="2025-09-06T00:07:10.949426344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:10.949916 env[1212]: time="2025-09-06T00:07:10.949629666Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a5deebe280b803bc07c62640c84d542dadaf32938100a1ede1db50060e950f0 pid=1477 runtime=io.containerd.runc.v2 Sep 6 00:07:10.968815 systemd[1]: Started cri-containerd-ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26.scope. Sep 6 00:07:10.979537 systemd[1]: Started cri-containerd-2a5deebe280b803bc07c62640c84d542dadaf32938100a1ede1db50060e950f0.scope. 
Sep 6 00:07:11.008390 env[1212]: time="2025-09-06T00:07:11.008341496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zs8nw,Uid:98e04579-0660-4831-8229-d314ae64eae9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\"" Sep 6 00:07:11.009535 kubelet[1413]: E0906 00:07:11.009506 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:11.011106 env[1212]: time="2025-09-06T00:07:11.011075809Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:07:11.016210 env[1212]: time="2025-09-06T00:07:11.016168089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-km2pb,Uid:9ec63823-6eb8-4b56-be25-273efe7c4de2,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a5deebe280b803bc07c62640c84d542dadaf32938100a1ede1db50060e950f0\"" Sep 6 00:07:11.017589 kubelet[1413]: E0906 00:07:11.017252 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:11.049453 kubelet[1413]: E0906 00:07:11.049336 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:12.049642 kubelet[1413]: E0906 00:07:12.049598 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:13.050015 kubelet[1413]: E0906 00:07:13.049983 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:14.050461 kubelet[1413]: E0906 00:07:14.050417 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:15.050839 kubelet[1413]: E0906 00:07:15.050787 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:15.531550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount932713389.mount: Deactivated successfully. 
Sep 6 00:07:16.051510 kubelet[1413]: E0906 00:07:16.051451 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:17.051913 kubelet[1413]: E0906 00:07:17.051850 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:17.863276 env[1212]: time="2025-09-06T00:07:17.863227934Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:17.868421 env[1212]: time="2025-09-06T00:07:17.868383222Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:17.870429 env[1212]: time="2025-09-06T00:07:17.870382462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:17.871118 env[1212]: time="2025-09-06T00:07:17.871075337Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:07:17.872896 env[1212]: time="2025-09-06T00:07:17.872869026Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 6 00:07:17.876794 env[1212]: time="2025-09-06T00:07:17.874943005Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:07:17.889466 env[1212]: time="2025-09-06T00:07:17.889422607Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\"" Sep 6 00:07:17.890386 env[1212]: time="2025-09-06T00:07:17.890353356Z" level=info msg="StartContainer for \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\"" Sep 6 00:07:17.911046 systemd[1]: Started cri-containerd-786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531.scope. Sep 6 00:07:17.912700 systemd[1]: run-containerd-runc-k8s.io-786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531-runc.NQdxzK.mount: Deactivated successfully. Sep 6 00:07:17.944248 env[1212]: time="2025-09-06T00:07:17.944201990Z" level=info msg="StartContainer for \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\" returns successfully" Sep 6 00:07:17.954810 systemd[1]: cri-containerd-786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531.scope: Deactivated successfully. 
Sep 6 00:07:18.052520 kubelet[1413]: E0906 00:07:18.052477 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:18.059634 env[1212]: time="2025-09-06T00:07:18.059538190Z" level=info msg="shim disconnected" id=786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531 Sep 6 00:07:18.059634 env[1212]: time="2025-09-06T00:07:18.059595776Z" level=warning msg="cleaning up after shim disconnected" id=786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531 namespace=k8s.io Sep 6 00:07:18.059634 env[1212]: time="2025-09-06T00:07:18.059631938Z" level=info msg="cleaning up dead shim" Sep 6 00:07:18.066903 env[1212]: time="2025-09-06T00:07:18.066865298Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1593 runtime=io.containerd.runc.v2\n" Sep 6 00:07:18.239771 kubelet[1413]: E0906 00:07:18.239668 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:18.242968 env[1212]: time="2025-09-06T00:07:18.242927539Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:07:18.258855 env[1212]: time="2025-09-06T00:07:18.258803647Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\"" Sep 6 00:07:18.259599 env[1212]: time="2025-09-06T00:07:18.259529446Z" level=info msg="StartContainer for \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\"" Sep 6 00:07:18.282997 systemd[1]: Started cri-containerd-b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b.scope. Sep 6 00:07:18.316245 env[1212]: time="2025-09-06T00:07:18.316201344Z" level=info msg="StartContainer for \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\" returns successfully" Sep 6 00:07:18.325840 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:07:18.326084 systemd[1]: Stopped systemd-sysctl.service. Sep 6 00:07:18.326874 systemd[1]: Stopping systemd-sysctl.service... Sep 6 00:07:18.328325 systemd[1]: Starting systemd-sysctl.service... Sep 6 00:07:18.331491 systemd[1]: cri-containerd-b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b.scope: Deactivated successfully. Sep 6 00:07:18.335470 systemd[1]: Finished systemd-sysctl.service. 
Sep 6 00:07:18.354968 env[1212]: time="2025-09-06T00:07:18.354906637Z" level=info msg="shim disconnected" id=b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b Sep 6 00:07:18.355110 env[1212]: time="2025-09-06T00:07:18.354957776Z" level=warning msg="cleaning up after shim disconnected" id=b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b namespace=k8s.io Sep 6 00:07:18.355110 env[1212]: time="2025-09-06T00:07:18.355058132Z" level=info msg="cleaning up dead shim" Sep 6 00:07:18.361947 env[1212]: time="2025-09-06T00:07:18.361889707Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1659 runtime=io.containerd.runc.v2\n" Sep 6 00:07:18.886356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531-rootfs.mount: Deactivated successfully. Sep 6 00:07:19.053383 kubelet[1413]: E0906 00:07:19.053343 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:19.082581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763190111.mount: Deactivated successfully. Sep 6 00:07:19.244807 kubelet[1413]: E0906 00:07:19.244699 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:19.248765 env[1212]: time="2025-09-06T00:07:19.248694780Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:07:19.268093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount843645373.mount: Deactivated successfully. Sep 6 00:07:19.275530 env[1212]: time="2025-09-06T00:07:19.275487839Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\"" Sep 6 00:07:19.276206 env[1212]: time="2025-09-06T00:07:19.276092130Z" level=info msg="StartContainer for \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\"" Sep 6 00:07:19.291380 systemd[1]: Started cri-containerd-e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45.scope. Sep 6 00:07:19.321905 env[1212]: time="2025-09-06T00:07:19.321809769Z" level=info msg="StartContainer for \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\" returns successfully" Sep 6 00:07:19.326902 systemd[1]: cri-containerd-e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45.scope: Deactivated successfully. 
Sep 6 00:07:19.442075 env[1212]: time="2025-09-06T00:07:19.442028560Z" level=info msg="shim disconnected" id=e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45 Sep 6 00:07:19.442317 env[1212]: time="2025-09-06T00:07:19.442297552Z" level=warning msg="cleaning up after shim disconnected" id=e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45 namespace=k8s.io Sep 6 00:07:19.442380 env[1212]: time="2025-09-06T00:07:19.442366622Z" level=info msg="cleaning up dead shim" Sep 6 00:07:19.449616 env[1212]: time="2025-09-06T00:07:19.449580959Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1717 runtime=io.containerd.runc.v2\n" Sep 6 00:07:19.565796 env[1212]: time="2025-09-06T00:07:19.565682065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:19.567492 env[1212]: time="2025-09-06T00:07:19.567465028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:19.569238 env[1212]: time="2025-09-06T00:07:19.569199983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:19.570541 env[1212]: time="2025-09-06T00:07:19.570515113Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:19.570983 env[1212]: time="2025-09-06T00:07:19.570962205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 6 00:07:19.573432 env[1212]: time="2025-09-06T00:07:19.573402033Z" level=info msg="CreateContainer within sandbox \"2a5deebe280b803bc07c62640c84d542dadaf32938100a1ede1db50060e950f0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:07:19.587352 env[1212]: time="2025-09-06T00:07:19.587304855Z" level=info msg="CreateContainer within sandbox \"2a5deebe280b803bc07c62640c84d542dadaf32938100a1ede1db50060e950f0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2eaf89b461e39beb3ac072c4f88503d479647410e0d55c1024df31e190eea0af\"" Sep 6 00:07:19.587829 env[1212]: time="2025-09-06T00:07:19.587803879Z" level=info msg="StartContainer for \"2eaf89b461e39beb3ac072c4f88503d479647410e0d55c1024df31e190eea0af\"" Sep 6 00:07:19.601828 systemd[1]: Started cri-containerd-2eaf89b461e39beb3ac072c4f88503d479647410e0d55c1024df31e190eea0af.scope. 
Sep 6 00:07:19.630568 env[1212]: time="2025-09-06T00:07:19.630526409Z" level=info msg="StartContainer for \"2eaf89b461e39beb3ac072c4f88503d479647410e0d55c1024df31e190eea0af\" returns successfully" Sep 6 00:07:20.053564 kubelet[1413]: E0906 00:07:20.053452 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:20.248149 kubelet[1413]: E0906 00:07:20.247984 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:20.249516 kubelet[1413]: E0906 00:07:20.249494 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:20.250188 env[1212]: time="2025-09-06T00:07:20.250143398Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:07:20.262710 env[1212]: time="2025-09-06T00:07:20.262660917Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\"" Sep 6 00:07:20.263170 env[1212]: time="2025-09-06T00:07:20.263142343Z" level=info msg="StartContainer for \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\"" Sep 6 00:07:20.275291 kubelet[1413]: I0906 00:07:20.275228 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-km2pb" podStartSLOduration=3.720921899 podStartE2EDuration="12.275211786s" podCreationTimestamp="2025-09-06 00:07:08 +0000 UTC" firstStartedPulling="2025-09-06 00:07:11.017808067 +0000 UTC m=+3.653749680" lastFinishedPulling="2025-09-06 00:07:19.572097954 +0000 UTC m=+12.208039567" observedRunningTime="2025-09-06 00:07:20.275120825 +0000 UTC m=+12.911062438" watchObservedRunningTime="2025-09-06 00:07:20.275211786 +0000 UTC m=+12.911153399" Sep 6 00:07:20.281204 systemd[1]: Started cri-containerd-170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420.scope. Sep 6 00:07:20.305069 systemd[1]: cri-containerd-170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420.scope: Deactivated successfully. 
Sep 6 00:07:20.306172 env[1212]: time="2025-09-06T00:07:20.306115578Z" level=info msg="StartContainer for \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\" returns successfully" Sep 6 00:07:20.349941 env[1212]: time="2025-09-06T00:07:20.349897129Z" level=info msg="shim disconnected" id=170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420 Sep 6 00:07:20.350189 env[1212]: time="2025-09-06T00:07:20.350168009Z" level=warning msg="cleaning up after shim disconnected" id=170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420 namespace=k8s.io Sep 6 00:07:20.350256 env[1212]: time="2025-09-06T00:07:20.350242515Z" level=info msg="cleaning up dead shim" Sep 6 00:07:20.356594 env[1212]: time="2025-09-06T00:07:20.356566432Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1946 runtime=io.containerd.runc.v2\n" Sep 6 00:07:20.885419 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420-rootfs.mount: Deactivated successfully. Sep 6 00:07:21.054470 kubelet[1413]: E0906 00:07:21.054433 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:21.253065 kubelet[1413]: E0906 00:07:21.252935 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:21.253065 kubelet[1413]: E0906 00:07:21.252981 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:21.255340 env[1212]: time="2025-09-06T00:07:21.255301247Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:07:21.268152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569615615.mount: Deactivated successfully. Sep 6 00:07:21.269847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910968977.mount: Deactivated successfully. Sep 6 00:07:21.271997 env[1212]: time="2025-09-06T00:07:21.271909831Z" level=info msg="CreateContainer within sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\"" Sep 6 00:07:21.272534 env[1212]: time="2025-09-06T00:07:21.272444925Z" level=info msg="StartContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\"" Sep 6 00:07:21.287044 systemd[1]: Started cri-containerd-bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778.scope. Sep 6 00:07:21.317730 env[1212]: time="2025-09-06T00:07:21.317689089Z" level=info msg="StartContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" returns successfully" Sep 6 00:07:21.384205 kubelet[1413]: I0906 00:07:21.384174 1413 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 6 00:07:21.457781 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 6 00:07:21.695778 kernel: Initializing XFRM netlink socket Sep 6 00:07:21.697791 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Sep 6 00:07:22.055568 kubelet[1413]: E0906 00:07:22.055440 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:22.256767 kubelet[1413]: E0906 00:07:22.256717 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:22.900775 systemd-networkd[1042]: cilium_host: Link UP Sep 6 00:07:22.901473 systemd-networkd[1042]: cilium_net: Link UP Sep 6 00:07:22.903290 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 6 00:07:22.903347 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 6 00:07:22.903463 systemd-networkd[1042]: cilium_net: Gained carrier Sep 6 00:07:22.903628 systemd-networkd[1042]: cilium_host: Gained carrier Sep 6 00:07:22.975193 systemd-networkd[1042]: cilium_vxlan: Link UP Sep 6 00:07:22.975200 systemd-networkd[1042]: cilium_vxlan: Gained carrier Sep 6 00:07:23.055673 kubelet[1413]: E0906 00:07:23.055631 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:23.220891 systemd-networkd[1042]: cilium_net: Gained IPv6LL Sep 6 00:07:23.225785 kernel: NET: Registered PF_ALG protocol family Sep 6 00:07:23.258450 kubelet[1413]: E0906 00:07:23.258406 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:23.283935 systemd-networkd[1042]: cilium_host: Gained IPv6LL Sep 6 00:07:23.787974 systemd-networkd[1042]: lxc_health: Link UP Sep 6 00:07:23.798281 systemd-networkd[1042]: lxc_health: Gained carrier Sep 6 00:07:23.798782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:07:24.056434 kubelet[1413]: E0906 00:07:24.056324 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:24.059889 systemd-networkd[1042]: cilium_vxlan: Gained IPv6LL Sep 6 00:07:24.259967 kubelet[1413]: E0906 00:07:24.259939 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:24.378662 kubelet[1413]: I0906 00:07:24.378533 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zs8nw" podStartSLOduration=9.516591742 podStartE2EDuration="16.378512863s" podCreationTimestamp="2025-09-06 00:07:08 +0000 UTC" firstStartedPulling="2025-09-06 00:07:11.010522583 +0000 UTC m=+3.646464196" lastFinishedPulling="2025-09-06 00:07:17.872443704 +0000 UTC m=+10.508385317" observedRunningTime="2025-09-06 00:07:22.272079873 +0000 UTC m=+14.908021486" watchObservedRunningTime="2025-09-06 00:07:24.378512863 +0000 UTC m=+17.014454476" Sep 6 00:07:24.383332 systemd[1]: Created slice kubepods-besteffort-podd56a2b3a_1f71_4a32_b1e4_80e9f03dc232.slice. 
Sep 6 00:07:24.467321 kubelet[1413]: I0906 00:07:24.467274 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvzbm\" (UniqueName: \"kubernetes.io/projected/d56a2b3a-1f71-4a32-b1e4-80e9f03dc232-kube-api-access-dvzbm\") pod \"nginx-deployment-8587fbcb89-f5w5w\" (UID: \"d56a2b3a-1f71-4a32-b1e4-80e9f03dc232\") " pod="default/nginx-deployment-8587fbcb89-f5w5w" Sep 6 00:07:24.686689 env[1212]: time="2025-09-06T00:07:24.686539415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-f5w5w,Uid:d56a2b3a-1f71-4a32-b1e4-80e9f03dc232,Namespace:default,Attempt:0,}" Sep 6 00:07:24.720854 systemd-networkd[1042]: lxceb9eb00e12d2: Link UP Sep 6 00:07:24.732795 kernel: eth0: renamed from tmp985d5 Sep 6 00:07:24.740787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:07:24.740869 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceb9eb00e12d2: link becomes ready Sep 6 00:07:24.741177 systemd-networkd[1042]: lxceb9eb00e12d2: Gained carrier Sep 6 00:07:25.057498 kubelet[1413]: E0906 00:07:25.057380 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:25.261282 kubelet[1413]: E0906 00:07:25.261226 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:25.467942 systemd-networkd[1042]: lxc_health: Gained IPv6LL Sep 6 00:07:26.057861 kubelet[1413]: E0906 00:07:26.057804 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:26.235950 systemd-networkd[1042]: lxceb9eb00e12d2: Gained IPv6LL Sep 6 00:07:26.262471 kubelet[1413]: E0906 00:07:26.262425 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:27.057976 kubelet[1413]: E0906 00:07:27.057929 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:27.264087 kubelet[1413]: E0906 00:07:27.263889 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:07:28.050464 kubelet[1413]: E0906 00:07:28.050397 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:28.059765 kubelet[1413]: E0906 00:07:28.059730 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:28.272274 env[1212]: time="2025-09-06T00:07:28.272209436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:28.272274 env[1212]: time="2025-09-06T00:07:28.272249808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:28.272274 env[1212]: time="2025-09-06T00:07:28.272260171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:28.272630 env[1212]: time="2025-09-06T00:07:28.272408176Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/985d522e4121ebb1dede6ff9c5490f2c77cbf3fb6d694c6cc87391209686216b pid=2485 runtime=io.containerd.runc.v2 Sep 6 00:07:28.286947 systemd[1]: Started cri-containerd-985d522e4121ebb1dede6ff9c5490f2c77cbf3fb6d694c6cc87391209686216b.scope. Sep 6 00:07:28.303140 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:07:28.320595 env[1212]: time="2025-09-06T00:07:28.320551987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-f5w5w,Uid:d56a2b3a-1f71-4a32-b1e4-80e9f03dc232,Namespace:default,Attempt:0,} returns sandbox id \"985d522e4121ebb1dede6ff9c5490f2c77cbf3fb6d694c6cc87391209686216b\"" Sep 6 00:07:28.322112 env[1212]: time="2025-09-06T00:07:28.322079331Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:07:29.060072 kubelet[1413]: E0906 00:07:29.060018 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:30.060869 kubelet[1413]: E0906 00:07:30.060825 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:30.259018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384998334.mount: Deactivated successfully. Sep 6 00:07:31.061333 kubelet[1413]: E0906 00:07:31.061282 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:31.561107 env[1212]: time="2025-09-06T00:07:31.561010356Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:31.563192 env[1212]: time="2025-09-06T00:07:31.563153713Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:31.565625 env[1212]: time="2025-09-06T00:07:31.565598051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:31.568065 env[1212]: time="2025-09-06T00:07:31.568023666Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:31.568871 env[1212]: time="2025-09-06T00:07:31.568843833Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:07:31.571490 env[1212]: time="2025-09-06T00:07:31.571458006Z" level=info msg="CreateContainer within sandbox \"985d522e4121ebb1dede6ff9c5490f2c77cbf3fb6d694c6cc87391209686216b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Sep 6 00:07:31.584941 env[1212]: time="2025-09-06T00:07:31.584906908Z" level=info msg="CreateContainer within sandbox \"985d522e4121ebb1dede6ff9c5490f2c77cbf3fb6d694c6cc87391209686216b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"866c8c0aed6fc7bc3f437dc8f8a433849029c8ed491667187825f0aa46cc3c53\"" Sep 6 00:07:31.585676 env[1212]: time="2025-09-06T00:07:31.585640658Z" level=info msg="StartContainer for \"866c8c0aed6fc7bc3f437dc8f8a433849029c8ed491667187825f0aa46cc3c53\"" Sep 6 00:07:31.612845 systemd[1]: Started cri-containerd-866c8c0aed6fc7bc3f437dc8f8a433849029c8ed491667187825f0aa46cc3c53.scope. Sep 6 00:07:31.640205 env[1212]: time="2025-09-06T00:07:31.640161733Z" level=info msg="StartContainer for \"866c8c0aed6fc7bc3f437dc8f8a433849029c8ed491667187825f0aa46cc3c53\" returns successfully" Sep 6 00:07:32.061988 kubelet[1413]: E0906 00:07:32.061948 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:32.286657 kubelet[1413]: I0906 00:07:32.286576 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-f5w5w" podStartSLOduration=5.038357066 podStartE2EDuration="8.286531893s" podCreationTimestamp="2025-09-06 00:07:24 +0000 UTC" firstStartedPulling="2025-09-06 00:07:28.321701256 +0000 UTC m=+20.957642869" lastFinishedPulling="2025-09-06 00:07:31.569876083 +0000 UTC m=+24.205817696" observedRunningTime="2025-09-06 00:07:32.286203554 +0000 UTC m=+24.922145167" watchObservedRunningTime="2025-09-06 00:07:32.286531893 +0000 UTC m=+24.922473506" Sep 6 00:07:33.062842 kubelet[1413]: E0906 00:07:33.062794 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:34.063243 kubelet[1413]: E0906 00:07:34.063153 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:35.063920 kubelet[1413]: E0906 00:07:35.063826 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:36.064823 kubelet[1413]: E0906 00:07:36.064789 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:36.340669 systemd[1]: Created slice kubepods-besteffort-podca06f2ea_c3b8_4abf_bc0d_41539a533114.slice. 
Sep 6 00:07:36.443928 kubelet[1413]: I0906 00:07:36.443883 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ca06f2ea-c3b8-4abf-bc0d-41539a533114-data\") pod \"nfs-server-provisioner-0\" (UID: \"ca06f2ea-c3b8-4abf-bc0d-41539a533114\") " pod="default/nfs-server-provisioner-0" Sep 6 00:07:36.444192 kubelet[1413]: I0906 00:07:36.444173 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6h2\" (UniqueName: \"kubernetes.io/projected/ca06f2ea-c3b8-4abf-bc0d-41539a533114-kube-api-access-fh6h2\") pod \"nfs-server-provisioner-0\" (UID: \"ca06f2ea-c3b8-4abf-bc0d-41539a533114\") " pod="default/nfs-server-provisioner-0" Sep 6 00:07:36.643914 env[1212]: time="2025-09-06T00:07:36.643807641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ca06f2ea-c3b8-4abf-bc0d-41539a533114,Namespace:default,Attempt:0,}" Sep 6 00:07:36.673428 systemd-networkd[1042]: lxc09f4d05a0501: Link UP Sep 6 00:07:36.686834 kernel: eth0: renamed from tmpb9a5e Sep 6 00:07:36.693780 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:07:36.693851 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc09f4d05a0501: link becomes ready Sep 6 00:07:36.694925 systemd-networkd[1042]: lxc09f4d05a0501: Gained carrier Sep 6 00:07:36.830218 env[1212]: time="2025-09-06T00:07:36.830150715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:36.830350 env[1212]: time="2025-09-06T00:07:36.830191040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:36.830350 env[1212]: time="2025-09-06T00:07:36.830208882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:36.830447 env[1212]: time="2025-09-06T00:07:36.830341697Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9a5e87ad99ea1397299a61b1105ae22bc0cbff58417858294de76daa5c5811e pid=2617 runtime=io.containerd.runc.v2 Sep 6 00:07:36.845341 systemd[1]: Started cri-containerd-b9a5e87ad99ea1397299a61b1105ae22bc0cbff58417858294de76daa5c5811e.scope. 
Sep 6 00:07:36.861652 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:07:36.876777 env[1212]: time="2025-09-06T00:07:36.876728122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ca06f2ea-c3b8-4abf-bc0d-41539a533114,Namespace:default,Attempt:0,} returns sandbox id \"b9a5e87ad99ea1397299a61b1105ae22bc0cbff58417858294de76daa5c5811e\"" Sep 6 00:07:36.878255 env[1212]: time="2025-09-06T00:07:36.878233098Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Sep 6 00:07:37.066154 kubelet[1413]: E0906 00:07:37.065590 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:38.012076 systemd-networkd[1042]: lxc09f4d05a0501: Gained IPv6LL Sep 6 00:07:38.066764 kubelet[1413]: E0906 00:07:38.066657 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:38.960139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount390617790.mount: Deactivated successfully. Sep 6 00:07:39.067207 kubelet[1413]: E0906 00:07:39.067037 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:40.067616 kubelet[1413]: E0906 00:07:40.067561 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:40.783766 env[1212]: time="2025-09-06T00:07:40.783705332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:40.784722 env[1212]: time="2025-09-06T00:07:40.784693945Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:40.788437 env[1212]: time="2025-09-06T00:07:40.786609924Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:40.790324 env[1212]: time="2025-09-06T00:07:40.790278708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:40.791339 env[1212]: time="2025-09-06T00:07:40.791267640Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Sep 6 00:07:40.795258 env[1212]: time="2025-09-06T00:07:40.795200129Z" level=info msg="CreateContainer within sandbox \"b9a5e87ad99ea1397299a61b1105ae22bc0cbff58417858294de76daa5c5811e\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Sep 6 00:07:40.807895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491607758.mount: Deactivated successfully. Sep 6 00:07:40.809883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2630532278.mount: Deactivated successfully. 
Sep 6 00:07:40.815005 env[1212]: time="2025-09-06T00:07:40.814957139Z" level=info msg="CreateContainer within sandbox \"b9a5e87ad99ea1397299a61b1105ae22bc0cbff58417858294de76daa5c5811e\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"135239ec2b38c136c4949e1b21794ba9068da7884921902ae1648f3ef5b79e69\"" Sep 6 00:07:40.815587 env[1212]: time="2025-09-06T00:07:40.815560556Z" level=info msg="StartContainer for \"135239ec2b38c136c4949e1b21794ba9068da7884921902ae1648f3ef5b79e69\"" Sep 6 00:07:40.839725 systemd[1]: Started cri-containerd-135239ec2b38c136c4949e1b21794ba9068da7884921902ae1648f3ef5b79e69.scope. Sep 6 00:07:40.869776 env[1212]: time="2025-09-06T00:07:40.869717708Z" level=info msg="StartContainer for \"135239ec2b38c136c4949e1b21794ba9068da7884921902ae1648f3ef5b79e69\" returns successfully" Sep 6 00:07:41.068346 kubelet[1413]: E0906 00:07:41.068223 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:41.319574 kubelet[1413]: I0906 00:07:41.318074 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.4030922860000001 podStartE2EDuration="5.318059696s" podCreationTimestamp="2025-09-06 00:07:36 +0000 UTC" firstStartedPulling="2025-09-06 00:07:36.877936704 +0000 UTC m=+29.513878317" lastFinishedPulling="2025-09-06 00:07:40.792904114 +0000 UTC m=+33.428845727" observedRunningTime="2025-09-06 00:07:41.317854558 +0000 UTC m=+33.953796171" watchObservedRunningTime="2025-09-06 00:07:41.318059696 +0000 UTC m=+33.954001309" Sep 6 00:07:41.803389 systemd[1]: run-containerd-runc-k8s.io-135239ec2b38c136c4949e1b21794ba9068da7884921902ae1648f3ef5b79e69-runc.FKKSHl.mount: Deactivated successfully. Sep 6 00:07:42.069149 kubelet[1413]: E0906 00:07:42.069039 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:43.070712 kubelet[1413]: E0906 00:07:43.070357 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:44.071905 kubelet[1413]: E0906 00:07:44.071862 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:45.072917 kubelet[1413]: E0906 00:07:45.072860 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:45.994103 update_engine[1203]: I0906 00:07:45.993817 1203 update_attempter.cc:509] Updating boot flags... 
Sep 6 00:07:46.073421 kubelet[1413]: E0906 00:07:46.073380 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:47.074492 kubelet[1413]: E0906 00:07:47.074423 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:48.046073 kubelet[1413]: E0906 00:07:48.046028 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:48.075448 kubelet[1413]: E0906 00:07:48.075418 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:49.076129 kubelet[1413]: E0906 00:07:49.076080 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:50.077098 kubelet[1413]: E0906 00:07:50.076686 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:50.294289 systemd[1]: Created slice kubepods-besteffort-pod41d8e1bc_313f_491d_9a73_5b93c3020234.slice. Sep 6 00:07:50.330986 kubelet[1413]: I0906 00:07:50.330900 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e4c121dd-975c-4a11-b605-b53ffe863c56\" (UniqueName: \"kubernetes.io/nfs/41d8e1bc-313f-491d-9a73-5b93c3020234-pvc-e4c121dd-975c-4a11-b605-b53ffe863c56\") pod \"test-pod-1\" (UID: \"41d8e1bc-313f-491d-9a73-5b93c3020234\") " pod="default/test-pod-1" Sep 6 00:07:50.331153 kubelet[1413]: I0906 00:07:50.331134 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbfl9\" (UniqueName: \"kubernetes.io/projected/41d8e1bc-313f-491d-9a73-5b93c3020234-kube-api-access-qbfl9\") pod \"test-pod-1\" (UID: \"41d8e1bc-313f-491d-9a73-5b93c3020234\") " pod="default/test-pod-1" Sep 6 00:07:50.464793 kernel: FS-Cache: Loaded Sep 6 00:07:50.494234 kernel: RPC: Registered named UNIX socket transport module. Sep 6 00:07:50.494351 kernel: RPC: Registered udp transport module. Sep 6 00:07:50.494374 kernel: RPC: Registered tcp transport module. Sep 6 00:07:50.495775 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Sep 6 00:07:50.536789 kernel: FS-Cache: Netfs 'nfs' registered for caching Sep 6 00:07:50.667215 kernel: NFS: Registering the id_resolver key type Sep 6 00:07:50.667354 kernel: Key type id_resolver registered Sep 6 00:07:50.667378 kernel: Key type id_legacy registered Sep 6 00:07:50.704882 nfsidmap[2753]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 6 00:07:50.708948 nfsidmap[2756]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Sep 6 00:07:50.898169 env[1212]: time="2025-09-06T00:07:50.897536624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41d8e1bc-313f-491d-9a73-5b93c3020234,Namespace:default,Attempt:0,}" Sep 6 00:07:50.940829 systemd-networkd[1042]: lxc0befd9ee1af0: Link UP Sep 6 00:07:50.954306 kernel: eth0: renamed from tmpcf8a0 Sep 6 00:07:50.958877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 6 00:07:50.958953 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc0befd9ee1af0: link becomes ready Sep 6 00:07:50.959134 systemd-networkd[1042]: lxc0befd9ee1af0: Gained carrier Sep 6 00:07:51.077250 kubelet[1413]: E0906 00:07:51.077199 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:51.154836 env[1212]: time="2025-09-06T00:07:51.154747600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:07:51.154836 env[1212]: time="2025-09-06T00:07:51.154801323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:07:51.154836 env[1212]: time="2025-09-06T00:07:51.154811524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:07:51.155315 env[1212]: time="2025-09-06T00:07:51.155235707Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cf8a0498afbc7c80866cf425836a4409a0b51f8cdd036daec69de9a196c3d090 pid=2792 runtime=io.containerd.runc.v2 Sep 6 00:07:51.165475 systemd[1]: Started cri-containerd-cf8a0498afbc7c80866cf425836a4409a0b51f8cdd036daec69de9a196c3d090.scope. 
Sep 6 00:07:51.186024 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 6 00:07:51.213022 env[1212]: time="2025-09-06T00:07:51.212912550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41d8e1bc-313f-491d-9a73-5b93c3020234,Namespace:default,Attempt:0,} returns sandbox id \"cf8a0498afbc7c80866cf425836a4409a0b51f8cdd036daec69de9a196c3d090\"" Sep 6 00:07:51.216829 env[1212]: time="2025-09-06T00:07:51.216298611Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Sep 6 00:07:51.469310 env[1212]: time="2025-09-06T00:07:51.469215650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:51.471557 env[1212]: time="2025-09-06T00:07:51.471531734Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:51.473260 env[1212]: time="2025-09-06T00:07:51.473231505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:51.475188 env[1212]: time="2025-09-06T00:07:51.475153968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:07:51.475964 env[1212]: time="2025-09-06T00:07:51.475935450Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\"" Sep 6 00:07:51.478589 env[1212]: time="2025-09-06T00:07:51.478546989Z" level=info msg="CreateContainer within sandbox \"cf8a0498afbc7c80866cf425836a4409a0b51f8cdd036daec69de9a196c3d090\" for container &ContainerMetadata{Name:test,Attempt:0,}" Sep 6 00:07:51.491173 env[1212]: time="2025-09-06T00:07:51.491136462Z" level=info msg="CreateContainer within sandbox \"cf8a0498afbc7c80866cf425836a4409a0b51f8cdd036daec69de9a196c3d090\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1045e59033adb0b9945bf4f8b87b6fc12f90bd72173867d745dcb99adc45a924\"" Sep 6 00:07:51.491566 env[1212]: time="2025-09-06T00:07:51.491455999Z" level=info msg="StartContainer for \"1045e59033adb0b9945bf4f8b87b6fc12f90bd72173867d745dcb99adc45a924\"" Sep 6 00:07:51.507816 systemd[1]: Started cri-containerd-1045e59033adb0b9945bf4f8b87b6fc12f90bd72173867d745dcb99adc45a924.scope. Sep 6 00:07:51.538375 env[1212]: time="2025-09-06T00:07:51.538337385Z" level=info msg="StartContainer for \"1045e59033adb0b9945bf4f8b87b6fc12f90bd72173867d745dcb99adc45a924\" returns successfully" Sep 6 00:07:52.077826 kubelet[1413]: E0906 00:07:52.077781 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:52.411899 systemd-networkd[1042]: lxc0befd9ee1af0: Gained IPv6LL Sep 6 00:07:52.451892 systemd[1]: run-containerd-runc-k8s.io-1045e59033adb0b9945bf4f8b87b6fc12f90bd72173867d745dcb99adc45a924-runc.4tMiA7.mount: Deactivated successfully. 
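The test-pod-1 entries above follow the usual CRI sequence: RunPodSandbox returns a sandbox id, PullImage resolves ghcr.io/flatcar/nginx:latest to a digest, CreateContainer places the container in that sandbox, and StartContainer runs it. Below is a hedged sketch of the same sequence driven through crictl; the two JSON config files are hypothetical placeholders, only the crictl subcommands themselves are real.

```python
# Hedged sketch of the CRI call order visible in the log:
# RunPodSandbox -> PullImage -> CreateContainer -> StartContainer, via crictl.
# pod-sandbox.json and container.json are hypothetical config files.
import subprocess

def crictl(*args: str) -> str:
    return subprocess.run(("crictl", *args), check=True,
                          capture_output=True, text=True).stdout.strip()

pod_id = crictl("runp", "pod-sandbox.json")                              # RunPodSandbox
crictl("pull", "ghcr.io/flatcar/nginx:latest")                           # PullImage
ctr_id = crictl("create", pod_id, "container.json", "pod-sandbox.json")  # CreateContainer
crictl("start", ctr_id)                                                  # StartContainer
print("started container", ctr_id)
```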
Sep 6 00:07:53.078503 kubelet[1413]: E0906 00:07:53.078460 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:54.079479 kubelet[1413]: E0906 00:07:54.079437 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:55.079738 kubelet[1413]: E0906 00:07:55.079697 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:56.081454 kubelet[1413]: E0906 00:07:56.081405 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:57.082207 kubelet[1413]: E0906 00:07:57.082140 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:58.082715 kubelet[1413]: E0906 00:07:58.082668 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:59.083003 kubelet[1413]: E0906 00:07:59.082959 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:07:59.214975 kubelet[1413]: I0906 00:07:59.214561 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.952290203 podStartE2EDuration="23.214543582s" podCreationTimestamp="2025-09-06 00:07:36 +0000 UTC" firstStartedPulling="2025-09-06 00:07:51.21497462 +0000 UTC m=+43.850916233" lastFinishedPulling="2025-09-06 00:07:51.477227999 +0000 UTC m=+44.113169612" observedRunningTime="2025-09-06 00:07:52.329956048 +0000 UTC m=+44.965897701" watchObservedRunningTime="2025-09-06 00:07:59.214543582 +0000 UTC m=+51.850485155" Sep 6 00:07:59.240714 systemd[1]: run-containerd-runc-k8s.io-bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778-runc.6H2MjE.mount: Deactivated successfully. Sep 6 00:07:59.262053 env[1212]: time="2025-09-06T00:07:59.261954572Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 6 00:07:59.269225 env[1212]: time="2025-09-06T00:07:59.269103402Z" level=info msg="StopContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" with timeout 2 (s)" Sep 6 00:07:59.269578 env[1212]: time="2025-09-06T00:07:59.269431935Z" level=info msg="Stop container \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" with signal terminated" Sep 6 00:07:59.283047 systemd-networkd[1042]: lxc_health: Link DOWN Sep 6 00:07:59.283053 systemd-networkd[1042]: lxc_health: Lost carrier Sep 6 00:07:59.329339 systemd[1]: cri-containerd-bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778.scope: Deactivated successfully. Sep 6 00:07:59.329967 systemd[1]: cri-containerd-bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778.scope: Consumed 6.249s CPU time. Sep 6 00:07:59.362611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778-rootfs.mount: Deactivated successfully. 
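The teardown above opens with "StopContainer ... with timeout 2" and "Stop container ... with signal terminated": a SIGTERM, a bounded wait, then a hard kill if the process is still alive when the grace period expires. The sketch below shows that pattern in isolation; in reality containerd's shim does this work, not a Python process sending signals.

```python
# Illustration of the graceful-stop pattern ("signal terminated", 2 s timeout):
# SIGTERM first, poll for exit until the grace period ends, then SIGKILL.
import os
import signal
import time

def stop_gracefully(pid: int, grace_seconds: float = 2.0) -> None:
    os.kill(pid, signal.SIGTERM)              # polite request
    deadline = time.monotonic() + grace_seconds
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)                   # probe: is the process still alive?
        except ProcessLookupError:
            return                            # exited within the grace period
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)              # hard stop once the timeout is spent
```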
Sep 6 00:07:59.383357 env[1212]: time="2025-09-06T00:07:59.383136590Z" level=info msg="shim disconnected" id=bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778 Sep 6 00:07:59.383357 env[1212]: time="2025-09-06T00:07:59.383178271Z" level=warning msg="cleaning up after shim disconnected" id=bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778 namespace=k8s.io Sep 6 00:07:59.383357 env[1212]: time="2025-09-06T00:07:59.383186872Z" level=info msg="cleaning up dead shim" Sep 6 00:07:59.392434 env[1212]: time="2025-09-06T00:07:59.392376619Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n" Sep 6 00:07:59.395578 env[1212]: time="2025-09-06T00:07:59.395530058Z" level=info msg="StopContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" returns successfully" Sep 6 00:07:59.396246 env[1212]: time="2025-09-06T00:07:59.396219684Z" level=info msg="StopPodSandbox for \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\"" Sep 6 00:07:59.396316 env[1212]: time="2025-09-06T00:07:59.396278086Z" level=info msg="Container to stop \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:59.396316 env[1212]: time="2025-09-06T00:07:59.396292327Z" level=info msg="Container to stop \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:59.396316 env[1212]: time="2025-09-06T00:07:59.396303967Z" level=info msg="Container to stop \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:59.398011 env[1212]: time="2025-09-06T00:07:59.396315687Z" level=info msg="Container to stop \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:59.398011 env[1212]: time="2025-09-06T00:07:59.396327328Z" level=info msg="Container to stop \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:07:59.397972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26-shm.mount: Deactivated successfully. Sep 6 00:07:59.403342 systemd[1]: cri-containerd-ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26.scope: Deactivated successfully. Sep 6 00:07:59.420124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26-rootfs.mount: Deactivated successfully. 
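While stopping the sandbox above, containerd logs "Container to stop ... must be in running or unknown state, current state CONTAINER_EXITED" for each container that has already exited, and skips it. The sketch below mirrors that state gate using the CRI state names from the log; the helper itself is illustrative, not containerd's code.

```python
# Sketch of the state gate behind the "must be in running or unknown state" messages:
# only RUNNING or UNKNOWN containers receive a stop request, EXITED ones are skipped.
from enum import Enum

class ContainerState(Enum):
    CREATED = "CONTAINER_CREATED"
    RUNNING = "CONTAINER_RUNNING"
    EXITED = "CONTAINER_EXITED"
    UNKNOWN = "CONTAINER_UNKNOWN"

def needs_stop(state: ContainerState) -> bool:
    return state in (ContainerState.RUNNING, ContainerState.UNKNOWN)

print(needs_stop(ContainerState.EXITED))   # False -> logged and skipped, as above
print(needs_stop(ContainerState.RUNNING))  # True  -> gets the SIGTERM/timeout path
```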
Sep 6 00:07:59.426462 env[1212]: time="2025-09-06T00:07:59.426372743Z" level=info msg="shim disconnected" id=ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26 Sep 6 00:07:59.426462 env[1212]: time="2025-09-06T00:07:59.426433785Z" level=warning msg="cleaning up after shim disconnected" id=ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26 namespace=k8s.io Sep 6 00:07:59.426462 env[1212]: time="2025-09-06T00:07:59.426444505Z" level=info msg="cleaning up dead shim" Sep 6 00:07:59.433341 env[1212]: time="2025-09-06T00:07:59.433303284Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:07:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2956 runtime=io.containerd.runc.v2\n" Sep 6 00:07:59.433623 env[1212]: time="2025-09-06T00:07:59.433599016Z" level=info msg="TearDown network for sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" successfully" Sep 6 00:07:59.433658 env[1212]: time="2025-09-06T00:07:59.433622537Z" level=info msg="StopPodSandbox for \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" returns successfully" Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490810 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-bpf-maps\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490858 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cni-path\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490884 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z484c\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-kube-api-access-z484c\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490902 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-net\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490916 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-kernel\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491465 kubelet[1413]: I0906 00:07:59.490929 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-etc-cni-netd\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.490944 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-hostproc\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: 
\"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.490959 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-xtables-lock\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.490974 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-cgroup\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.490986 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-run\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.491007 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98e04579-0660-4831-8229-d314ae64eae9-clustermesh-secrets\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491729 kubelet[1413]: I0906 00:07:59.491024 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98e04579-0660-4831-8229-d314ae64eae9-cilium-config-path\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491887 kubelet[1413]: I0906 00:07:59.491039 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-hubble-tls\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491887 kubelet[1413]: I0906 00:07:59.491051 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-lib-modules\") pod \"98e04579-0660-4831-8229-d314ae64eae9\" (UID: \"98e04579-0660-4831-8229-d314ae64eae9\") " Sep 6 00:07:59.491887 kubelet[1413]: I0906 00:07:59.491125 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.491887 kubelet[1413]: I0906 00:07:59.491157 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.491887 kubelet[1413]: I0906 00:07:59.491171 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cni-path" (OuterVolumeSpecName: "cni-path") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492056 kubelet[1413]: I0906 00:07:59.491490 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492056 kubelet[1413]: I0906 00:07:59.491516 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492056 kubelet[1413]: I0906 00:07:59.491530 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492056 kubelet[1413]: I0906 00:07:59.491544 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492056 kubelet[1413]: I0906 00:07:59.491543 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492185 kubelet[1413]: I0906 00:07:59.491557 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-hostproc" (OuterVolumeSpecName: "hostproc") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.492185 kubelet[1413]: I0906 00:07:59.491588 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:07:59.493335 kubelet[1413]: I0906 00:07:59.493281 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98e04579-0660-4831-8229-d314ae64eae9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:07:59.495800 kubelet[1413]: I0906 00:07:59.495767 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98e04579-0660-4831-8229-d314ae64eae9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:07:59.495866 kubelet[1413]: I0906 00:07:59.495806 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-kube-api-access-z484c" (OuterVolumeSpecName: "kube-api-access-z484c") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "kube-api-access-z484c". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:07:59.496342 kubelet[1413]: I0906 00:07:59.496292 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "98e04579-0660-4831-8229-d314ae64eae9" (UID: "98e04579-0660-4831-8229-d314ae64eae9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:07:59.591950 kubelet[1413]: I0906 00:07:59.591864 1413 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/98e04579-0660-4831-8229-d314ae64eae9-clustermesh-secrets\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.591950 kubelet[1413]: I0906 00:07:59.591930 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/98e04579-0660-4831-8229-d314ae64eae9-cilium-config-path\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592161 1413 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-hubble-tls\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592196 1413 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-lib-modules\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592207 1413 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-bpf-maps\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592216 1413 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cni-path\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592224 1413 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z484c\" (UniqueName: \"kubernetes.io/projected/98e04579-0660-4831-8229-d314ae64eae9-kube-api-access-z484c\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592232 1413 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-net\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592240 1413 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-host-proc-sys-kernel\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592342 kubelet[1413]: I0906 00:07:59.592248 1413 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-etc-cni-netd\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592569 kubelet[1413]: I0906 00:07:59.592255 1413 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-hostproc\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592569 kubelet[1413]: I0906 00:07:59.592285 1413 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-xtables-lock\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:07:59.592569 kubelet[1413]: I0906 00:07:59.592294 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-cgroup\") on node \"10.0.0.73\" DevicePath \"\"" 
Sep 6 00:07:59.592569 kubelet[1413]: I0906 00:07:59.592302 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/98e04579-0660-4831-8229-d314ae64eae9-cilium-run\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:00.083830 kubelet[1413]: E0906 00:08:00.083782 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:00.227922 systemd[1]: Removed slice kubepods-burstable-pod98e04579_0660_4831_8229_d314ae64eae9.slice. Sep 6 00:08:00.228006 systemd[1]: kubepods-burstable-pod98e04579_0660_4831_8229_d314ae64eae9.slice: Consumed 6.372s CPU time. Sep 6 00:08:00.238481 systemd[1]: var-lib-kubelet-pods-98e04579\x2d0660\x2d4831\x2d8229\x2dd314ae64eae9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz484c.mount: Deactivated successfully. Sep 6 00:08:00.238576 systemd[1]: var-lib-kubelet-pods-98e04579\x2d0660\x2d4831\x2d8229\x2dd314ae64eae9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:08:00.238631 systemd[1]: var-lib-kubelet-pods-98e04579\x2d0660\x2d4831\x2d8229\x2dd314ae64eae9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:08:00.339449 kubelet[1413]: I0906 00:08:00.339351 1413 scope.go:117] "RemoveContainer" containerID="bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778" Sep 6 00:08:00.341725 env[1212]: time="2025-09-06T00:08:00.341686542Z" level=info msg="RemoveContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\"" Sep 6 00:08:00.344421 env[1212]: time="2025-09-06T00:08:00.344371640Z" level=info msg="RemoveContainer for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" returns successfully" Sep 6 00:08:00.344968 kubelet[1413]: I0906 00:08:00.344934 1413 scope.go:117] "RemoveContainer" containerID="170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420" Sep 6 00:08:00.345933 env[1212]: time="2025-09-06T00:08:00.345900815Z" level=info msg="RemoveContainer for \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\"" Sep 6 00:08:00.349245 env[1212]: time="2025-09-06T00:08:00.349203655Z" level=info msg="RemoveContainer for \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\" returns successfully" Sep 6 00:08:00.349404 kubelet[1413]: I0906 00:08:00.349385 1413 scope.go:117] "RemoveContainer" containerID="e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45" Sep 6 00:08:00.350407 env[1212]: time="2025-09-06T00:08:00.350366377Z" level=info msg="RemoveContainer for \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\"" Sep 6 00:08:00.352644 env[1212]: time="2025-09-06T00:08:00.352605739Z" level=info msg="RemoveContainer for \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\" returns successfully" Sep 6 00:08:00.352832 kubelet[1413]: I0906 00:08:00.352809 1413 scope.go:117] "RemoveContainer" containerID="b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b" Sep 6 00:08:00.353854 env[1212]: time="2025-09-06T00:08:00.353829103Z" level=info msg="RemoveContainer for \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\"" Sep 6 00:08:00.357411 env[1212]: time="2025-09-06T00:08:00.357349231Z" level=info msg="RemoveContainer for \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\" returns successfully" Sep 6 00:08:00.357561 kubelet[1413]: I0906 00:08:00.357535 1413 scope.go:117] 
"RemoveContainer" containerID="786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531" Sep 6 00:08:00.358802 env[1212]: time="2025-09-06T00:08:00.358768082Z" level=info msg="RemoveContainer for \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\"" Sep 6 00:08:00.360958 env[1212]: time="2025-09-06T00:08:00.360924201Z" level=info msg="RemoveContainer for \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\" returns successfully" Sep 6 00:08:00.361146 kubelet[1413]: I0906 00:08:00.361121 1413 scope.go:117] "RemoveContainer" containerID="bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778" Sep 6 00:08:00.361506 env[1212]: time="2025-09-06T00:08:00.361412578Z" level=error msg="ContainerStatus for \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\": not found" Sep 6 00:08:00.361667 kubelet[1413]: E0906 00:08:00.361629 1413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\": not found" containerID="bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778" Sep 6 00:08:00.361783 kubelet[1413]: I0906 00:08:00.361678 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778"} err="failed to get container status \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\": rpc error: code = NotFound desc = an error occurred when try to find container \"bfab87402b716e9ad9688ed17e2e1060887847e5599e1ef88d89d2d737ea9778\": not found" Sep 6 00:08:00.361831 kubelet[1413]: I0906 00:08:00.361785 1413 scope.go:117] "RemoveContainer" containerID="170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420" Sep 6 00:08:00.362023 env[1212]: time="2025-09-06T00:08:00.361972599Z" level=error msg="ContainerStatus for \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\": not found" Sep 6 00:08:00.362120 kubelet[1413]: E0906 00:08:00.362101 1413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\": not found" containerID="170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420" Sep 6 00:08:00.362168 kubelet[1413]: I0906 00:08:00.362126 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420"} err="failed to get container status \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\": rpc error: code = NotFound desc = an error occurred when try to find container \"170b864af638fec41f84b53bc0efdfa190e4afb1584cd3dcfeeaadd941bbb420\": not found" Sep 6 00:08:00.362168 kubelet[1413]: I0906 00:08:00.362145 1413 scope.go:117] "RemoveContainer" containerID="e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45" Sep 6 00:08:00.362312 env[1212]: time="2025-09-06T00:08:00.362272970Z" level=error msg="ContainerStatus for 
\"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\": not found" Sep 6 00:08:00.362390 kubelet[1413]: E0906 00:08:00.362373 1413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\": not found" containerID="e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45" Sep 6 00:08:00.362431 kubelet[1413]: I0906 00:08:00.362392 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45"} err="failed to get container status \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\": rpc error: code = NotFound desc = an error occurred when try to find container \"e85b8e625025bddbcc3f7f1fa46d9e5842eb5716e83537b61f74638334a68a45\": not found" Sep 6 00:08:00.362431 kubelet[1413]: I0906 00:08:00.362407 1413 scope.go:117] "RemoveContainer" containerID="b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b" Sep 6 00:08:00.362576 env[1212]: time="2025-09-06T00:08:00.362535699Z" level=error msg="ContainerStatus for \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\": not found" Sep 6 00:08:00.362645 kubelet[1413]: E0906 00:08:00.362627 1413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\": not found" containerID="b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b" Sep 6 00:08:00.362686 kubelet[1413]: I0906 00:08:00.362645 1413 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b"} err="failed to get container status \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\": rpc error: code = NotFound desc = an error occurred when try to find container \"b05247b24bdcc09c4e4ac00e6a7d816277f5d5011ceb71eb0c9cef4a2d70330b\": not found" Sep 6 00:08:00.362686 kubelet[1413]: I0906 00:08:00.362657 1413 scope.go:117] "RemoveContainer" containerID="786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531" Sep 6 00:08:00.362902 env[1212]: time="2025-09-06T00:08:00.362866071Z" level=error msg="ContainerStatus for \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\": not found" Sep 6 00:08:00.363038 kubelet[1413]: E0906 00:08:00.363016 1413 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\": not found" containerID="786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531" Sep 6 00:08:00.363122 kubelet[1413]: I0906 00:08:00.363101 1413 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531"} err="failed to get container status \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\": rpc error: code = NotFound desc = an error occurred when try to find container \"786d652283f0a4dc450df3907cbd5c529e62c6e8cc0dc51a8d30b8a7ada4b531\": not found" Sep 6 00:08:01.084408 kubelet[1413]: E0906 00:08:01.084363 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:02.085292 kubelet[1413]: E0906 00:08:02.085237 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:02.224673 kubelet[1413]: I0906 00:08:02.224617 1413 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98e04579-0660-4831-8229-d314ae64eae9" path="/var/lib/kubelet/pods/98e04579-0660-4831-8229-d314ae64eae9/volumes" Sep 6 00:08:02.449424 kubelet[1413]: E0906 00:08:02.449378 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="apply-sysctl-overwrites" Sep 6 00:08:02.449424 kubelet[1413]: E0906 00:08:02.449408 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="mount-bpf-fs" Sep 6 00:08:02.449424 kubelet[1413]: E0906 00:08:02.449415 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="clean-cilium-state" Sep 6 00:08:02.449424 kubelet[1413]: E0906 00:08:02.449421 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="cilium-agent" Sep 6 00:08:02.449424 kubelet[1413]: E0906 00:08:02.449427 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="mount-cgroup" Sep 6 00:08:02.449682 kubelet[1413]: I0906 00:08:02.449445 1413 memory_manager.go:354] "RemoveStaleState removing state" podUID="98e04579-0660-4831-8229-d314ae64eae9" containerName="cilium-agent" Sep 6 00:08:02.453960 systemd[1]: Created slice kubepods-besteffort-pod5e3d99fc_086a_4c82_9809_8389b39c5f66.slice. Sep 6 00:08:02.462815 systemd[1]: Created slice kubepods-burstable-pod88f7c28b_bb18_43e8_9d3e_39f80715d321.slice. 
Sep 6 00:08:02.508705 kubelet[1413]: I0906 00:08:02.508636 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-etc-cni-netd\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508705 kubelet[1413]: I0906 00:08:02.508702 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-config-path\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508723 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6jz\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-kube-api-access-dk6jz\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508751 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-run\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508779 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-bpf-maps\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508810 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-hostproc\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508837 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-xtables-lock\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.508885 kubelet[1413]: I0906 00:08:02.508858 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-clustermesh-secrets\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509023 kubelet[1413]: I0906 00:08:02.508876 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-hubble-tls\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509023 kubelet[1413]: I0906 00:08:02.508900 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-kernel\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509023 kubelet[1413]: I0906 00:08:02.508918 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5e3d99fc-086a-4c82-9809-8389b39c5f66-cilium-config-path\") pod \"cilium-operator-5d85765b45-j4dhh\" (UID: \"5e3d99fc-086a-4c82-9809-8389b39c5f66\") " pod="kube-system/cilium-operator-5d85765b45-j4dhh" Sep 6 00:08:02.509184 kubelet[1413]: I0906 00:08:02.509042 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cni-path\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509226 kubelet[1413]: I0906 00:08:02.509199 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-net\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509253 kubelet[1413]: I0906 00:08:02.509232 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-lib-modules\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509281 kubelet[1413]: I0906 00:08:02.509259 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-ipsec-secrets\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.509328 kubelet[1413]: I0906 00:08:02.509313 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqrr2\" (UniqueName: \"kubernetes.io/projected/5e3d99fc-086a-4c82-9809-8389b39c5f66-kube-api-access-lqrr2\") pod \"cilium-operator-5d85765b45-j4dhh\" (UID: \"5e3d99fc-086a-4c82-9809-8389b39c5f66\") " pod="kube-system/cilium-operator-5d85765b45-j4dhh" Sep 6 00:08:02.509358 kubelet[1413]: I0906 00:08:02.509336 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-cgroup\") pod \"cilium-pf72g\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " pod="kube-system/cilium-pf72g" Sep 6 00:08:02.628234 kubelet[1413]: E0906 00:08:02.628189 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:02.628789 env[1212]: time="2025-09-06T00:08:02.628736837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf72g,Uid:88f7c28b-bb18-43e8-9d3e-39f80715d321,Namespace:kube-system,Attempt:0,}" Sep 6 00:08:02.642005 env[1212]: time="2025-09-06T00:08:02.641824118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:08:02.642005 env[1212]: time="2025-09-06T00:08:02.641979563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:08:02.642005 env[1212]: time="2025-09-06T00:08:02.641990404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:08:02.642190 env[1212]: time="2025-09-06T00:08:02.642126088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44 pid=2984 runtime=io.containerd.runc.v2 Sep 6 00:08:02.652782 systemd[1]: Started cri-containerd-7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44.scope. Sep 6 00:08:02.677067 env[1212]: time="2025-09-06T00:08:02.677025104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pf72g,Uid:88f7c28b-bb18-43e8-9d3e-39f80715d321,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\"" Sep 6 00:08:02.677751 kubelet[1413]: E0906 00:08:02.677684 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:02.679277 env[1212]: time="2025-09-06T00:08:02.679241059Z" level=info msg="CreateContainer within sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:08:02.689050 env[1212]: time="2025-09-06T00:08:02.688993667Z" level=info msg="CreateContainer within sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\"" Sep 6 00:08:02.689573 env[1212]: time="2025-09-06T00:08:02.689539166Z" level=info msg="StartContainer for \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\"" Sep 6 00:08:02.702355 systemd[1]: Started cri-containerd-7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b.scope. Sep 6 00:08:02.718433 systemd[1]: cri-containerd-7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b.scope: Deactivated successfully. 
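The repeated "Nameserver limits exceeded" events (dns.go:153) mean the host resolv.conf lists more nameservers than the kubelet will hand to a pod; it keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8 here) and notes that the rest were omitted. A sketch of that clamp follows; the fourth address is a hypothetical extra entry added only to trigger the message, and the limit of three reflects the classic resolv.conf limit the kubelet enforces.

```python
# Sketch of the nameserver clamp behind "Nameserver limits exceeded": keep the first
# three entries and warn about the rest. 9.9.9.9 is a hypothetical surplus entry.
MAX_NAMESERVERS = 3  # classic resolv.conf limit that the kubelet applies to pods

def clamp_nameservers(nameservers: list[str]) -> list[str]:
    kept = nameservers[:MAX_NAMESERVERS]
    if len(nameservers) > MAX_NAMESERVERS:
        print("Nameserver limits were exceeded, some nameservers have been omitted,"
              f" the applied nameserver line is: {' '.join(kept)}")
    return kept

clamp_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])
```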
Sep 6 00:08:02.733443 env[1212]: time="2025-09-06T00:08:02.733390083Z" level=info msg="shim disconnected" id=7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b Sep 6 00:08:02.733443 env[1212]: time="2025-09-06T00:08:02.733443445Z" level=warning msg="cleaning up after shim disconnected" id=7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b namespace=k8s.io Sep 6 00:08:02.733684 env[1212]: time="2025-09-06T00:08:02.733453285Z" level=info msg="cleaning up dead shim" Sep 6 00:08:02.740126 env[1212]: time="2025-09-06T00:08:02.740076428Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\ntime=\"2025-09-06T00:08:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Sep 6 00:08:02.740461 env[1212]: time="2025-09-06T00:08:02.740337957Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Sep 6 00:08:02.740667 env[1212]: time="2025-09-06T00:08:02.740635767Z" level=error msg="Failed to pipe stderr of container \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\"" error="reading from a closed fifo" Sep 6 00:08:02.740709 env[1212]: time="2025-09-06T00:08:02.740624486Z" level=error msg="Failed to pipe stdout of container \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\"" error="reading from a closed fifo" Sep 6 00:08:02.742560 env[1212]: time="2025-09-06T00:08:02.742499270Z" level=error msg="StartContainer for \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Sep 6 00:08:02.743343 kubelet[1413]: E0906 00:08:02.743015 1413 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b" Sep 6 00:08:02.743343 kubelet[1413]: E0906 00:08:02.743310 1413 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 6 00:08:02.743343 kubelet[1413]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Sep 6 00:08:02.743343 kubelet[1413]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Sep 6 00:08:02.743343 kubelet[1413]: rm /hostbin/cilium-mount Sep 6 00:08:02.743597 kubelet[1413]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dk6jz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-pf72g_kube-system(88f7c28b-bb18-43e8-9d3e-39f80715d321): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Sep 6 00:08:02.743597 kubelet[1413]: > logger="UnhandledError" Sep 6 00:08:02.744481 kubelet[1413]: E0906 00:08:02.744447 1413 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pf72g" podUID="88f7c28b-bb18-43e8-9d3e-39f80715d321" Sep 6 00:08:02.756899 kubelet[1413]: E0906 00:08:02.756748 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:02.757272 env[1212]: time="2025-09-06T00:08:02.757225846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4dhh,Uid:5e3d99fc-086a-4c82-9809-8389b39c5f66,Namespace:kube-system,Attempt:0,}" Sep 6 00:08:02.769050 env[1212]: time="2025-09-06T00:08:02.768888319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:08:02.769050 env[1212]: time="2025-09-06T00:08:02.768926960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:08:02.769050 env[1212]: time="2025-09-06T00:08:02.768940320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:08:02.769237 env[1212]: time="2025-09-06T00:08:02.769087765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60402eb9cbc7f57eea030fce98fc11ca2757a39935ea72dc8231c967e9a468da pid=3065 runtime=io.containerd.runc.v2 Sep 6 00:08:02.779794 systemd[1]: Started cri-containerd-60402eb9cbc7f57eea030fce98fc11ca2757a39935ea72dc8231c967e9a468da.scope. Sep 6 00:08:02.814220 env[1212]: time="2025-09-06T00:08:02.814169364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j4dhh,Uid:5e3d99fc-086a-4c82-9809-8389b39c5f66,Namespace:kube-system,Attempt:0,} returns sandbox id \"60402eb9cbc7f57eea030fce98fc11ca2757a39935ea72dc8231c967e9a468da\"" Sep 6 00:08:02.815201 kubelet[1413]: E0906 00:08:02.815178 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:02.816394 env[1212]: time="2025-09-06T00:08:02.816360518Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:08:03.086234 kubelet[1413]: E0906 00:08:03.086120 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:03.190214 kubelet[1413]: E0906 00:08:03.190176 1413 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:03.351159 env[1212]: time="2025-09-06T00:08:03.351046476Z" level=info msg="StopPodSandbox for \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\"" Sep 6 00:08:03.351347 env[1212]: time="2025-09-06T00:08:03.351322725Z" level=info msg="Container to stop \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 6 00:08:03.356355 systemd[1]: cri-containerd-7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44.scope: Deactivated successfully. 
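The mount-cgroup init container a few entries earlier never starts: runc fails at "write /proc/self/attr/keycreate: invalid argument", meaning the kernel rejects the SELinux keyring label applied during container init (the pod's SecurityContext requests SELinuxOptions with Type:spc_t). The sketch below only reproduces that single failing write to show where EINVAL comes from; the full label string is an assumption built around the spc_t type from the log, and this is not a fix, the remedy lies in the host's SELinux policy and labels.

```python
# Reproduce the failing step from the runc error above: writing an SELinux label to
# /proc/self/attr/keycreate. On a host whose policy does not accept the label, the
# kernel returns EINVAL ("invalid argument"). The context string below is an
# assumption built around the Type:spc_t seen in the pod's SELinuxOptions.
import errno

def set_keycreate_label(label: str) -> None:
    try:
        with open("/proc/self/attr/keycreate", "w") as f:
            f.write(label)
        print(f"kernel accepted keyring label {label!r}")
    except OSError as e:
        if e.errno == errno.EINVAL:
            print(f"kernel rejected keyring label {label!r} (EINVAL), the error seen in the log")
        else:
            raise

set_keycreate_label("system_u:system_r:spc_t:s0")
```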
Sep 6 00:08:03.379216 env[1212]: time="2025-09-06T00:08:03.379152710Z" level=info msg="shim disconnected" id=7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44 Sep 6 00:08:03.379216 env[1212]: time="2025-09-06T00:08:03.379214152Z" level=warning msg="cleaning up after shim disconnected" id=7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44 namespace=k8s.io Sep 6 00:08:03.379424 env[1212]: time="2025-09-06T00:08:03.379224872Z" level=info msg="cleaning up dead shim" Sep 6 00:08:03.385517 env[1212]: time="2025-09-06T00:08:03.385477915Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3117 runtime=io.containerd.runc.v2\n" Sep 6 00:08:03.385832 env[1212]: time="2025-09-06T00:08:03.385805646Z" level=info msg="TearDown network for sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" successfully" Sep 6 00:08:03.385869 env[1212]: time="2025-09-06T00:08:03.385834447Z" level=info msg="StopPodSandbox for \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" returns successfully" Sep 6 00:08:03.414850 kubelet[1413]: I0906 00:08:03.414811 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-lib-modules\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415056 kubelet[1413]: I0906 00:08:03.415039 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-xtables-lock\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415154 kubelet[1413]: I0906 00:08:03.415142 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-run\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415255 kubelet[1413]: I0906 00:08:03.415243 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-hubble-tls\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415349 kubelet[1413]: I0906 00:08:03.415335 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-kernel\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415439 kubelet[1413]: I0906 00:08:03.415426 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-net\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415523 kubelet[1413]: I0906 00:08:03.415511 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-etc-cni-netd\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: 
\"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415631 kubelet[1413]: I0906 00:08:03.415617 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk6jz\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-kube-api-access-dk6jz\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415733 kubelet[1413]: I0906 00:08:03.415721 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-clustermesh-secrets\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415833 kubelet[1413]: I0906 00:08:03.415821 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cni-path\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.415929 kubelet[1413]: I0906 00:08:03.414908 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.415980 kubelet[1413]: I0906 00:08:03.415077 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.415980 kubelet[1413]: I0906 00:08:03.415204 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.415980 kubelet[1413]: I0906 00:08:03.415386 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.415980 kubelet[1413]: I0906 00:08:03.415479 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.415980 kubelet[1413]: I0906 00:08:03.415567 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.416105 kubelet[1413]: I0906 00:08:03.415974 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cni-path" (OuterVolumeSpecName: "cni-path") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.416158 kubelet[1413]: I0906 00:08:03.416144 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-ipsec-secrets\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.416249 kubelet[1413]: I0906 00:08:03.416236 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-config-path\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.416332 kubelet[1413]: I0906 00:08:03.416320 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-bpf-maps\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.416414 kubelet[1413]: I0906 00:08:03.416400 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-hostproc\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.416564 kubelet[1413]: I0906 00:08:03.416547 1413 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-cgroup\") pod \"88f7c28b-bb18-43e8-9d3e-39f80715d321\" (UID: \"88f7c28b-bb18-43e8-9d3e-39f80715d321\") " Sep 6 00:08:03.416686 kubelet[1413]: I0906 00:08:03.416670 1413 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-xtables-lock\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.416780 kubelet[1413]: I0906 00:08:03.416768 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-run\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.416865 kubelet[1413]: I0906 00:08:03.416852 1413 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-kernel\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.416949 kubelet[1413]: I0906 00:08:03.416939 1413 reconciler_common.go:293] 
"Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-lib-modules\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.417022 kubelet[1413]: I0906 00:08:03.417013 1413 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-etc-cni-netd\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.417097 kubelet[1413]: I0906 00:08:03.417076 1413 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-host-proc-sys-net\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.417186 kubelet[1413]: I0906 00:08:03.417175 1413 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cni-path\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.417287 kubelet[1413]: I0906 00:08:03.417273 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.417383 kubelet[1413]: I0906 00:08:03.417370 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.417473 kubelet[1413]: I0906 00:08:03.417460 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-hostproc" (OuterVolumeSpecName: "hostproc") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 6 00:08:03.418172 kubelet[1413]: I0906 00:08:03.418138 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 6 00:08:03.418364 kubelet[1413]: I0906 00:08:03.418334 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-kube-api-access-dk6jz" (OuterVolumeSpecName: "kube-api-access-dk6jz") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "kube-api-access-dk6jz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:08:03.418487 kubelet[1413]: I0906 00:08:03.418458 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:08:03.419992 kubelet[1413]: I0906 00:08:03.419964 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 6 00:08:03.420689 kubelet[1413]: I0906 00:08:03.420661 1413 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "88f7c28b-bb18-43e8-9d3e-39f80715d321" (UID: "88f7c28b-bb18-43e8-9d3e-39f80715d321"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 6 00:08:03.517750 kubelet[1413]: I0906 00:08:03.517705 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-ipsec-secrets\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517750 kubelet[1413]: I0906 00:08:03.517740 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-config-path\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517750 kubelet[1413]: I0906 00:08:03.517750 1413 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-bpf-maps\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517934 kubelet[1413]: I0906 00:08:03.517770 1413 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-hostproc\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517934 kubelet[1413]: I0906 00:08:03.517778 1413 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/88f7c28b-bb18-43e8-9d3e-39f80715d321-clustermesh-secrets\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517934 kubelet[1413]: I0906 00:08:03.517787 1413 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/88f7c28b-bb18-43e8-9d3e-39f80715d321-cilium-cgroup\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517934 kubelet[1413]: I0906 00:08:03.517795 1413 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-hubble-tls\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.517934 kubelet[1413]: I0906 00:08:03.517802 1413 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk6jz\" (UniqueName: \"kubernetes.io/projected/88f7c28b-bb18-43e8-9d3e-39f80715d321-kube-api-access-dk6jz\") on node \"10.0.0.73\" DevicePath \"\"" Sep 6 00:08:03.615658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44-shm.mount: Deactivated successfully. Sep 6 00:08:03.615751 systemd[1]: var-lib-kubelet-pods-88f7c28b\x2dbb18\x2d43e8\x2d9d3e\x2d39f80715d321-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddk6jz.mount: Deactivated successfully. 
Sep 6 00:08:03.615816 systemd[1]: var-lib-kubelet-pods-88f7c28b\x2dbb18\x2d43e8\x2d9d3e\x2d39f80715d321-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 6 00:08:03.615868 systemd[1]: var-lib-kubelet-pods-88f7c28b\x2dbb18\x2d43e8\x2d9d3e\x2d39f80715d321-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:08:03.615918 systemd[1]: var-lib-kubelet-pods-88f7c28b\x2dbb18\x2d43e8\x2d9d3e\x2d39f80715d321-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 6 00:08:04.086760 kubelet[1413]: E0906 00:08:04.086710 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:04.168425 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount721911158.mount: Deactivated successfully. Sep 6 00:08:04.227965 systemd[1]: Removed slice kubepods-burstable-pod88f7c28b_bb18_43e8_9d3e_39f80715d321.slice. Sep 6 00:08:04.354252 kubelet[1413]: I0906 00:08:04.354154 1413 scope.go:117] "RemoveContainer" containerID="7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b" Sep 6 00:08:04.356289 env[1212]: time="2025-09-06T00:08:04.356249138Z" level=info msg="RemoveContainer for \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\"" Sep 6 00:08:04.359911 env[1212]: time="2025-09-06T00:08:04.359877532Z" level=info msg="RemoveContainer for \"7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b\" returns successfully" Sep 6 00:08:04.387699 kubelet[1413]: E0906 00:08:04.387663 1413 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="88f7c28b-bb18-43e8-9d3e-39f80715d321" containerName="mount-cgroup" Sep 6 00:08:04.387845 kubelet[1413]: I0906 00:08:04.387710 1413 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7c28b-bb18-43e8-9d3e-39f80715d321" containerName="mount-cgroup" Sep 6 00:08:04.392140 systemd[1]: Created slice kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice. 
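The "Removed slice kubepods-burstable-pod88f7c28b_..." and "Created slice kubepods-burstable-pod007f1794_..." entries above show the slice naming used by the kubelet's systemd cgroup driver as it appears in this log: the pod's QoS class plus its UID with dashes mapped to underscores (a literal '-' would otherwise be read as slice nesting). A small sketch of that convention, inferred from the log rather than taken from kubelet source:

```python
def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
    """Rebuild the kubepods slice name seen in the journal above.
    Assumes the systemd cgroup driver; only the Burstable form is
    exercised in this log, other QoS classes are an extrapolation."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("007f1794-4bde-4090-8175-e2d1e6a8ab1c"))
# kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice
```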
Sep 6 00:08:04.422092 kubelet[1413]: I0906 00:08:04.422056 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zl6w\" (UniqueName: \"kubernetes.io/projected/007f1794-4bde-4090-8175-e2d1e6a8ab1c-kube-api-access-5zl6w\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422092 kubelet[1413]: I0906 00:08:04.422096 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-lib-modules\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422268 kubelet[1413]: I0906 00:08:04.422121 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-xtables-lock\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422268 kubelet[1413]: I0906 00:08:04.422140 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/007f1794-4bde-4090-8175-e2d1e6a8ab1c-cilium-ipsec-secrets\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422268 kubelet[1413]: I0906 00:08:04.422156 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-host-proc-sys-net\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422268 kubelet[1413]: I0906 00:08:04.422175 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-host-proc-sys-kernel\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422268 kubelet[1413]: I0906 00:08:04.422190 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/007f1794-4bde-4090-8175-e2d1e6a8ab1c-cilium-config-path\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422216 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/007f1794-4bde-4090-8175-e2d1e6a8ab1c-hubble-tls\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422241 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-bpf-maps\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422256 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/007f1794-4bde-4090-8175-e2d1e6a8ab1c-clustermesh-secrets\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422273 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-cilium-cgroup\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422289 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-cni-path\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422393 kubelet[1413]: I0906 00:08:04.422304 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-etc-cni-netd\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422532 kubelet[1413]: I0906 00:08:04.422319 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-cilium-run\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.422532 kubelet[1413]: I0906 00:08:04.422334 1413 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/007f1794-4bde-4090-8175-e2d1e6a8ab1c-hostproc\") pod \"cilium-rnwdw\" (UID: \"007f1794-4bde-4090-8175-e2d1e6a8ab1c\") " pod="kube-system/cilium-rnwdw" Sep 6 00:08:04.704880 kubelet[1413]: E0906 00:08:04.704566 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:04.705391 env[1212]: time="2025-09-06T00:08:04.705350650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnwdw,Uid:007f1794-4bde-4090-8175-e2d1e6a8ab1c,Namespace:kube-system,Attempt:0,}" Sep 6 00:08:04.720278 env[1212]: time="2025-09-06T00:08:04.720211796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:08:04.720278 env[1212]: time="2025-09-06T00:08:04.720252478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:08:04.720278 env[1212]: time="2025-09-06T00:08:04.720262878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:08:04.720429 env[1212]: time="2025-09-06T00:08:04.720377521Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6 pid=3146 runtime=io.containerd.runc.v2 Sep 6 00:08:04.735914 systemd[1]: Started cri-containerd-705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6.scope. 
Sep 6 00:08:04.755785 env[1212]: time="2025-09-06T00:08:04.755591306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rnwdw,Uid:007f1794-4bde-4090-8175-e2d1e6a8ab1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\"" Sep 6 00:08:04.756326 kubelet[1413]: E0906 00:08:04.756293 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:04.758237 env[1212]: time="2025-09-06T00:08:04.758207188Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:08:04.767427 env[1212]: time="2025-09-06T00:08:04.767387516Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822\"" Sep 6 00:08:04.767876 env[1212]: time="2025-09-06T00:08:04.767853211Z" level=info msg="StartContainer for \"3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822\"" Sep 6 00:08:04.780441 systemd[1]: Started cri-containerd-3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822.scope. Sep 6 00:08:04.812854 env[1212]: time="2025-09-06T00:08:04.811825990Z" level=info msg="StartContainer for \"3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822\" returns successfully" Sep 6 00:08:04.815209 systemd[1]: cri-containerd-3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822.scope: Deactivated successfully. Sep 6 00:08:04.873995 env[1212]: time="2025-09-06T00:08:04.873951059Z" level=info msg="shim disconnected" id=3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822 Sep 6 00:08:04.873995 env[1212]: time="2025-09-06T00:08:04.873991060Z" level=warning msg="cleaning up after shim disconnected" id=3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822 namespace=k8s.io Sep 6 00:08:04.873995 env[1212]: time="2025-09-06T00:08:04.874000621Z" level=info msg="cleaning up dead shim" Sep 6 00:08:04.880272 env[1212]: time="2025-09-06T00:08:04.880236376Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3233 runtime=io.containerd.runc.v2\n" Sep 6 00:08:05.051662 env[1212]: time="2025-09-06T00:08:05.051129444Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:08:05.052467 env[1212]: time="2025-09-06T00:08:05.052411163Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:08:05.054158 env[1212]: time="2025-09-06T00:08:05.054126735Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 6 00:08:05.055286 env[1212]: time="2025-09-06T00:08:05.055245089Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:08:05.057183 env[1212]: time="2025-09-06T00:08:05.057151147Z" level=info msg="CreateContainer within sandbox \"60402eb9cbc7f57eea030fce98fc11ca2757a39935ea72dc8231c967e9a468da\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:08:05.067528 env[1212]: time="2025-09-06T00:08:05.067478860Z" level=info msg="CreateContainer within sandbox \"60402eb9cbc7f57eea030fce98fc11ca2757a39935ea72dc8231c967e9a468da\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"722cec39c012b5f4cfa329c8137ce1ac7432698f8b12fc1ca76765c2073bf51e\"" Sep 6 00:08:05.068154 env[1212]: time="2025-09-06T00:08:05.068125280Z" level=info msg="StartContainer for \"722cec39c012b5f4cfa329c8137ce1ac7432698f8b12fc1ca76765c2073bf51e\"" Sep 6 00:08:05.085057 systemd[1]: Started cri-containerd-722cec39c012b5f4cfa329c8137ce1ac7432698f8b12fc1ca76765c2073bf51e.scope. Sep 6 00:08:05.086890 kubelet[1413]: E0906 00:08:05.086863 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:05.121930 env[1212]: time="2025-09-06T00:08:05.121845509Z" level=info msg="StartContainer for \"722cec39c012b5f4cfa329c8137ce1ac7432698f8b12fc1ca76765c2073bf51e\" returns successfully" Sep 6 00:08:05.357400 kubelet[1413]: E0906 00:08:05.357293 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:05.359421 kubelet[1413]: E0906 00:08:05.359171 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:05.360403 env[1212]: time="2025-09-06T00:08:05.360370101Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:08:05.370883 env[1212]: time="2025-09-06T00:08:05.370838018Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1\"" Sep 6 00:08:05.371361 env[1212]: time="2025-09-06T00:08:05.371297712Z" level=info msg="StartContainer for \"6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1\"" Sep 6 00:08:05.387168 systemd[1]: Started cri-containerd-6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1.scope. Sep 6 00:08:05.418968 env[1212]: time="2025-09-06T00:08:05.418923516Z" level=info msg="StartContainer for \"6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1\" returns successfully" Sep 6 00:08:05.426604 systemd[1]: cri-containerd-6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1.scope: Deactivated successfully. 
Sep 6 00:08:05.450696 env[1212]: time="2025-09-06T00:08:05.450650038Z" level=info msg="shim disconnected" id=6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1 Sep 6 00:08:05.450696 env[1212]: time="2025-09-06T00:08:05.450694479Z" level=warning msg="cleaning up after shim disconnected" id=6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1 namespace=k8s.io Sep 6 00:08:05.450696 env[1212]: time="2025-09-06T00:08:05.450704080Z" level=info msg="cleaning up dead shim" Sep 6 00:08:05.457117 env[1212]: time="2025-09-06T00:08:05.457078753Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3332 runtime=io.containerd.runc.v2\n" Sep 6 00:08:05.838141 kubelet[1413]: W0906 00:08:05.838088 1413 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f7c28b_bb18_43e8_9d3e_39f80715d321.slice/cri-containerd-7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b.scope WatchSource:0}: container "7d6371ed48da9d3ebcf4daf74623472a9037f313d59af01fa320bf24c2d2905b" in namespace "k8s.io": not found Sep 6 00:08:06.087076 kubelet[1413]: E0906 00:08:06.087027 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:06.224737 kubelet[1413]: I0906 00:08:06.224682 1413 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f7c28b-bb18-43e8-9d3e-39f80715d321" path="/var/lib/kubelet/pods/88f7c28b-bb18-43e8-9d3e-39f80715d321/volumes" Sep 6 00:08:06.362204 kubelet[1413]: E0906 00:08:06.362168 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:06.362762 kubelet[1413]: E0906 00:08:06.362722 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:06.364260 env[1212]: time="2025-09-06T00:08:06.364223321Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:08:06.377411 kubelet[1413]: I0906 00:08:06.377355 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j4dhh" podStartSLOduration=2.137466063 podStartE2EDuration="4.377339586s" podCreationTimestamp="2025-09-06 00:08:02 +0000 UTC" firstStartedPulling="2025-09-06 00:08:02.816043547 +0000 UTC m=+55.451985160" lastFinishedPulling="2025-09-06 00:08:05.05591711 +0000 UTC m=+57.691858683" observedRunningTime="2025-09-06 00:08:05.382383248 +0000 UTC m=+58.018324861" watchObservedRunningTime="2025-09-06 00:08:06.377339586 +0000 UTC m=+59.013281199" Sep 6 00:08:06.379516 env[1212]: time="2025-09-06T00:08:06.379455968Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f\"" Sep 6 00:08:06.379986 env[1212]: time="2025-09-06T00:08:06.379960223Z" level=info msg="StartContainer for \"2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f\"" Sep 6 00:08:06.396991 systemd[1]: Started 
cri-containerd-2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f.scope. Sep 6 00:08:06.426279 systemd[1]: cri-containerd-2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f.scope: Deactivated successfully. Sep 6 00:08:06.430340 env[1212]: time="2025-09-06T00:08:06.430297539Z" level=info msg="StartContainer for \"2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f\" returns successfully" Sep 6 00:08:06.448971 env[1212]: time="2025-09-06T00:08:06.448925446Z" level=info msg="shim disconnected" id=2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f Sep 6 00:08:06.449143 env[1212]: time="2025-09-06T00:08:06.448973807Z" level=warning msg="cleaning up after shim disconnected" id=2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f namespace=k8s.io Sep 6 00:08:06.449143 env[1212]: time="2025-09-06T00:08:06.448983928Z" level=info msg="cleaning up dead shim" Sep 6 00:08:06.455784 env[1212]: time="2025-09-06T00:08:06.455734886Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3388 runtime=io.containerd.runc.v2\n" Sep 6 00:08:06.614748 systemd[1]: run-containerd-runc-k8s.io-2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f-runc.8E5F3G.mount: Deactivated successfully. Sep 6 00:08:06.614872 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f-rootfs.mount: Deactivated successfully. Sep 6 00:08:07.087569 kubelet[1413]: E0906 00:08:07.087513 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:07.366498 kubelet[1413]: E0906 00:08:07.366297 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:07.368251 env[1212]: time="2025-09-06T00:08:07.368212557Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:08:07.379680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885046348.mount: Deactivated successfully. Sep 6 00:08:07.383225 env[1212]: time="2025-09-06T00:08:07.383183103Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5\"" Sep 6 00:08:07.383659 env[1212]: time="2025-09-06T00:08:07.383623755Z" level=info msg="StartContainer for \"a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5\"" Sep 6 00:08:07.397192 systemd[1]: Started cri-containerd-a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5.scope. Sep 6 00:08:07.424089 systemd[1]: cri-containerd-a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5.scope: Deactivated successfully. 
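The "Observed pod startup duration" entry for cilium-operator-5d85765b45-j4dhh above reports podStartE2EDuration="4.377339586s" and podStartSLOduration=2.137466063. Those figures are consistent with the end-to-end number being observedRunningTime minus podCreationTimestamp (logged at whole-second resolution) and the SLO number additionally excluding the image-pull window measured on the monotonic m=+ offsets; that formula is inferred from the values here, not quoted from kubelet source. A quick check of the arithmetic:

```python
# Figures copied from the "Observed pod startup duration" entry above.
creation         = 2.000000000    # podCreationTimestamp 00:08:02 (whole seconds)
observed_running = 6.377339586    # observedRunningTime  00:08:06.377339586
pull_started_m   = 55.451985160   # firstStartedPulling, monotonic m=+ offset
pull_finished_m  = 57.691858683   # lastFinishedPulling, monotonic m=+ offset

e2e = observed_running - creation                 # podStartE2EDuration
slo = e2e - (pull_finished_m - pull_started_m)    # startup time excluding the pull
print(f"{e2e:.9f}s end-to-end, {slo:.9f}s excluding image pull")
# 4.377339586s end-to-end, 2.137466063s excluding image pull
```

The later entry for cilium-rnwdw shows the degenerate case: both pulling timestamps are the zero time (no pull happened), so its SLO and E2E durations are both 5.395419556s.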
Sep 6 00:08:07.425024 env[1212]: time="2025-09-06T00:08:07.424987370Z" level=info msg="StartContainer for \"a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5\" returns successfully" Sep 6 00:08:07.443363 env[1212]: time="2025-09-06T00:08:07.443317731Z" level=info msg="shim disconnected" id=a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5 Sep 6 00:08:07.443363 env[1212]: time="2025-09-06T00:08:07.443361692Z" level=warning msg="cleaning up after shim disconnected" id=a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5 namespace=k8s.io Sep 6 00:08:07.443571 env[1212]: time="2025-09-06T00:08:07.443371613Z" level=info msg="cleaning up dead shim" Sep 6 00:08:07.450811 env[1212]: time="2025-09-06T00:08:07.450764743Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:08:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3444 runtime=io.containerd.runc.v2\n" Sep 6 00:08:07.614794 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5-rootfs.mount: Deactivated successfully. Sep 6 00:08:08.046711 kubelet[1413]: E0906 00:08:08.045956 1413 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:08.062133 env[1212]: time="2025-09-06T00:08:08.062082223Z" level=info msg="StopPodSandbox for \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\"" Sep 6 00:08:08.062259 env[1212]: time="2025-09-06T00:08:08.062182426Z" level=info msg="TearDown network for sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" successfully" Sep 6 00:08:08.062259 env[1212]: time="2025-09-06T00:08:08.062219827Z" level=info msg="StopPodSandbox for \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" returns successfully" Sep 6 00:08:08.062598 env[1212]: time="2025-09-06T00:08:08.062559116Z" level=info msg="RemovePodSandbox for \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\"" Sep 6 00:08:08.062666 env[1212]: time="2025-09-06T00:08:08.062600157Z" level=info msg="Forcibly stopping sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\"" Sep 6 00:08:08.062698 env[1212]: time="2025-09-06T00:08:08.062679879Z" level=info msg="TearDown network for sandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" successfully" Sep 6 00:08:08.066770 env[1212]: time="2025-09-06T00:08:08.066705390Z" level=info msg="RemovePodSandbox \"7f9ba357cab466e6500773aaea2702dc5c58e21df97b8ff1ef51ad9137472b44\" returns successfully" Sep 6 00:08:08.068375 env[1212]: time="2025-09-06T00:08:08.068348236Z" level=info msg="StopPodSandbox for \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\"" Sep 6 00:08:08.068461 env[1212]: time="2025-09-06T00:08:08.068428638Z" level=info msg="TearDown network for sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" successfully" Sep 6 00:08:08.068502 env[1212]: time="2025-09-06T00:08:08.068461279Z" level=info msg="StopPodSandbox for \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" returns successfully" Sep 6 00:08:08.068750 env[1212]: time="2025-09-06T00:08:08.068727446Z" level=info msg="RemovePodSandbox for \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\"" Sep 6 00:08:08.068811 env[1212]: time="2025-09-06T00:08:08.068764047Z" level=info msg="Forcibly stopping sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\"" 
Sep 6 00:08:08.068839 env[1212]: time="2025-09-06T00:08:08.068829009Z" level=info msg="TearDown network for sandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" successfully" Sep 6 00:08:08.071364 env[1212]: time="2025-09-06T00:08:08.071329638Z" level=info msg="RemovePodSandbox \"ce3efd97f58308b528c851df6cea5b8a19ac1b8b3dedaa08f8d929797080fc26\" returns successfully" Sep 6 00:08:08.088656 kubelet[1413]: E0906 00:08:08.088611 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:08.190882 kubelet[1413]: E0906 00:08:08.190849 1413 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 6 00:08:08.370736 kubelet[1413]: E0906 00:08:08.370619 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:08.375032 env[1212]: time="2025-09-06T00:08:08.374981682Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:08:08.389407 env[1212]: time="2025-09-06T00:08:08.389337278Z" level=info msg="CreateContainer within sandbox \"705a6d2a01fdd94feca1b9a4e6b2ef756de189a712d1339abe76f668fca008c6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203\"" Sep 6 00:08:08.390567 env[1212]: time="2025-09-06T00:08:08.390534071Z" level=info msg="StartContainer for \"c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203\"" Sep 6 00:08:08.410631 systemd[1]: Started cri-containerd-c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203.scope. 
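The "Container runtime network not ready ... cni plugin not initialized" condition above persists while containerd has no CNI network config to load; the cilium-agent container started here is what eventually installs one (the pod's etc-cni-netd host-path volume suggests the conventional /etc/cni/net.d location, though the actual host path is not shown in this log). A small, assumption-laden check of that readiness signal; the directory and extensions are the usual CNI defaults, not values read from this node's containerd config:

```python
from pathlib import Path

def cni_config_present(net_d: str = "/etc/cni/net.d") -> bool:
    """Return True once any CNI network config exists. The directory is
    the conventional default; containerd can be pointed elsewhere."""
    p = Path(net_d)
    return p.is_dir() and any(
        child.suffix in (".conf", ".conflist", ".json") for child in p.iterdir())

# While this stays False, kubelet keeps logging the
# "NetworkReady=false ... cni plugin not initialized" condition seen above.
```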
Sep 6 00:08:08.451437 env[1212]: time="2025-09-06T00:08:08.451368947Z" level=info msg="StartContainer for \"c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203\" returns successfully" Sep 6 00:08:08.681788 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Sep 6 00:08:08.948697 kubelet[1413]: W0906 00:08:08.948483 1413 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice/cri-containerd-3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822.scope WatchSource:0}: task 3d27c7f8d983e3e4e5132f61b9683346f9a1c0d2c4b65191bbb7f0a8ce5b6822 not found: not found Sep 6 00:08:09.090414 kubelet[1413]: E0906 00:08:09.089194 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:09.376548 kubelet[1413]: E0906 00:08:09.375002 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:09.394815 kubelet[1413]: I0906 00:08:09.394696 1413 setters.go:600] "Node became not ready" node="10.0.0.73" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-06T00:08:09Z","lastTransitionTime":"2025-09-06T00:08:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 6 00:08:09.395469 kubelet[1413]: I0906 00:08:09.395432 1413 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rnwdw" podStartSLOduration=5.395419556 podStartE2EDuration="5.395419556s" podCreationTimestamp="2025-09-06 00:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:08:09.394293486 +0000 UTC m=+62.030235099" watchObservedRunningTime="2025-09-06 00:08:09.395419556 +0000 UTC m=+62.031361169" Sep 6 00:08:10.091514 kubelet[1413]: E0906 00:08:10.091463 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:10.706516 kubelet[1413]: E0906 00:08:10.706433 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:11.091809 kubelet[1413]: E0906 00:08:11.091674 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:11.607184 systemd-networkd[1042]: lxc_health: Link UP Sep 6 00:08:11.616585 systemd-networkd[1042]: lxc_health: Gained carrier Sep 6 00:08:11.616851 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 6 00:08:12.059431 kubelet[1413]: W0906 00:08:12.059379 1413 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice/cri-containerd-6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1.scope WatchSource:0}: task 6dc3ba02b4cb6eeba8e526cc1b6ebe87b84fd0d73e2584ea0e07df2d6b5e52c1 not found: not found Sep 6 00:08:12.092063 kubelet[1413]: E0906 00:08:12.092024 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 6 00:08:12.699923 systemd-networkd[1042]: lxc_health: Gained IPv6LL Sep 6 00:08:12.706455 kubelet[1413]: E0906 00:08:12.706426 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:13.044028 systemd[1]: run-containerd-runc-k8s.io-c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203-runc.a38cjW.mount: Deactivated successfully. Sep 6 00:08:13.092535 kubelet[1413]: E0906 00:08:13.092502 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:13.383237 kubelet[1413]: E0906 00:08:13.382905 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:14.093239 kubelet[1413]: E0906 00:08:14.093191 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:14.385149 kubelet[1413]: E0906 00:08:14.385045 1413 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:08:15.094085 kubelet[1413]: E0906 00:08:15.094041 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:15.168113 kubelet[1413]: W0906 00:08:15.168068 1413 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice/cri-containerd-2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f.scope WatchSource:0}: task 2231c12eb4b11535296a15dca9d363e5b5db6dac5639e763ba84a6370563833f not found: not found Sep 6 00:08:15.189281 systemd[1]: run-containerd-runc-k8s.io-c07cc216c42340c1a510ec4bf83012462d663079bb69a55043ede5747dfa3203-runc.fZo6iS.mount: Deactivated successfully. Sep 6 00:08:16.094884 kubelet[1413]: E0906 00:08:16.094824 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:17.095277 kubelet[1413]: E0906 00:08:17.095230 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:18.095421 kubelet[1413]: E0906 00:08:18.095361 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 6 00:08:18.275602 kubelet[1413]: W0906 00:08:18.275552 1413 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod007f1794_4bde_4090_8175_e2d1e6a8ab1c.slice/cri-containerd-a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5.scope WatchSource:0}: task a544b40a040f75e28bb93323dca9c28733b8557de84451dd926b94f7b6a659a5 not found: not found Sep 6 00:08:19.095629 kubelet[1413]: E0906 00:08:19.095578 1413 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"