Jul 12 00:19:21.723708 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 00:19:21.723728 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Jul 11 23:15:18 -00 2025 Jul 12 00:19:21.723738 kernel: efi: EFI v2.70 by EDK II Jul 12 00:19:21.723743 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 12 00:19:21.723748 kernel: random: crng init done Jul 12 00:19:21.723754 kernel: ACPI: Early table checksum verification disabled Jul 12 00:19:21.723760 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 12 00:19:21.723768 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 12 00:19:21.723774 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723779 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723785 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723790 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723795 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723801 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723810 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723816 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723822 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 00:19:21.723827 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 12 00:19:21.723833 kernel: NUMA: Failed to initialise from firmware Jul 12 00:19:21.723839 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:19:21.723845 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff] Jul 12 00:19:21.723850 kernel: Zone ranges: Jul 12 00:19:21.723856 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:19:21.723862 kernel: DMA32 empty Jul 12 00:19:21.723888 kernel: Normal empty Jul 12 00:19:21.723894 kernel: Movable zone start for each node Jul 12 00:19:21.723899 kernel: Early memory node ranges Jul 12 00:19:21.723918 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 12 00:19:21.723925 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 12 00:19:21.723930 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 12 00:19:21.723936 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 12 00:19:21.723941 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 12 00:19:21.723947 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 12 00:19:21.723952 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 12 00:19:21.723958 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 00:19:21.723966 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 12 00:19:21.723972 kernel: psci: probing for conduit method from ACPI. Jul 12 00:19:21.723978 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 12 00:19:21.723983 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 00:19:21.723989 kernel: psci: Trusted OS migration not required Jul 12 00:19:21.723997 kernel: psci: SMC Calling Convention v1.1 Jul 12 00:19:21.724004 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 12 00:19:21.724011 kernel: ACPI: SRAT not present Jul 12 00:19:21.724017 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 12 00:19:21.724023 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 12 00:19:21.724029 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 12 00:19:21.724035 kernel: Detected PIPT I-cache on CPU0 Jul 12 00:19:21.724041 kernel: CPU features: detected: GIC system register CPU interface Jul 12 00:19:21.724047 kernel: CPU features: detected: Hardware dirty bit management Jul 12 00:19:21.724053 kernel: CPU features: detected: Spectre-v4 Jul 12 00:19:21.724060 kernel: CPU features: detected: Spectre-BHB Jul 12 00:19:21.724067 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 00:19:21.724073 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 00:19:21.724079 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 00:19:21.724085 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 00:19:21.724091 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 12 00:19:21.724097 kernel: Policy zone: DMA Jul 12 00:19:21.724105 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:19:21.724111 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 00:19:21.724117 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 00:19:21.724124 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 00:19:21.724130 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 00:19:21.724137 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114956K reserved, 0K cma-reserved) Jul 12 00:19:21.724144 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 12 00:19:21.724150 kernel: trace event string verifier disabled Jul 12 00:19:21.724156 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 00:19:21.724162 kernel: rcu: RCU event tracing is enabled. Jul 12 00:19:21.724168 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 12 00:19:21.724174 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 00:19:21.724181 kernel: Tracing variant of Tasks RCU enabled. Jul 12 00:19:21.724187 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 12 00:19:21.724193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 12 00:19:21.724199 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 00:19:21.724206 kernel: GICv3: 256 SPIs implemented Jul 12 00:19:21.724212 kernel: GICv3: 0 Extended SPIs implemented Jul 12 00:19:21.724218 kernel: GICv3: Distributor has no Range Selector support Jul 12 00:19:21.724224 kernel: Root IRQ handler: gic_handle_irq Jul 12 00:19:21.724230 kernel: GICv3: 16 PPIs implemented Jul 12 00:19:21.724236 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 12 00:19:21.724242 kernel: ACPI: SRAT not present Jul 12 00:19:21.724248 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 12 00:19:21.724254 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 12 00:19:21.724260 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 12 00:19:21.724266 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 12 00:19:21.724272 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 12 00:19:21.724280 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:19:21.724285 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 00:19:21.724292 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 00:19:21.724298 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 00:19:21.724308 kernel: arm-pv: using stolen time PV Jul 12 00:19:21.724314 kernel: Console: colour dummy device 80x25 Jul 12 00:19:21.724320 kernel: ACPI: Core revision 20210730 Jul 12 00:19:21.724327 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 00:19:21.724334 kernel: pid_max: default: 32768 minimum: 301 Jul 12 00:19:21.724340 kernel: LSM: Security Framework initializing Jul 12 00:19:21.724348 kernel: SELinux: Initializing. Jul 12 00:19:21.724354 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:19:21.724360 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 00:19:21.724366 kernel: rcu: Hierarchical SRCU implementation. Jul 12 00:19:21.724372 kernel: Platform MSI: ITS@0x8080000 domain created Jul 12 00:19:21.724379 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 12 00:19:21.724385 kernel: Remapping and enabling EFI services. Jul 12 00:19:21.724391 kernel: smp: Bringing up secondary CPUs ... 
Jul 12 00:19:21.724397 kernel: Detected PIPT I-cache on CPU1 Jul 12 00:19:21.724405 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 12 00:19:21.724411 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 12 00:19:21.724418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:19:21.724424 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 00:19:21.724430 kernel: Detected PIPT I-cache on CPU2 Jul 12 00:19:21.724436 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 12 00:19:21.724443 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 12 00:19:21.724450 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:19:21.724456 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 12 00:19:21.724462 kernel: Detected PIPT I-cache on CPU3 Jul 12 00:19:21.724470 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 12 00:19:21.724476 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 12 00:19:21.724482 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 00:19:21.724489 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 12 00:19:21.724499 kernel: smp: Brought up 1 node, 4 CPUs Jul 12 00:19:21.724507 kernel: SMP: Total of 4 processors activated. Jul 12 00:19:21.724513 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 00:19:21.724520 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 00:19:21.724526 kernel: CPU features: detected: Common not Private translations Jul 12 00:19:21.724532 kernel: CPU features: detected: CRC32 instructions Jul 12 00:19:21.724539 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 00:19:21.724545 kernel: CPU features: detected: LSE atomic instructions Jul 12 00:19:21.724553 kernel: CPU features: detected: Privileged Access Never Jul 12 00:19:21.724560 kernel: CPU features: detected: RAS Extension Support Jul 12 00:19:21.724567 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 12 00:19:21.724573 kernel: CPU: All CPU(s) started at EL1 Jul 12 00:19:21.724580 kernel: alternatives: patching kernel code Jul 12 00:19:21.724588 kernel: devtmpfs: initialized Jul 12 00:19:21.724594 kernel: KASLR enabled Jul 12 00:19:21.724601 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 00:19:21.724607 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 12 00:19:21.724614 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 00:19:21.724620 kernel: SMBIOS 3.0.0 present. 
Jul 12 00:19:21.724627 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 12 00:19:21.724633 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 00:19:21.724640 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 00:19:21.724648 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 00:19:21.724655 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 00:19:21.724661 kernel: audit: initializing netlink subsys (disabled) Jul 12 00:19:21.724668 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Jul 12 00:19:21.724675 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 00:19:21.724681 kernel: cpuidle: using governor menu Jul 12 00:19:21.724688 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 00:19:21.724694 kernel: ASID allocator initialised with 32768 entries Jul 12 00:19:21.724701 kernel: ACPI: bus type PCI registered Jul 12 00:19:21.724709 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 00:19:21.724715 kernel: Serial: AMBA PL011 UART driver Jul 12 00:19:21.724722 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 00:19:21.724728 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 00:19:21.724735 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 00:19:21.724741 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 00:19:21.724748 kernel: cryptd: max_cpu_qlen set to 1000 Jul 12 00:19:21.724754 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 00:19:21.724761 kernel: ACPI: Added _OSI(Module Device) Jul 12 00:19:21.724769 kernel: ACPI: Added _OSI(Processor Device) Jul 12 00:19:21.724775 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 00:19:21.724782 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 12 00:19:21.724788 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 12 00:19:21.724795 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 12 00:19:21.724801 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 00:19:21.724808 kernel: ACPI: Interpreter enabled Jul 12 00:19:21.724814 kernel: ACPI: Using GIC for interrupt routing Jul 12 00:19:21.724821 kernel: ACPI: MCFG table detected, 1 entries Jul 12 00:19:21.724829 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 12 00:19:21.724835 kernel: printk: console [ttyAMA0] enabled Jul 12 00:19:21.724842 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 00:19:21.724980 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 00:19:21.725041 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 00:19:21.725104 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 00:19:21.725162 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 12 00:19:21.725222 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 12 00:19:21.725230 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 12 00:19:21.725237 kernel: PCI host bridge to bus 0000:00 Jul 12 00:19:21.725301 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 12 00:19:21.725353 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 
00:19:21.725403 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 12 00:19:21.725454 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 00:19:21.725527 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 12 00:19:21.725596 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 12 00:19:21.725672 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 12 00:19:21.725763 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 12 00:19:21.725821 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:19:21.725927 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 00:19:21.725988 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 12 00:19:21.726049 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 12 00:19:21.726099 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 12 00:19:21.726149 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 00:19:21.726198 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 12 00:19:21.726207 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 00:19:21.726214 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 00:19:21.726220 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 00:19:21.726229 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 00:19:21.726235 kernel: iommu: Default domain type: Translated Jul 12 00:19:21.726243 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 00:19:21.726249 kernel: vgaarb: loaded Jul 12 00:19:21.726256 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 12 00:19:21.726263 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 12 00:19:21.726269 kernel: PTP clock support registered Jul 12 00:19:21.726275 kernel: Registered efivars operations Jul 12 00:19:21.726282 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 00:19:21.726289 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 00:19:21.726297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 00:19:21.726303 kernel: pnp: PnP ACPI init Jul 12 00:19:21.726371 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 12 00:19:21.726380 kernel: pnp: PnP ACPI: found 1 devices Jul 12 00:19:21.726387 kernel: NET: Registered PF_INET protocol family Jul 12 00:19:21.726394 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 00:19:21.726401 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 00:19:21.726408 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 00:19:21.726417 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 00:19:21.726424 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 12 00:19:21.726431 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 00:19:21.726438 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:19:21.726444 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 00:19:21.726451 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 00:19:21.726457 kernel: PCI: CLS 0 bytes, default 64 Jul 12 00:19:21.726464 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 12 00:19:21.726470 kernel: kvm [1]: HYP mode not available Jul 12 00:19:21.726478 kernel: Initialise system trusted keyrings Jul 12 00:19:21.726485 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 00:19:21.726491 kernel: Key type asymmetric registered Jul 12 00:19:21.726497 kernel: Asymmetric key parser 'x509' registered Jul 12 00:19:21.726504 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 12 00:19:21.726511 kernel: io scheduler mq-deadline registered Jul 12 00:19:21.726517 kernel: io scheduler kyber registered Jul 12 00:19:21.726524 kernel: io scheduler bfq registered Jul 12 00:19:21.726531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 00:19:21.726539 kernel: ACPI: button: Power Button [PWRB] Jul 12 00:19:21.726546 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 00:19:21.726605 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 12 00:19:21.726614 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 00:19:21.726620 kernel: thunder_xcv, ver 1.0 Jul 12 00:19:21.726627 kernel: thunder_bgx, ver 1.0 Jul 12 00:19:21.726633 kernel: nicpf, ver 1.0 Jul 12 00:19:21.726640 kernel: nicvf, ver 1.0 Jul 12 00:19:21.726705 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 00:19:21.726761 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:19:21 UTC (1752279561) Jul 12 00:19:21.726769 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 00:19:21.726776 kernel: NET: Registered PF_INET6 protocol family Jul 12 00:19:21.726783 kernel: Segment Routing with IPv6 Jul 12 00:19:21.726789 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 00:19:21.726796 kernel: NET: Registered PF_PACKET protocol family Jul 12 00:19:21.726802 kernel: Key type 
dns_resolver registered Jul 12 00:19:21.726809 kernel: registered taskstats version 1 Jul 12 00:19:21.726817 kernel: Loading compiled-in X.509 certificates Jul 12 00:19:21.726824 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: de2ee1d04443f96c763927c453375bbe23b5752a' Jul 12 00:19:21.726831 kernel: Key type .fscrypt registered Jul 12 00:19:21.726837 kernel: Key type fscrypt-provisioning registered Jul 12 00:19:21.726844 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 00:19:21.726851 kernel: ima: Allocated hash algorithm: sha1 Jul 12 00:19:21.726857 kernel: ima: No architecture policies found Jul 12 00:19:21.726863 kernel: clk: Disabling unused clocks Jul 12 00:19:21.726883 kernel: Freeing unused kernel memory: 36416K Jul 12 00:19:21.726892 kernel: Run /init as init process Jul 12 00:19:21.726899 kernel: with arguments: Jul 12 00:19:21.726905 kernel: /init Jul 12 00:19:21.726911 kernel: with environment: Jul 12 00:19:21.726918 kernel: HOME=/ Jul 12 00:19:21.726924 kernel: TERM=linux Jul 12 00:19:21.726930 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 00:19:21.726939 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:19:21.726949 systemd[1]: Detected virtualization kvm. Jul 12 00:19:21.726956 systemd[1]: Detected architecture arm64. Jul 12 00:19:21.726963 systemd[1]: Running in initrd. Jul 12 00:19:21.726970 systemd[1]: No hostname configured, using default hostname. Jul 12 00:19:21.726977 systemd[1]: Hostname set to . Jul 12 00:19:21.726984 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:19:21.726991 systemd[1]: Queued start job for default target initrd.target. Jul 12 00:19:21.726998 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:19:21.727006 systemd[1]: Reached target cryptsetup.target. Jul 12 00:19:21.727013 systemd[1]: Reached target paths.target. Jul 12 00:19:21.727020 systemd[1]: Reached target slices.target. Jul 12 00:19:21.727027 systemd[1]: Reached target swap.target. Jul 12 00:19:21.727034 systemd[1]: Reached target timers.target. Jul 12 00:19:21.727041 systemd[1]: Listening on iscsid.socket. Jul 12 00:19:21.727048 systemd[1]: Listening on iscsiuio.socket. Jul 12 00:19:21.727056 systemd[1]: Listening on systemd-journald-audit.socket. Jul 12 00:19:21.727063 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 12 00:19:21.727070 systemd[1]: Listening on systemd-journald.socket. Jul 12 00:19:21.727077 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:19:21.727084 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:19:21.727091 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:19:21.727098 systemd[1]: Reached target sockets.target. Jul 12 00:19:21.727105 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:19:21.727112 systemd[1]: Finished network-cleanup.service. Jul 12 00:19:21.727120 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 00:19:21.727127 systemd[1]: Starting systemd-journald.service... Jul 12 00:19:21.727134 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:19:21.727153 systemd[1]: Starting systemd-resolved.service... Jul 12 00:19:21.727160 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 12 00:19:21.727167 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:19:21.727174 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 00:19:21.727181 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:19:21.727188 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:19:21.727202 systemd-journald[290]: Journal started Jul 12 00:19:21.727248 systemd-journald[290]: Runtime Journal (/run/log/journal/0160c8b7324a41959694b4a392b9fc70) is 6.0M, max 48.7M, 42.6M free. Jul 12 00:19:21.727283 kernel: audit: type=1130 audit(1752279561.727:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.721020 systemd-modules-load[291]: Inserted module 'overlay' Jul 12 00:19:21.731178 systemd[1]: Started systemd-journald.service. Jul 12 00:19:21.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.732110 systemd[1]: Finished systemd-vconsole-setup.service. Jul 12 00:19:21.735042 kernel: audit: type=1130 audit(1752279561.730:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.735326 systemd[1]: Starting dracut-cmdline-ask.service... Jul 12 00:19:21.738371 kernel: audit: type=1130 audit(1752279561.734:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.744499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 00:19:21.744780 systemd-resolved[292]: Positive Trust Anchors: Jul 12 00:19:21.744797 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:19:21.744824 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:19:21.754064 kernel: Bridge firewalling registered Jul 12 00:19:21.750088 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 12 00:19:21.751891 systemd[1]: Started systemd-resolved.service. Jul 12 00:19:21.759067 kernel: audit: type=1130 audit(1752279561.755:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:21.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.752617 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 12 00:19:21.755900 systemd[1]: Reached target nss-lookup.target. Jul 12 00:19:21.760090 systemd[1]: Finished dracut-cmdline-ask.service. Jul 12 00:19:21.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.763898 kernel: audit: type=1130 audit(1752279561.760:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.764164 systemd[1]: Starting dracut-cmdline.service... Jul 12 00:19:21.765901 kernel: SCSI subsystem initialized Jul 12 00:19:21.773184 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 00:19:21.773248 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:19:21.773259 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 12 00:19:21.773545 dracut-cmdline[307]: dracut-dracut-053 Jul 12 00:19:21.775510 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 12 00:19:21.776359 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:19:21.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.779542 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6cb548cec1e3020e9c3dcbc1d7670f4d8bdc2e3c8e062898ccaed7fc9d588f65 Jul 12 00:19:21.782961 kernel: audit: type=1130 audit(1752279561.777:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.780260 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:19:21.790020 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:19:21.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.792904 kernel: audit: type=1130 audit(1752279561.789:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.840898 kernel: Loading iSCSI transport class v2.0-870. Jul 12 00:19:21.856909 kernel: iscsi: registered transport (tcp) Jul 12 00:19:21.871891 kernel: iscsi: registered transport (qla4xxx) Jul 12 00:19:21.871918 kernel: QLogic iSCSI HBA Driver Jul 12 00:19:21.910918 kernel: audit: type=1130 audit(1752279561.907:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:19:21.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:21.907724 systemd[1]: Finished dracut-cmdline.service. Jul 12 00:19:21.909397 systemd[1]: Starting dracut-pre-udev.service... Jul 12 00:19:21.956917 kernel: raid6: neonx8 gen() 13696 MB/s Jul 12 00:19:21.973948 kernel: raid6: neonx8 xor() 10746 MB/s Jul 12 00:19:21.990901 kernel: raid6: neonx4 gen() 13535 MB/s Jul 12 00:19:22.007925 kernel: raid6: neonx4 xor() 11201 MB/s Jul 12 00:19:22.024898 kernel: raid6: neonx2 gen() 13081 MB/s Jul 12 00:19:22.041914 kernel: raid6: neonx2 xor() 10369 MB/s Jul 12 00:19:22.058916 kernel: raid6: neonx1 gen() 10593 MB/s Jul 12 00:19:22.075898 kernel: raid6: neonx1 xor() 8760 MB/s Jul 12 00:19:22.092910 kernel: raid6: int64x8 gen() 6268 MB/s Jul 12 00:19:22.109914 kernel: raid6: int64x8 xor() 3539 MB/s Jul 12 00:19:22.126947 kernel: raid6: int64x4 gen() 7246 MB/s Jul 12 00:19:22.143903 kernel: raid6: int64x4 xor() 3846 MB/s Jul 12 00:19:22.160908 kernel: raid6: int64x2 gen() 6142 MB/s Jul 12 00:19:22.177927 kernel: raid6: int64x2 xor() 3314 MB/s Jul 12 00:19:22.195674 kernel: raid6: int64x1 gen() 5025 MB/s Jul 12 00:19:22.212122 kernel: raid6: int64x1 xor() 2645 MB/s Jul 12 00:19:22.212184 kernel: raid6: using algorithm neonx8 gen() 13696 MB/s Jul 12 00:19:22.212193 kernel: raid6: .... xor() 10746 MB/s, rmw enabled Jul 12 00:19:22.212202 kernel: raid6: using neon recovery algorithm Jul 12 00:19:22.222906 kernel: xor: measuring software checksum speed Jul 12 00:19:22.222964 kernel: 8regs : 17199 MB/sec Jul 12 00:19:22.224348 kernel: 32regs : 19119 MB/sec Jul 12 00:19:22.224379 kernel: arm64_neon : 27775 MB/sec Jul 12 00:19:22.224396 kernel: xor: using function: arm64_neon (27775 MB/sec) Jul 12 00:19:22.277937 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 12 00:19:22.290515 systemd[1]: Finished dracut-pre-udev.service. Jul 12 00:19:22.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:22.292120 systemd[1]: Starting systemd-udevd.service... Jul 12 00:19:22.295573 kernel: audit: type=1130 audit(1752279562.290:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:22.290000 audit: BPF prog-id=7 op=LOAD Jul 12 00:19:22.290000 audit: BPF prog-id=8 op=LOAD Jul 12 00:19:22.308594 systemd-udevd[492]: Using default interface naming scheme 'v252'. Jul 12 00:19:22.311974 systemd[1]: Started systemd-udevd.service. Jul 12 00:19:22.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:22.313923 systemd[1]: Starting dracut-pre-trigger.service... Jul 12 00:19:22.324742 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Jul 12 00:19:22.355777 systemd[1]: Finished dracut-pre-trigger.service. Jul 12 00:19:22.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:22.357286 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:19:22.391347 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:19:22.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:22.421388 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 00:19:22.427457 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 00:19:22.427473 kernel: GPT:9289727 != 19775487 Jul 12 00:19:22.427482 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 00:19:22.427490 kernel: GPT:9289727 != 19775487 Jul 12 00:19:22.427498 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 00:19:22.427506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:19:22.441897 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (543) Jul 12 00:19:22.444247 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 12 00:19:22.445575 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 12 00:19:22.456389 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 12 00:19:22.459977 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 12 00:19:22.464939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:19:22.466450 systemd[1]: Starting disk-uuid.service... Jul 12 00:19:22.472337 disk-uuid[562]: Primary Header is updated. Jul 12 00:19:22.472337 disk-uuid[562]: Secondary Entries is updated. Jul 12 00:19:22.472337 disk-uuid[562]: Secondary Header is updated. Jul 12 00:19:22.475888 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:19:23.486445 disk-uuid[563]: The operation has completed successfully. Jul 12 00:19:23.487982 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 00:19:23.515169 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 00:19:23.516069 systemd[1]: Finished disk-uuid.service. Jul 12 00:19:23.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.519958 systemd[1]: Starting verity-setup.service... Jul 12 00:19:23.539887 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 12 00:19:23.564724 systemd[1]: Found device dev-mapper-usr.device. Jul 12 00:19:23.567103 systemd[1]: Mounting sysusr-usr.mount... Jul 12 00:19:23.569004 systemd[1]: Finished verity-setup.service. Jul 12 00:19:23.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.623904 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 12 00:19:23.624348 systemd[1]: Mounted sysusr-usr.mount. Jul 12 00:19:23.625212 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 12 00:19:23.625969 systemd[1]: Starting ignition-setup.service... 
Jul 12 00:19:23.627740 systemd[1]: Starting parse-ip-for-networkd.service... Jul 12 00:19:23.654949 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:19:23.655000 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:19:23.655009 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:19:23.663718 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 12 00:19:23.682289 systemd[1]: Finished ignition-setup.service. Jul 12 00:19:23.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.683802 systemd[1]: Starting ignition-fetch-offline.service... Jul 12 00:19:23.713726 systemd[1]: Finished parse-ip-for-networkd.service. Jul 12 00:19:23.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.716000 audit: BPF prog-id=9 op=LOAD Jul 12 00:19:23.717923 systemd[1]: Starting systemd-networkd.service... Jul 12 00:19:23.754165 systemd-networkd[734]: lo: Link UP Jul 12 00:19:23.754177 systemd-networkd[734]: lo: Gained carrier Jul 12 00:19:23.755185 systemd-networkd[734]: Enumeration completed Jul 12 00:19:23.755574 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 00:19:23.755974 systemd[1]: Started systemd-networkd.service. Jul 12 00:19:23.756852 systemd[1]: Reached target network.target. Jul 12 00:19:23.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.757685 systemd-networkd[734]: eth0: Link UP Jul 12 00:19:23.757688 systemd-networkd[734]: eth0: Gained carrier Jul 12 00:19:23.758763 systemd[1]: Starting iscsiuio.service... Jul 12 00:19:23.772672 systemd[1]: Started iscsiuio.service. Jul 12 00:19:23.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.774284 systemd[1]: Starting iscsid.service... Jul 12 00:19:23.778838 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:19:23.778838 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 12 00:19:23.778838 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 12 00:19:23.778838 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 12 00:19:23.778838 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 12 00:19:23.778838 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 12 00:19:23.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 12 00:19:23.782074 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:19:23.783637 systemd[1]: Started iscsid.service. Jul 12 00:19:23.785496 systemd[1]: Starting dracut-initqueue.service... Jul 12 00:19:23.797274 systemd[1]: Finished dracut-initqueue.service. Jul 12 00:19:23.798135 systemd[1]: Reached target remote-fs-pre.target. Jul 12 00:19:23.799092 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:19:23.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.800284 systemd[1]: Reached target remote-fs.target. Jul 12 00:19:23.801094 ignition[695]: Ignition 2.14.0 Jul 12 00:19:23.801100 ignition[695]: Stage: fetch-offline Jul 12 00:19:23.801135 ignition[695]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:23.802710 systemd[1]: Starting dracut-pre-mount.service... Jul 12 00:19:23.801143 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:23.801259 ignition[695]: parsed url from cmdline: "" Jul 12 00:19:23.801262 ignition[695]: no config URL provided Jul 12 00:19:23.801266 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 00:19:23.801272 ignition[695]: no config at "/usr/lib/ignition/user.ign" Jul 12 00:19:23.801289 ignition[695]: op(1): [started] loading QEMU firmware config module Jul 12 00:19:23.801293 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 00:19:23.808454 ignition[695]: op(1): [finished] loading QEMU firmware config module Jul 12 00:19:23.813187 systemd[1]: Finished dracut-pre-mount.service. Jul 12 00:19:23.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.815413 ignition[695]: parsing config with SHA512: 10e0102c801d1587e527c69b1df0f7dd55d80f55ecefd0cd01347217887efd5828295d12be2132f062b033513b77efa25ba7d563abe038e1b27348731384726f Jul 12 00:19:23.820982 unknown[695]: fetched base config from "system" Jul 12 00:19:23.821357 ignition[695]: fetch-offline: fetch-offline passed Jul 12 00:19:23.820993 unknown[695]: fetched user config from "qemu" Jul 12 00:19:23.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.821412 ignition[695]: Ignition finished successfully Jul 12 00:19:23.822405 systemd[1]: Finished ignition-fetch-offline.service. Jul 12 00:19:23.823363 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 00:19:23.824033 systemd[1]: Starting ignition-kargs.service... Jul 12 00:19:23.832353 ignition[760]: Ignition 2.14.0 Jul 12 00:19:23.832361 ignition[760]: Stage: kargs Jul 12 00:19:23.832598 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:23.832614 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:23.833435 ignition[760]: kargs: kargs passed Jul 12 00:19:23.834827 systemd[1]: Finished ignition-kargs.service. 
Jul 12 00:19:23.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.833481 ignition[760]: Ignition finished successfully Jul 12 00:19:23.836373 systemd[1]: Starting ignition-disks.service... Jul 12 00:19:23.842388 ignition[766]: Ignition 2.14.0 Jul 12 00:19:23.842397 ignition[766]: Stage: disks Jul 12 00:19:23.842479 ignition[766]: no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:23.844172 systemd[1]: Finished ignition-disks.service. Jul 12 00:19:23.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.842489 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:23.845271 systemd[1]: Reached target initrd-root-device.target. Jul 12 00:19:23.843144 ignition[766]: disks: disks passed Jul 12 00:19:23.846201 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:19:23.843178 ignition[766]: Ignition finished successfully Jul 12 00:19:23.847390 systemd[1]: Reached target local-fs.target. Jul 12 00:19:23.848498 systemd[1]: Reached target sysinit.target. Jul 12 00:19:23.849478 systemd[1]: Reached target basic.target. Jul 12 00:19:23.851167 systemd[1]: Starting systemd-fsck-root.service... Jul 12 00:19:23.861855 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 12 00:19:23.864781 systemd[1]: Finished systemd-fsck-root.service. Jul 12 00:19:23.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.867168 systemd[1]: Mounting sysroot.mount... Jul 12 00:19:23.873483 systemd[1]: Mounted sysroot.mount. Jul 12 00:19:23.874387 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 12 00:19:23.874048 systemd[1]: Reached target initrd-root-fs.target. Jul 12 00:19:23.875795 systemd[1]: Mounting sysroot-usr.mount... Jul 12 00:19:23.876585 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 12 00:19:23.876620 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 00:19:23.876641 systemd[1]: Reached target ignition-diskful.target. Jul 12 00:19:23.878290 systemd[1]: Mounted sysroot-usr.mount. Jul 12 00:19:23.879446 systemd[1]: Starting initrd-setup-root.service... Jul 12 00:19:23.883672 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 00:19:23.888126 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 12 00:19:23.891787 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 00:19:23.895454 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 00:19:23.922561 systemd[1]: Finished initrd-setup-root.service. Jul 12 00:19:23.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.923901 systemd[1]: Starting ignition-mount.service... Jul 12 00:19:23.925442 systemd[1]: Starting sysroot-boot.service... 
Jul 12 00:19:23.928845 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Jul 12 00:19:23.937099 ignition[826]: INFO : Ignition 2.14.0 Jul 12 00:19:23.937099 ignition[826]: INFO : Stage: mount Jul 12 00:19:23.938271 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:23.938271 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:23.938271 ignition[826]: INFO : mount: mount passed Jul 12 00:19:23.938271 ignition[826]: INFO : Ignition finished successfully Jul 12 00:19:23.941465 systemd[1]: Finished ignition-mount.service. Jul 12 00:19:23.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:23.945522 systemd[1]: Finished sysroot-boot.service. Jul 12 00:19:23.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:24.578883 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 12 00:19:24.585442 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (835) Jul 12 00:19:24.585483 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 00:19:24.585494 kernel: BTRFS info (device vda6): using free space tree Jul 12 00:19:24.586877 kernel: BTRFS info (device vda6): has skinny extents Jul 12 00:19:24.589274 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 12 00:19:24.590648 systemd[1]: Starting ignition-files.service... Jul 12 00:19:24.604416 ignition[855]: INFO : Ignition 2.14.0 Jul 12 00:19:24.604416 ignition[855]: INFO : Stage: files Jul 12 00:19:24.606110 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:24.606110 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:24.606110 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Jul 12 00:19:24.606110 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 00:19:24.606110 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 00:19:24.612542 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 00:19:24.612542 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 00:19:24.612542 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:19:24.612542 ignition[855]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:19:24.612542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 12 00:19:24.610665 unknown[855]: wrote ssh authorized keys file for user: core Jul 12 00:19:25.114254 systemd-networkd[734]: eth0: Gained IPv6LL Jul 12 00:19:25.138275 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 12 00:19:25.528260 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 12 00:19:25.528260 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 12 00:19:25.532642 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:19:25.532642 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 00:19:25.532642 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 12 00:19:25.532642 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 00:19:25.532642 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:19:25.564628 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 00:19:25.565788 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 00:19:25.565788 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:19:25.565788 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 00:19:25.565788 ignition[855]: INFO : files: files passed Jul 12 00:19:25.565788 ignition[855]: INFO : Ignition finished successfully Jul 12 00:19:25.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.566245 systemd[1]: Finished ignition-files.service. Jul 12 00:19:25.568636 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 12 00:19:25.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.569688 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Jul 12 00:19:25.578243 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 12 00:19:25.570319 systemd[1]: Starting ignition-quench.service... Jul 12 00:19:25.580365 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 00:19:25.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.574613 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 00:19:25.574699 systemd[1]: Finished ignition-quench.service. Jul 12 00:19:25.580124 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 12 00:19:25.581130 systemd[1]: Reached target ignition-complete.target. Jul 12 00:19:25.583540 systemd[1]: Starting initrd-parse-etc.service... Jul 12 00:19:25.596345 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 00:19:25.596438 systemd[1]: Finished initrd-parse-etc.service. Jul 12 00:19:25.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.597825 systemd[1]: Reached target initrd-fs.target. Jul 12 00:19:25.598695 systemd[1]: Reached target initrd.target. Jul 12 00:19:25.599654 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 12 00:19:25.600395 systemd[1]: Starting dracut-pre-pivot.service... Jul 12 00:19:25.610548 systemd[1]: Finished dracut-pre-pivot.service. Jul 12 00:19:25.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.611995 systemd[1]: Starting initrd-cleanup.service... Jul 12 00:19:25.620444 systemd[1]: Stopped target nss-lookup.target. Jul 12 00:19:25.621136 systemd[1]: Stopped target remote-cryptsetup.target. Jul 12 00:19:25.622189 systemd[1]: Stopped target timers.target. Jul 12 00:19:25.623444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 00:19:25.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.623557 systemd[1]: Stopped dracut-pre-pivot.service. Jul 12 00:19:25.624745 systemd[1]: Stopped target initrd.target. Jul 12 00:19:25.626085 systemd[1]: Stopped target basic.target. Jul 12 00:19:25.627160 systemd[1]: Stopped target ignition-complete.target. Jul 12 00:19:25.628316 systemd[1]: Stopped target ignition-diskful.target. Jul 12 00:19:25.629406 systemd[1]: Stopped target initrd-root-device.target. Jul 12 00:19:25.630557 systemd[1]: Stopped target remote-fs.target. Jul 12 00:19:25.631913 systemd[1]: Stopped target remote-fs-pre.target. Jul 12 00:19:25.633030 systemd[1]: Stopped target sysinit.target. Jul 12 00:19:25.634143 systemd[1]: Stopped target local-fs.target. Jul 12 00:19:25.635394 systemd[1]: Stopped target local-fs-pre.target. 
Jul 12 00:19:25.636378 systemd[1]: Stopped target swap.target. Jul 12 00:19:25.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.637343 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 00:19:25.637452 systemd[1]: Stopped dracut-pre-mount.service. Jul 12 00:19:25.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.638656 systemd[1]: Stopped target cryptsetup.target. Jul 12 00:19:25.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.639598 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 00:19:25.639687 systemd[1]: Stopped dracut-initqueue.service. Jul 12 00:19:25.640886 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 00:19:25.640984 systemd[1]: Stopped ignition-fetch-offline.service. Jul 12 00:19:25.642964 systemd[1]: Stopped target paths.target. Jul 12 00:19:25.643795 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 00:19:25.647918 systemd[1]: Stopped systemd-ask-password-console.path. Jul 12 00:19:25.649331 systemd[1]: Stopped target slices.target. Jul 12 00:19:25.650513 systemd[1]: Stopped target sockets.target. Jul 12 00:19:25.651515 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 00:19:25.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.651621 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 12 00:19:25.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.652884 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 00:19:25.656563 iscsid[744]: iscsid shutting down. Jul 12 00:19:25.652981 systemd[1]: Stopped ignition-files.service. Jul 12 00:19:25.655027 systemd[1]: Stopping ignition-mount.service... Jul 12 00:19:25.655687 systemd[1]: Stopping iscsid.service... Jul 12 00:19:25.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.657696 systemd[1]: Stopping sysroot-boot.service... 
Jul 12 00:19:25.667701 ignition[895]: INFO : Ignition 2.14.0 Jul 12 00:19:25.667701 ignition[895]: INFO : Stage: umount Jul 12 00:19:25.667701 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 00:19:25.667701 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 00:19:25.667701 ignition[895]: INFO : umount: umount passed Jul 12 00:19:25.667701 ignition[895]: INFO : Ignition finished successfully Jul 12 00:19:25.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.658697 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 00:19:25.658904 systemd[1]: Stopped systemd-udev-trigger.service. Jul 12 00:19:25.660279 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 00:19:25.660432 systemd[1]: Stopped dracut-pre-trigger.service. Jul 12 00:19:25.663156 systemd[1]: iscsid.service: Deactivated successfully. Jul 12 00:19:25.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.663262 systemd[1]: Stopped iscsid.service. Jul 12 00:19:25.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.664652 systemd[1]: iscsid.socket: Deactivated successfully. Jul 12 00:19:25.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.664796 systemd[1]: Closed iscsid.socket. Jul 12 00:19:25.666929 systemd[1]: Stopping iscsiuio.service... Jul 12 00:19:25.669714 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 00:19:25.670300 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 00:19:25.670500 systemd[1]: Finished initrd-cleanup.service. Jul 12 00:19:25.671809 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 00:19:25.671926 systemd[1]: Stopped ignition-mount.service. Jul 12 00:19:25.674213 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 12 00:19:25.674307 systemd[1]: Stopped iscsiuio.service. Jul 12 00:19:25.676030 systemd[1]: Stopped target network.target. Jul 12 00:19:25.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.677186 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 00:19:25.677220 systemd[1]: Closed iscsiuio.socket. 
Jul 12 00:19:25.678140 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 00:19:25.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.678179 systemd[1]: Stopped ignition-disks.service. Jul 12 00:19:25.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.680062 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 00:19:25.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.680103 systemd[1]: Stopped ignition-kargs.service. Jul 12 00:19:25.681088 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 00:19:25.681123 systemd[1]: Stopped ignition-setup.service. Jul 12 00:19:25.682212 systemd[1]: Stopping systemd-networkd.service... Jul 12 00:19:25.683247 systemd[1]: Stopping systemd-resolved.service... Jul 12 00:19:25.687921 systemd-networkd[734]: eth0: DHCPv6 lease lost Jul 12 00:19:25.688949 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 00:19:25.689047 systemd[1]: Stopped systemd-networkd.service. Jul 12 00:19:25.703000 audit: BPF prog-id=9 op=UNLOAD Jul 12 00:19:25.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.691310 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 00:19:25.691343 systemd[1]: Closed systemd-networkd.socket. Jul 12 00:19:25.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.692719 systemd[1]: Stopping network-cleanup.service... Jul 12 00:19:25.694290 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 00:19:25.711000 audit: BPF prog-id=6 op=UNLOAD Jul 12 00:19:25.694354 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 12 00:19:25.695616 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:19:25.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.695665 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:19:25.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.696881 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 00:19:25.696930 systemd[1]: Stopped systemd-modules-load.service. Jul 12 00:19:25.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.697954 systemd[1]: Stopping systemd-udevd.service... Jul 12 00:19:25.704377 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 12 00:19:25.705143 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 00:19:25.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.705768 systemd[1]: Stopped systemd-resolved.service. Jul 12 00:19:25.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.708167 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 00:19:25.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.708252 systemd[1]: Stopped sysroot-boot.service. Jul 12 00:19:25.711800 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 00:19:25.711985 systemd[1]: Stopped initrd-setup-root.service. Jul 12 00:19:25.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.714549 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 00:19:25.714698 systemd[1]: Stopped systemd-udevd.service. Jul 12 00:19:25.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.717807 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 00:19:25.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.717981 systemd[1]: Stopped network-cleanup.service. Jul 12 00:19:25.719789 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 00:19:25.719828 systemd[1]: Closed systemd-udevd-control.socket. Jul 12 00:19:25.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:25.724450 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 00:19:25.724488 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 12 00:19:25.725460 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 00:19:25.725503 systemd[1]: Stopped dracut-pre-udev.service. Jul 12 00:19:25.726887 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 00:19:25.726930 systemd[1]: Stopped dracut-cmdline.service. Jul 12 00:19:25.728008 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 00:19:25.728043 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 12 00:19:25.729800 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 12 00:19:25.730738 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jul 12 00:19:25.730793 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 12 00:19:25.732804 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 00:19:25.732851 systemd[1]: Stopped kmod-static-nodes.service. Jul 12 00:19:25.734229 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 00:19:25.734268 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 12 00:19:25.736103 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 12 00:19:25.736523 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 00:19:25.736621 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 12 00:19:25.738075 systemd[1]: Reached target initrd-switch-root.target. Jul 12 00:19:25.739788 systemd[1]: Starting initrd-switch-root.service... Jul 12 00:19:25.746507 systemd[1]: Switching root. Jul 12 00:19:25.764091 systemd-journald[290]: Journal stopped Jul 12 00:19:27.801100 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 12 00:19:27.801157 kernel: SELinux: Class mctp_socket not defined in policy. Jul 12 00:19:27.801172 kernel: SELinux: Class anon_inode not defined in policy. Jul 12 00:19:27.801185 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 12 00:19:27.801195 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 00:19:27.801204 kernel: SELinux: policy capability open_perms=1 Jul 12 00:19:27.801214 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 00:19:27.801223 kernel: SELinux: policy capability always_check_network=0 Jul 12 00:19:27.801233 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 00:19:27.801242 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 00:19:27.801251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 00:19:27.801262 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 00:19:27.801277 systemd[1]: Successfully loaded SELinux policy in 36.803ms. Jul 12 00:19:27.801293 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.158ms. Jul 12 00:19:27.801304 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 12 00:19:27.801315 systemd[1]: Detected virtualization kvm. Jul 12 00:19:27.801325 systemd[1]: Detected architecture arm64. Jul 12 00:19:27.801335 systemd[1]: Detected first boot. Jul 12 00:19:27.801346 systemd[1]: Initializing machine ID from VM UUID. Jul 12 00:19:27.801357 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 12 00:19:27.801366 systemd[1]: Populated /etc with preset unit settings. Jul 12 00:19:27.801380 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:19:27.801391 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:19:27.801403 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 12 00:19:27.801413 kernel: kauditd_printk_skb: 81 callbacks suppressed Jul 12 00:19:27.801423 kernel: audit: type=1334 audit(1752279567.656:85): prog-id=12 op=LOAD Jul 12 00:19:27.801434 kernel: audit: type=1334 audit(1752279567.656:86): prog-id=3 op=UNLOAD Jul 12 00:19:27.801444 kernel: audit: type=1334 audit(1752279567.656:87): prog-id=13 op=LOAD Jul 12 00:19:27.801453 kernel: audit: type=1334 audit(1752279567.657:88): prog-id=14 op=LOAD Jul 12 00:19:27.801462 kernel: audit: type=1334 audit(1752279567.657:89): prog-id=4 op=UNLOAD Jul 12 00:19:27.801471 kernel: audit: type=1334 audit(1752279567.657:90): prog-id=5 op=UNLOAD Jul 12 00:19:27.801481 kernel: audit: type=1334 audit(1752279567.658:91): prog-id=15 op=LOAD Jul 12 00:19:27.801490 kernel: audit: type=1334 audit(1752279567.658:92): prog-id=12 op=UNLOAD Jul 12 00:19:27.801501 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 00:19:27.801511 kernel: audit: type=1334 audit(1752279567.658:93): prog-id=16 op=LOAD Jul 12 00:19:27.801522 kernel: audit: type=1334 audit(1752279567.659:94): prog-id=17 op=LOAD Jul 12 00:19:27.801532 systemd[1]: Stopped initrd-switch-root.service. Jul 12 00:19:27.801542 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 00:19:27.801553 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 12 00:19:27.801563 systemd[1]: Created slice system-addon\x2drun.slice. Jul 12 00:19:27.801574 systemd[1]: Created slice system-getty.slice. Jul 12 00:19:27.801585 systemd[1]: Created slice system-modprobe.slice. Jul 12 00:19:27.801596 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 12 00:19:27.801607 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 12 00:19:27.801617 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 12 00:19:27.801627 systemd[1]: Created slice user.slice. Jul 12 00:19:27.801637 systemd[1]: Started systemd-ask-password-console.path. Jul 12 00:19:27.801648 systemd[1]: Started systemd-ask-password-wall.path. Jul 12 00:19:27.801658 systemd[1]: Set up automount boot.automount. Jul 12 00:19:27.801669 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 12 00:19:27.801679 systemd[1]: Stopped target initrd-switch-root.target. Jul 12 00:19:27.801689 systemd[1]: Stopped target initrd-fs.target. Jul 12 00:19:27.801700 systemd[1]: Stopped target initrd-root-fs.target. Jul 12 00:19:27.801710 systemd[1]: Reached target integritysetup.target. Jul 12 00:19:27.801721 systemd[1]: Reached target remote-cryptsetup.target. Jul 12 00:19:27.801731 systemd[1]: Reached target remote-fs.target. Jul 12 00:19:27.801743 systemd[1]: Reached target slices.target. Jul 12 00:19:27.801753 systemd[1]: Reached target swap.target. Jul 12 00:19:27.801763 systemd[1]: Reached target torcx.target. Jul 12 00:19:27.801773 systemd[1]: Reached target veritysetup.target. Jul 12 00:19:27.801783 systemd[1]: Listening on systemd-coredump.socket. Jul 12 00:19:27.801793 systemd[1]: Listening on systemd-initctl.socket. Jul 12 00:19:27.801803 systemd[1]: Listening on systemd-networkd.socket. Jul 12 00:19:27.801814 systemd[1]: Listening on systemd-udevd-control.socket. Jul 12 00:19:27.801824 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 12 00:19:27.801834 systemd[1]: Listening on systemd-userdbd.socket. Jul 12 00:19:27.801846 systemd[1]: Mounting dev-hugepages.mount... Jul 12 00:19:27.801878 systemd[1]: Mounting dev-mqueue.mount... Jul 12 00:19:27.801890 systemd[1]: Mounting media.mount... 
Jul 12 00:19:27.801900 systemd[1]: Mounting sys-kernel-debug.mount... Jul 12 00:19:27.801910 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 12 00:19:27.801920 systemd[1]: Mounting tmp.mount... Jul 12 00:19:27.801931 systemd[1]: Starting flatcar-tmpfiles.service... Jul 12 00:19:27.801941 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:19:27.801951 systemd[1]: Starting kmod-static-nodes.service... Jul 12 00:19:27.801965 systemd[1]: Starting modprobe@configfs.service... Jul 12 00:19:27.801977 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:19:27.801987 systemd[1]: Starting modprobe@drm.service... Jul 12 00:19:27.801997 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:19:27.802008 systemd[1]: Starting modprobe@fuse.service... Jul 12 00:19:27.802017 systemd[1]: Starting modprobe@loop.service... Jul 12 00:19:27.802028 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 00:19:27.802038 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 00:19:27.802048 systemd[1]: Stopped systemd-fsck-root.service. Jul 12 00:19:27.802060 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 00:19:27.802070 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 00:19:27.802080 systemd[1]: Stopped systemd-journald.service. Jul 12 00:19:27.802090 systemd[1]: Starting systemd-journald.service... Jul 12 00:19:27.802100 kernel: loop: module loaded Jul 12 00:19:27.802111 kernel: fuse: init (API version 7.34) Jul 12 00:19:27.802122 systemd[1]: Starting systemd-modules-load.service... Jul 12 00:19:27.802133 systemd[1]: Starting systemd-network-generator.service... Jul 12 00:19:27.802143 systemd[1]: Starting systemd-remount-fs.service... Jul 12 00:19:27.802153 systemd[1]: Starting systemd-udev-trigger.service... Jul 12 00:19:27.802164 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 00:19:27.802175 systemd[1]: Stopped verity-setup.service. Jul 12 00:19:27.802187 systemd[1]: Mounted dev-hugepages.mount. Jul 12 00:19:27.802197 systemd[1]: Mounted dev-mqueue.mount. Jul 12 00:19:27.802207 systemd[1]: Mounted media.mount. Jul 12 00:19:27.802217 systemd[1]: Mounted sys-kernel-debug.mount. Jul 12 00:19:27.802227 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 12 00:19:27.802237 systemd[1]: Mounted tmp.mount. Jul 12 00:19:27.802248 systemd[1]: Finished kmod-static-nodes.service. Jul 12 00:19:27.802258 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 00:19:27.802270 systemd[1]: Finished modprobe@configfs.service. Jul 12 00:19:27.802280 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:19:27.802291 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:19:27.802301 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:19:27.802311 systemd[1]: Finished modprobe@drm.service. Jul 12 00:19:27.802322 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:19:27.802333 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:19:27.802343 systemd[1]: Finished flatcar-tmpfiles.service. Jul 12 00:19:27.802353 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 00:19:27.802364 systemd[1]: Finished modprobe@fuse.service. Jul 12 00:19:27.802374 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 12 00:19:27.802387 systemd-journald[990]: Journal started Jul 12 00:19:27.802426 systemd-journald[990]: Runtime Journal (/run/log/journal/0160c8b7324a41959694b4a392b9fc70) is 6.0M, max 48.7M, 42.6M free. Jul 12 00:19:27.802456 systemd[1]: Finished modprobe@loop.service. Jul 12 00:19:25.831000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 00:19:25.898000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:19:25.898000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 12 00:19:25.898000 audit: BPF prog-id=10 op=LOAD Jul 12 00:19:25.898000 audit: BPF prog-id=10 op=UNLOAD Jul 12 00:19:25.898000 audit: BPF prog-id=11 op=LOAD Jul 12 00:19:25.898000 audit: BPF prog-id=11 op=UNLOAD Jul 12 00:19:25.947000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 12 00:19:25.947000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40002038cc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:19:25.947000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:19:25.948000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 12 00:19:25.948000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40002039a5 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:19:25.948000 audit: CWD cwd="/" Jul 12 00:19:25.948000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:19:25.948000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 12 00:19:25.948000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 12 00:19:27.656000 audit: BPF prog-id=12 op=LOAD Jul 12 00:19:27.656000 audit: BPF prog-id=3 op=UNLOAD Jul 12 00:19:27.656000 audit: BPF prog-id=13 op=LOAD Jul 12 
00:19:27.657000 audit: BPF prog-id=14 op=LOAD Jul 12 00:19:27.657000 audit: BPF prog-id=4 op=UNLOAD Jul 12 00:19:27.657000 audit: BPF prog-id=5 op=UNLOAD Jul 12 00:19:27.658000 audit: BPF prog-id=15 op=LOAD Jul 12 00:19:27.658000 audit: BPF prog-id=12 op=UNLOAD Jul 12 00:19:27.658000 audit: BPF prog-id=16 op=LOAD Jul 12 00:19:27.659000 audit: BPF prog-id=17 op=LOAD Jul 12 00:19:27.659000 audit: BPF prog-id=13 op=UNLOAD Jul 12 00:19:27.659000 audit: BPF prog-id=14 op=UNLOAD Jul 12 00:19:27.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.678000 audit: BPF prog-id=15 op=UNLOAD Jul 12 00:19:27.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.751000 audit: BPF prog-id=18 op=LOAD Jul 12 00:19:27.755000 audit: BPF prog-id=19 op=LOAD Jul 12 00:19:27.758000 audit: BPF prog-id=20 op=LOAD Jul 12 00:19:27.758000 audit: BPF prog-id=16 op=UNLOAD Jul 12 00:19:27.758000 audit: BPF prog-id=17 op=UNLOAD Jul 12 00:19:27.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:27.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.799000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 12 00:19:27.799000 audit[990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc6f7c2b0 a2=4000 a3=1 items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:19:27.799000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 12 00:19:27.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.655589 systemd[1]: Queued start job for default target multi-user.target. 
Jul 12 00:19:25.945573 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:19:27.655602 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 12 00:19:27.804008 systemd[1]: Started systemd-journald.service. Jul 12 00:19:25.945827 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:19:27.661124 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 00:19:25.945845 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:19:25.945896 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 12 00:19:25.945907 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 12 00:19:25.945935 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 12 00:19:25.945947 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 12 00:19:25.946156 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 12 00:19:25.946197 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 12 00:19:25.946209 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 12 00:19:25.947246 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 12 00:19:27.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:25.947281 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 12 00:19:25.947301 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 12 00:19:25.947317 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 12 00:19:25.947336 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 12 00:19:25.947350 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:25Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 12 00:19:27.411979 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:19:27.412253 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:19:27.805235 systemd[1]: Finished systemd-modules-load.service. Jul 12 00:19:27.412349 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:19:27.412502 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 12 00:19:27.412550 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 12 00:19:27.412612 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-12T00:19:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 12 00:19:27.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.806320 systemd[1]: Finished systemd-network-generator.service. 
Jul 12 00:19:27.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.807261 systemd[1]: Finished systemd-remount-fs.service. Jul 12 00:19:27.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.808308 systemd[1]: Reached target network-pre.target. Jul 12 00:19:27.810062 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 12 00:19:27.811646 systemd[1]: Mounting sys-kernel-config.mount... Jul 12 00:19:27.812233 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 00:19:27.813647 systemd[1]: Starting systemd-hwdb-update.service... Jul 12 00:19:27.815445 systemd[1]: Starting systemd-journal-flush.service... Jul 12 00:19:27.816216 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:19:27.817230 systemd[1]: Starting systemd-random-seed.service... Jul 12 00:19:27.817961 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:19:27.819428 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:19:27.821942 systemd-journald[990]: Time spent on flushing to /var/log/journal/0160c8b7324a41959694b4a392b9fc70 is 13.804ms for 979 entries. Jul 12 00:19:27.821942 systemd-journald[990]: System Journal (/var/log/journal/0160c8b7324a41959694b4a392b9fc70) is 8.0M, max 195.6M, 187.6M free. Jul 12 00:19:27.850640 systemd-journald[990]: Received client request to flush runtime journal. Jul 12 00:19:27.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.822836 systemd[1]: Starting systemd-sysusers.service... Jul 12 00:19:27.826193 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 12 00:19:27.851802 udevadm[1028]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 12 00:19:27.826956 systemd[1]: Mounted sys-kernel-config.mount. Jul 12 00:19:27.827783 systemd[1]: Finished systemd-random-seed.service. Jul 12 00:19:27.828654 systemd[1]: Reached target first-boot-complete.target. Jul 12 00:19:27.829670 systemd[1]: Finished systemd-udev-trigger.service. Jul 12 00:19:27.831669 systemd[1]: Starting systemd-udev-settle.service... 
Jul 12 00:19:27.842326 systemd[1]: Finished systemd-sysctl.service. Jul 12 00:19:27.847461 systemd[1]: Finished systemd-sysusers.service. Jul 12 00:19:27.849223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 12 00:19:27.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:27.851636 systemd[1]: Finished systemd-journal-flush.service. Jul 12 00:19:27.866212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 12 00:19:27.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.184349 systemd[1]: Finished systemd-hwdb-update.service. Jul 12 00:19:28.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.184000 audit: BPF prog-id=21 op=LOAD Jul 12 00:19:28.184000 audit: BPF prog-id=22 op=LOAD Jul 12 00:19:28.184000 audit: BPF prog-id=7 op=UNLOAD Jul 12 00:19:28.184000 audit: BPF prog-id=8 op=UNLOAD Jul 12 00:19:28.186291 systemd[1]: Starting systemd-udevd.service... Jul 12 00:19:28.210628 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Jul 12 00:19:28.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.225000 audit: BPF prog-id=23 op=LOAD Jul 12 00:19:28.221742 systemd[1]: Started systemd-udevd.service. Jul 12 00:19:28.226754 systemd[1]: Starting systemd-networkd.service... Jul 12 00:19:28.237000 audit: BPF prog-id=24 op=LOAD Jul 12 00:19:28.239000 audit: BPF prog-id=25 op=LOAD Jul 12 00:19:28.241000 audit: BPF prog-id=26 op=LOAD Jul 12 00:19:28.243056 systemd[1]: Starting systemd-userdbd.service... Jul 12 00:19:28.246986 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 12 00:19:28.295042 systemd[1]: Started systemd-userdbd.service. Jul 12 00:19:28.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.301055 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 12 00:19:28.331267 systemd[1]: Finished systemd-udev-settle.service. Jul 12 00:19:28.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.333197 systemd[1]: Starting lvm2-activation-early.service... Jul 12 00:19:28.347116 systemd-networkd[1050]: lo: Link UP Jul 12 00:19:28.347378 systemd-networkd[1050]: lo: Gained carrier Jul 12 00:19:28.347815 systemd-networkd[1050]: Enumeration completed Jul 12 00:19:28.348053 systemd[1]: Started systemd-networkd.service. Jul 12 00:19:28.348054 systemd-networkd[1050]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 12 00:19:28.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.350989 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:19:28.351377 systemd-networkd[1050]: eth0: Link UP Jul 12 00:19:28.351493 systemd-networkd[1050]: eth0: Gained carrier Jul 12 00:19:28.373003 systemd-networkd[1050]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 00:19:28.378653 systemd[1]: Finished lvm2-activation-early.service. Jul 12 00:19:28.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.379586 systemd[1]: Reached target cryptsetup.target. Jul 12 00:19:28.381605 systemd[1]: Starting lvm2-activation.service... Jul 12 00:19:28.385341 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 12 00:19:28.422809 systemd[1]: Finished lvm2-activation.service. Jul 12 00:19:28.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.423662 systemd[1]: Reached target local-fs-pre.target. Jul 12 00:19:28.424359 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 00:19:28.424392 systemd[1]: Reached target local-fs.target. Jul 12 00:19:28.424993 systemd[1]: Reached target machines.target. Jul 12 00:19:28.426938 systemd[1]: Starting ldconfig.service... Jul 12 00:19:28.427831 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.428011 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:28.429200 systemd[1]: Starting systemd-boot-update.service... Jul 12 00:19:28.431113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 12 00:19:28.433304 systemd[1]: Starting systemd-machine-id-commit.service... Jul 12 00:19:28.435918 systemd[1]: Starting systemd-sysext.service... Jul 12 00:19:28.436898 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 12 00:19:28.440923 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 12 00:19:28.449304 systemd[1]: Unmounting usr-share-oem.mount... Jul 12 00:19:28.455799 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 12 00:19:28.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.458445 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 12 00:19:28.458642 systemd[1]: Unmounted usr-share-oem.mount. 
Jul 12 00:19:28.505890 kernel: loop0: detected capacity change from 0 to 207008 Jul 12 00:19:28.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.510717 systemd[1]: Finished systemd-machine-id-commit.service. Jul 12 00:19:28.520922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 00:19:28.530205 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Jul 12 00:19:28.530205 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Jul 12 00:19:28.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.532285 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 12 00:19:28.541922 kernel: loop1: detected capacity change from 0 to 207008 Jul 12 00:19:28.546282 (sd-sysext)[1085]: Using extensions 'kubernetes'. Jul 12 00:19:28.546615 (sd-sysext)[1085]: Merged extensions into '/usr'. Jul 12 00:19:28.566153 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.567634 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:19:28.569997 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:19:28.572322 systemd[1]: Starting modprobe@loop.service... Jul 12 00:19:28.573248 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.573412 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:28.574392 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:19:28.574524 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:19:28.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.575909 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:19:28.576024 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:19:28.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.577446 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:19:28.577589 systemd[1]: Finished modprobe@loop.service. Jul 12 00:19:28.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 12 00:19:28.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.579116 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:19:28.579229 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.625146 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 00:19:28.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.628293 systemd[1]: Finished ldconfig.service. Jul 12 00:19:28.777343 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 00:19:28.779378 systemd[1]: Mounting boot.mount... Jul 12 00:19:28.781457 systemd[1]: Mounting usr-share-oem.mount... Jul 12 00:19:28.787128 systemd[1]: Mounted boot.mount. Jul 12 00:19:28.788045 systemd[1]: Mounted usr-share-oem.mount. Jul 12 00:19:28.789932 systemd[1]: Finished systemd-sysext.service. Jul 12 00:19:28.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.792483 systemd[1]: Starting ensure-sysext.service... Jul 12 00:19:28.794122 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 12 00:19:28.795263 systemd[1]: Finished systemd-boot-update.service. Jul 12 00:19:28.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.798953 systemd[1]: Reloading. Jul 12 00:19:28.803108 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 12 00:19:28.803770 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 00:19:28.805091 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 00:19:28.845012 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-12T00:19:28Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:19:28.845042 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-12T00:19:28Z" level=info msg="torcx already run" Jul 12 00:19:28.893987 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:19:28.894007 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 12 00:19:28.909294 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:19:28.950000 audit: BPF prog-id=27 op=LOAD Jul 12 00:19:28.950000 audit: BPF prog-id=18 op=UNLOAD Jul 12 00:19:28.950000 audit: BPF prog-id=28 op=LOAD Jul 12 00:19:28.950000 audit: BPF prog-id=29 op=LOAD Jul 12 00:19:28.950000 audit: BPF prog-id=19 op=UNLOAD Jul 12 00:19:28.950000 audit: BPF prog-id=20 op=UNLOAD Jul 12 00:19:28.951000 audit: BPF prog-id=30 op=LOAD Jul 12 00:19:28.951000 audit: BPF prog-id=23 op=UNLOAD Jul 12 00:19:28.952000 audit: BPF prog-id=31 op=LOAD Jul 12 00:19:28.952000 audit: BPF prog-id=24 op=UNLOAD Jul 12 00:19:28.952000 audit: BPF prog-id=32 op=LOAD Jul 12 00:19:28.952000 audit: BPF prog-id=33 op=LOAD Jul 12 00:19:28.952000 audit: BPF prog-id=25 op=UNLOAD Jul 12 00:19:28.952000 audit: BPF prog-id=26 op=UNLOAD Jul 12 00:19:28.953000 audit: BPF prog-id=34 op=LOAD Jul 12 00:19:28.953000 audit: BPF prog-id=35 op=LOAD Jul 12 00:19:28.953000 audit: BPF prog-id=21 op=UNLOAD Jul 12 00:19:28.953000 audit: BPF prog-id=22 op=UNLOAD Jul 12 00:19:28.955941 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 12 00:19:28.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.960188 systemd[1]: Starting audit-rules.service... Jul 12 00:19:28.962027 systemd[1]: Starting clean-ca-certificates.service... Jul 12 00:19:28.963975 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 12 00:19:28.967000 audit: BPF prog-id=36 op=LOAD Jul 12 00:19:28.969642 systemd[1]: Starting systemd-resolved.service... Jul 12 00:19:28.970000 audit: BPF prog-id=37 op=LOAD Jul 12 00:19:28.972624 systemd[1]: Starting systemd-timesyncd.service... Jul 12 00:19:28.974357 systemd[1]: Starting systemd-update-utmp.service... Jul 12 00:19:28.978000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.981563 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.982979 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:19:28.985287 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:19:28.987063 systemd[1]: Starting modprobe@loop.service... Jul 12 00:19:28.987664 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.987792 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:28.988731 systemd[1]: Finished clean-ca-certificates.service. Jul 12 00:19:28.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.989966 systemd[1]: Finished systemd-journal-catalog-update.service. 
Jul 12 00:19:28.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.991160 systemd[1]: Finished systemd-update-utmp.service. Jul 12 00:19:28.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.992200 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:19:28.992325 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:19:28.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.993520 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:19:28.993637 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:19:28.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.994942 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:19:28.995058 systemd[1]: Finished modprobe@loop.service. Jul 12 00:19:28.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:28.998107 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:19:28.999553 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:19:29.002584 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:19:29.004639 systemd[1]: Starting modprobe@loop.service... Jul 12 00:19:29.005471 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.005606 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:29.007055 systemd[1]: Starting systemd-update-done.service... Jul 12 00:19:29.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:29.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.008072 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:19:29.009066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:19:29.009197 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:19:29.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.010505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:19:29.010629 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:19:29.012044 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:19:29.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.012168 systemd[1]: Finished modprobe@loop.service. Jul 12 00:19:29.013411 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:19:29.013495 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.016525 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.017828 systemd[1]: Starting modprobe@dm_mod.service... Jul 12 00:19:29.020069 systemd[1]: Starting modprobe@drm.service... Jul 12 00:19:29.022447 systemd[1]: Starting modprobe@efi_pstore.service... Jul 12 00:19:29.024531 systemd[1]: Starting modprobe@loop.service... Jul 12 00:19:29.025709 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.025898 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:29.027276 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 12 00:19:29.028523 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 00:19:29.029792 systemd[1]: Finished systemd-update-done.service. Jul 12 00:19:29.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 12 00:19:29.031351 systemd-resolved[1156]: Positive Trust Anchors: Jul 12 00:19:29.031426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 00:19:29.031557 systemd[1]: Finished modprobe@dm_mod.service. Jul 12 00:19:29.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.032841 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 00:19:29.033024 systemd[1]: Finished modprobe@drm.service. Jul 12 00:19:29.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 12 00:19:29.034223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 00:19:29.034331 systemd[1]: Finished modprobe@efi_pstore.service. Jul 12 00:19:29.034422 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 00:19:29.034450 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 12 00:19:29.035754 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 00:19:29.035991 systemd[1]: Finished modprobe@loop.service. Jul 12 00:19:29.036574 augenrules[1184]: No rules Jul 12 00:19:29.035000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 12 00:19:29.035000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffde17fa00 a2=420 a3=0 items=0 ppid=1152 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 12 00:19:29.035000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 12 00:19:29.037410 systemd[1]: Started systemd-timesyncd.service. 
Jul 12 00:19:29.038078 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 00:19:29.038129 systemd-timesyncd[1162]: Initial clock synchronization to Sat 2025-07-12 00:19:29.105699 UTC. Jul 12 00:19:29.039247 systemd[1]: Finished ensure-sysext.service. Jul 12 00:19:29.040474 systemd[1]: Finished audit-rules.service. Jul 12 00:19:29.042513 systemd[1]: Reached target time-set.target. Jul 12 00:19:29.043479 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 00:19:29.043531 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.050232 systemd-resolved[1156]: Defaulting to hostname 'linux'. Jul 12 00:19:29.051727 systemd[1]: Started systemd-resolved.service. Jul 12 00:19:29.052677 systemd[1]: Reached target network.target. Jul 12 00:19:29.053515 systemd[1]: Reached target nss-lookup.target. Jul 12 00:19:29.054391 systemd[1]: Reached target sysinit.target. Jul 12 00:19:29.055283 systemd[1]: Started motdgen.path. Jul 12 00:19:29.056023 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 12 00:19:29.057369 systemd[1]: Started logrotate.timer. Jul 12 00:19:29.058214 systemd[1]: Started mdadm.timer. Jul 12 00:19:29.058913 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 12 00:19:29.059762 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 00:19:29.059804 systemd[1]: Reached target paths.target. Jul 12 00:19:29.060657 systemd[1]: Reached target timers.target. Jul 12 00:19:29.061808 systemd[1]: Listening on dbus.socket. Jul 12 00:19:29.063801 systemd[1]: Starting docker.socket... Jul 12 00:19:29.067239 systemd[1]: Listening on sshd.socket. Jul 12 00:19:29.068135 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 12 00:19:29.068617 systemd[1]: Listening on docker.socket. Jul 12 00:19:29.069474 systemd[1]: Reached target sockets.target. Jul 12 00:19:29.070296 systemd[1]: Reached target basic.target. Jul 12 00:19:29.071161 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.071208 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 12 00:19:29.072194 systemd[1]: Starting containerd.service... Jul 12 00:19:29.073944 systemd[1]: Starting dbus.service... Jul 12 00:19:29.075631 systemd[1]: Starting enable-oem-cloudinit.service... Jul 12 00:19:29.077798 systemd[1]: Starting extend-filesystems.service... Jul 12 00:19:29.078821 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 12 00:19:29.080326 systemd[1]: Starting motdgen.service... Jul 12 00:19:29.082494 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 12 00:19:29.084995 systemd[1]: Starting sshd-keygen.service... Jul 12 00:19:29.085941 jq[1194]: false Jul 12 00:19:29.088989 systemd[1]: Starting systemd-logind.service... Jul 12 00:19:29.089931 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Jul 12 00:19:29.090146 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 00:19:29.090969 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 00:19:29.092693 systemd[1]: Starting update-engine.service... Jul 12 00:19:29.099438 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 12 00:19:29.099657 jq[1205]: true Jul 12 00:19:29.102674 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 00:19:29.102928 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 12 00:19:29.103449 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 00:19:29.103626 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 12 00:19:29.105929 extend-filesystems[1195]: Found loop1 Jul 12 00:19:29.105929 extend-filesystems[1195]: Found vda Jul 12 00:19:29.105929 extend-filesystems[1195]: Found vda1 Jul 12 00:19:29.105929 extend-filesystems[1195]: Found vda2 Jul 12 00:19:29.105929 extend-filesystems[1195]: Found vda3 Jul 12 00:19:29.105929 extend-filesystems[1195]: Found usr Jul 12 00:19:29.105929 extend-filesystems[1195]: Found vda4 Jul 12 00:19:29.120569 extend-filesystems[1195]: Found vda6 Jul 12 00:19:29.120569 extend-filesystems[1195]: Found vda7 Jul 12 00:19:29.120569 extend-filesystems[1195]: Found vda9 Jul 12 00:19:29.120569 extend-filesystems[1195]: Checking size of /dev/vda9 Jul 12 00:19:29.120141 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 00:19:29.123734 jq[1211]: true Jul 12 00:19:29.120338 systemd[1]: Finished motdgen.service. Jul 12 00:19:29.139591 extend-filesystems[1195]: Resized partition /dev/vda9 Jul 12 00:19:29.147341 extend-filesystems[1221]: resize2fs 1.46.5 (30-Dec-2021) Jul 12 00:19:29.160321 dbus-daemon[1193]: [system] SELinux support is enabled Jul 12 00:19:29.160488 systemd[1]: Started dbus.service. Jul 12 00:19:29.164987 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 00:19:29.165017 systemd[1]: Reached target system-config.target. Jul 12 00:19:29.165962 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 00:19:29.165988 systemd[1]: Reached target user-config.target. Jul 12 00:19:29.168882 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 00:19:29.191908 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 00:19:29.208116 update_engine[1204]: I0712 00:19:29.207813 1204 main.cc:92] Flatcar Update Engine starting Jul 12 00:19:29.209910 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 00:19:29.212775 systemd-logind[1203]: New seat seat0. Jul 12 00:19:29.215270 extend-filesystems[1221]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 00:19:29.215270 extend-filesystems[1221]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 00:19:29.215270 extend-filesystems[1221]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Jul 12 00:19:29.219252 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Jul 12 00:19:29.219349 update_engine[1204]: I0712 00:19:29.217983 1204 update_check_scheduler.cc:74] Next update check in 2m27s Jul 12 00:19:29.219384 extend-filesystems[1195]: Resized filesystem in /dev/vda9 Jul 12 00:19:29.215664 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 12 00:19:29.217588 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 00:19:29.221105 systemd[1]: Finished extend-filesystems.service. Jul 12 00:19:29.222191 systemd[1]: Started systemd-logind.service. Jul 12 00:19:29.223054 systemd[1]: Started update-engine.service. Jul 12 00:19:29.226199 systemd[1]: Started locksmithd.service. Jul 12 00:19:29.238601 env[1212]: time="2025-07-12T00:19:29.238541600Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 12 00:19:29.260758 env[1212]: time="2025-07-12T00:19:29.260707400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 12 00:19:29.260935 env[1212]: time="2025-07-12T00:19:29.260902440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262156 env[1212]: time="2025-07-12T00:19:29.262117040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262156 env[1212]: time="2025-07-12T00:19:29.262151800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262381 env[1212]: time="2025-07-12T00:19:29.262359560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262428 env[1212]: time="2025-07-12T00:19:29.262381280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262428 env[1212]: time="2025-07-12T00:19:29.262394920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 12 00:19:29.262428 env[1212]: time="2025-07-12T00:19:29.262404680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262485 env[1212]: time="2025-07-12T00:19:29.262476440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262855 env[1212]: time="2025-07-12T00:19:29.262820920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 12 00:19:29.262992 env[1212]: time="2025-07-12T00:19:29.262972200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 12 00:19:29.263032 env[1212]: time="2025-07-12T00:19:29.262992200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 12 00:19:29.263070 env[1212]: time="2025-07-12T00:19:29.263054120Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 12 00:19:29.263100 env[1212]: time="2025-07-12T00:19:29.263070760Z" level=info msg="metadata content store policy set" policy=shared Jul 12 00:19:29.266514 env[1212]: time="2025-07-12T00:19:29.266484840Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 12 00:19:29.266573 env[1212]: time="2025-07-12T00:19:29.266520240Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 12 00:19:29.266573 env[1212]: time="2025-07-12T00:19:29.266533520Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 12 00:19:29.266573 env[1212]: time="2025-07-12T00:19:29.266565640Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.266655 env[1212]: time="2025-07-12T00:19:29.266580040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.266655 env[1212]: time="2025-07-12T00:19:29.266594600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.266655 env[1212]: time="2025-07-12T00:19:29.266608960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.266992 env[1212]: time="2025-07-12T00:19:29.266973440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.267025 env[1212]: time="2025-07-12T00:19:29.266999120Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.267025 env[1212]: time="2025-07-12T00:19:29.267013640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.267073 env[1212]: time="2025-07-12T00:19:29.267026880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.267073 env[1212]: time="2025-07-12T00:19:29.267042040Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 12 00:19:29.267183 env[1212]: time="2025-07-12T00:19:29.267164560Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 12 00:19:29.267263 env[1212]: time="2025-07-12T00:19:29.267249760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 12 00:19:29.267494 env[1212]: time="2025-07-12T00:19:29.267478200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 12 00:19:29.267527 env[1212]: time="2025-07-12T00:19:29.267507280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267527 env[1212]: time="2025-07-12T00:19:29.267520360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 12 00:19:29.267634 env[1212]: time="2025-07-12T00:19:29.267622680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 12 00:19:29.267661 env[1212]: time="2025-07-12T00:19:29.267638000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267661 env[1212]: time="2025-07-12T00:19:29.267651040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267706 env[1212]: time="2025-07-12T00:19:29.267663600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267706 env[1212]: time="2025-07-12T00:19:29.267676640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267706 env[1212]: time="2025-07-12T00:19:29.267688840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267706 env[1212]: time="2025-07-12T00:19:29.267700000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267786 env[1212]: time="2025-07-12T00:19:29.267711720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267786 env[1212]: time="2025-07-12T00:19:29.267724840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 12 00:19:29.267921 env[1212]: time="2025-07-12T00:19:29.267842160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267921 env[1212]: time="2025-07-12T00:19:29.267891440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267921 env[1212]: time="2025-07-12T00:19:29.267905280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 12 00:19:29.267921 env[1212]: time="2025-07-12T00:19:29.267917960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 12 00:19:29.268024 env[1212]: time="2025-07-12T00:19:29.267934360Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 12 00:19:29.268024 env[1212]: time="2025-07-12T00:19:29.267945560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 12 00:19:29.268024 env[1212]: time="2025-07-12T00:19:29.267963360Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 12 00:19:29.268024 env[1212]: time="2025-07-12T00:19:29.267995680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 12 00:19:29.268237 env[1212]: time="2025-07-12T00:19:29.268187800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 12 00:19:29.268983 env[1212]: time="2025-07-12T00:19:29.268247840Z" level=info msg="Connect containerd service" Jul 12 00:19:29.268983 env[1212]: time="2025-07-12T00:19:29.268282160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 12 00:19:29.269095 env[1212]: time="2025-07-12T00:19:29.269068120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:19:29.269353 env[1212]: time="2025-07-12T00:19:29.269312560Z" level=info msg="Start subscribing containerd event" Jul 12 00:19:29.269392 env[1212]: time="2025-07-12T00:19:29.269374920Z" level=info msg="Start recovering state" Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269425680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269470920Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269478040Z" level=info msg="Start event monitor" Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269500880Z" level=info msg="Start snapshots syncer" Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269511680Z" level=info msg="Start cni network conf syncer for default" Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269517640Z" level=info msg="containerd successfully booted in 0.031981s" Jul 12 00:19:29.270546 env[1212]: time="2025-07-12T00:19:29.269521040Z" level=info msg="Start streaming server" Jul 12 00:19:29.269598 systemd[1]: Started containerd.service. Jul 12 00:19:29.280819 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 00:19:30.106313 systemd-networkd[1050]: eth0: Gained IPv6LL Jul 12 00:19:30.108577 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 12 00:19:30.109719 systemd[1]: Reached target network-online.target. Jul 12 00:19:30.112401 systemd[1]: Starting kubelet.service... Jul 12 00:19:30.730293 systemd[1]: Started kubelet.service. Jul 12 00:19:31.199459 kubelet[1257]: E0712 00:19:31.199344 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 00:19:31.201361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 00:19:31.201488 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 00:19:31.237133 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 00:19:31.256475 systemd[1]: Finished sshd-keygen.service. Jul 12 00:19:31.258800 systemd[1]: Starting issuegen.service... Jul 12 00:19:31.263908 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 00:19:31.264079 systemd[1]: Finished issuegen.service. Jul 12 00:19:31.266316 systemd[1]: Starting systemd-user-sessions.service... Jul 12 00:19:31.273138 systemd[1]: Finished systemd-user-sessions.service. Jul 12 00:19:31.275547 systemd[1]: Started getty@tty1.service. Jul 12 00:19:31.277664 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 12 00:19:31.278579 systemd[1]: Reached target getty.target. Jul 12 00:19:31.279269 systemd[1]: Reached target multi-user.target. Jul 12 00:19:31.281364 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 12 00:19:31.288527 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 12 00:19:31.288717 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 12 00:19:31.289687 systemd[1]: Startup finished in 576ms (kernel) + 4.229s (initrd) + 5.496s (userspace) = 10.302s. Jul 12 00:19:34.061293 systemd[1]: Created slice system-sshd.slice. Jul 12 00:19:34.062425 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:45674.service. Jul 12 00:19:34.113505 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 45674 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:19:34.115980 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.125755 systemd[1]: Created slice user-500.slice. Jul 12 00:19:34.126831 systemd[1]: Starting user-runtime-dir@500.service... Jul 12 00:19:34.128429 systemd-logind[1203]: New session 1 of user core. 
Jul 12 00:19:34.134817 systemd[1]: Finished user-runtime-dir@500.service. Jul 12 00:19:34.136077 systemd[1]: Starting user@500.service... Jul 12 00:19:34.138815 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.197202 systemd[1282]: Queued start job for default target default.target. Jul 12 00:19:34.197699 systemd[1282]: Reached target paths.target. Jul 12 00:19:34.197733 systemd[1282]: Reached target sockets.target. Jul 12 00:19:34.197744 systemd[1282]: Reached target timers.target. Jul 12 00:19:34.197754 systemd[1282]: Reached target basic.target. Jul 12 00:19:34.197796 systemd[1282]: Reached target default.target. Jul 12 00:19:34.197822 systemd[1282]: Startup finished in 53ms. Jul 12 00:19:34.197899 systemd[1]: Started user@500.service. Jul 12 00:19:34.198833 systemd[1]: Started session-1.scope. Jul 12 00:19:34.249996 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:45686.service. Jul 12 00:19:34.292476 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 45686 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:19:34.293794 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.297402 systemd-logind[1203]: New session 2 of user core. Jul 12 00:19:34.298615 systemd[1]: Started session-2.scope. Jul 12 00:19:34.352153 sshd[1291]: pam_unix(sshd:session): session closed for user core Jul 12 00:19:34.354887 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:45686.service: Deactivated successfully. Jul 12 00:19:34.355552 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 00:19:34.356043 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. Jul 12 00:19:34.357104 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:45700.service. Jul 12 00:19:34.357709 systemd-logind[1203]: Removed session 2. Jul 12 00:19:34.399788 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 45700 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:19:34.401489 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.404805 systemd-logind[1203]: New session 3 of user core. Jul 12 00:19:34.405655 systemd[1]: Started session-3.scope. Jul 12 00:19:34.455976 sshd[1297]: pam_unix(sshd:session): session closed for user core Jul 12 00:19:34.460569 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:45700.service: Deactivated successfully. Jul 12 00:19:34.461318 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 00:19:34.461927 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. Jul 12 00:19:34.463617 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:45708.service. Jul 12 00:19:34.464429 systemd-logind[1203]: Removed session 3. Jul 12 00:19:34.505324 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 45708 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:19:34.507089 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.510495 systemd-logind[1203]: New session 4 of user core. Jul 12 00:19:34.512006 systemd[1]: Started session-4.scope. Jul 12 00:19:34.568724 sshd[1303]: pam_unix(sshd:session): session closed for user core Jul 12 00:19:34.571996 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:45708.service: Deactivated successfully. Jul 12 00:19:34.572744 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 00:19:34.573427 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. 
Jul 12 00:19:34.575270 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:45710.service. Jul 12 00:19:34.576465 systemd-logind[1203]: Removed session 4. Jul 12 00:19:34.617248 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 45710 ssh2: RSA SHA256:lOTsI5S5omJPCdinbmTXhzZlC32lNQZJGtwxzlZSG1o Jul 12 00:19:34.618806 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 12 00:19:34.622291 systemd-logind[1203]: New session 5 of user core. Jul 12 00:19:34.623173 systemd[1]: Started session-5.scope. Jul 12 00:19:34.685678 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 00:19:34.685925 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 12 00:19:34.697667 systemd[1]: Starting coreos-metadata.service... Jul 12 00:19:34.704229 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 00:19:34.704391 systemd[1]: Finished coreos-metadata.service. Jul 12 00:19:35.166728 systemd[1]: Stopped kubelet.service. Jul 12 00:19:35.168761 systemd[1]: Starting kubelet.service... Jul 12 00:19:35.192385 systemd[1]: Reloading. Jul 12 00:19:35.242494 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-07-12T00:19:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 12 00:19:35.242530 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-07-12T00:19:35Z" level=info msg="torcx already run" Jul 12 00:19:35.332388 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 12 00:19:35.332408 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 12 00:19:35.347888 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:19:35.412363 systemd[1]: Started kubelet.service. Jul 12 00:19:35.414081 systemd[1]: Stopping kubelet.service... Jul 12 00:19:35.414312 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:19:35.414485 systemd[1]: Stopped kubelet.service. Jul 12 00:19:35.415923 systemd[1]: Starting kubelet.service... Jul 12 00:19:35.516248 systemd[1]: Started kubelet.service. Jul 12 00:19:35.557247 kubelet[1417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:19:35.557247 kubelet[1417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:19:35.557247 kubelet[1417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:19:35.557580 kubelet[1417]: I0712 00:19:35.557347 1417 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:19:36.535213 kubelet[1417]: I0712 00:19:36.535172 1417 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:19:36.535352 kubelet[1417]: I0712 00:19:36.535340 1417 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:19:36.535692 kubelet[1417]: I0712 00:19:36.535671 1417 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:19:36.587943 kubelet[1417]: I0712 00:19:36.587902 1417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:19:36.597584 kubelet[1417]: E0712 00:19:36.597535 1417 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:19:36.597584 kubelet[1417]: I0712 00:19:36.597576 1417 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:19:36.600355 kubelet[1417]: I0712 00:19:36.600327 1417 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 00:19:36.601701 kubelet[1417]: I0712 00:19:36.601655 1417 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:19:36.601891 kubelet[1417]: I0712 00:19:36.601703 1417 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:19:36.601972 kubelet[1417]: I0712 00:19:36.601966 1417 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 00:19:36.601999 kubelet[1417]: I0712 00:19:36.601977 1417 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 
00:19:36.602240 kubelet[1417]: I0712 00:19:36.602225 1417 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:19:36.605019 kubelet[1417]: I0712 00:19:36.604999 1417 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:19:36.605066 kubelet[1417]: I0712 00:19:36.605031 1417 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:19:36.605066 kubelet[1417]: I0712 00:19:36.605051 1417 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:19:36.605066 kubelet[1417]: I0712 00:19:36.605062 1417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:19:36.605240 kubelet[1417]: E0712 00:19:36.605211 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:36.605293 kubelet[1417]: E0712 00:19:36.605265 1417 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:36.616415 kubelet[1417]: I0712 00:19:36.616386 1417 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 12 00:19:36.617084 kubelet[1417]: I0712 00:19:36.617047 1417 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:19:36.617194 kubelet[1417]: W0712 00:19:36.617183 1417 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 12 00:19:36.618128 kubelet[1417]: I0712 00:19:36.618109 1417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:19:36.618185 kubelet[1417]: I0712 00:19:36.618147 1417 server.go:1287] "Started kubelet" Jul 12 00:19:36.618824 kubelet[1417]: I0712 00:19:36.618771 1417 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:19:36.619693 kubelet[1417]: I0712 00:19:36.619662 1417 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:19:36.629482 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 12 00:19:36.629700 kubelet[1417]: I0712 00:19:36.629673 1417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:19:36.634371 kubelet[1417]: I0712 00:19:36.634212 1417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:19:36.637293 kubelet[1417]: I0712 00:19:36.635447 1417 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:19:36.637293 kubelet[1417]: E0712 00:19:36.635946 1417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.35\" not found" Jul 12 00:19:36.637293 kubelet[1417]: I0712 00:19:36.636280 1417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:19:36.637293 kubelet[1417]: I0712 00:19:36.636571 1417 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:19:36.637293 kubelet[1417]: I0712 00:19:36.636635 1417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:19:36.637293 kubelet[1417]: I0712 00:19:36.636714 1417 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:19:36.638129 kubelet[1417]: I0712 00:19:36.638104 1417 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:19:36.638223 kubelet[1417]: I0712 00:19:36.638203 1417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:19:36.639288 kubelet[1417]: W0712 00:19:36.639251 1417 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 12 00:19:36.639367 kubelet[1417]: E0712 00:19:36.639293 1417 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jul 12 00:19:36.640068 kubelet[1417]: I0712 00:19:36.640042 1417 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:19:36.641200 kubelet[1417]: E0712 00:19:36.641177 1417 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 00:19:36.653572 kubelet[1417]: I0712 00:19:36.653539 1417 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:19:36.653572 kubelet[1417]: I0712 00:19:36.653560 1417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:19:36.653572 kubelet[1417]: I0712 00:19:36.653581 1417 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:19:36.726990 kubelet[1417]: I0712 00:19:36.726938 1417 policy_none.go:49] "None policy: Start" Jul 12 00:19:36.726990 kubelet[1417]: I0712 00:19:36.726975 1417 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:19:36.726990 kubelet[1417]: I0712 00:19:36.726989 1417 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:19:36.729791 kubelet[1417]: E0712 00:19:36.729742 1417 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.35\" not found" node="10.0.0.35" Jul 12 00:19:36.733856 systemd[1]: Created slice kubepods.slice. Jul 12 00:19:36.736076 kubelet[1417]: E0712 00:19:36.736053 1417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.35\" not found" Jul 12 00:19:36.739309 systemd[1]: Created slice kubepods-burstable.slice. Jul 12 00:19:36.741714 systemd[1]: Created slice kubepods-besteffort.slice. Jul 12 00:19:36.748670 kubelet[1417]: I0712 00:19:36.748647 1417 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:19:36.749569 kubelet[1417]: I0712 00:19:36.749549 1417 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:19:36.749724 kubelet[1417]: I0712 00:19:36.749686 1417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:19:36.750088 kubelet[1417]: I0712 00:19:36.750070 1417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:19:36.750708 kubelet[1417]: E0712 00:19:36.750692 1417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:19:36.750826 kubelet[1417]: E0712 00:19:36.750810 1417 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.35\" not found" Jul 12 00:19:36.801852 kubelet[1417]: I0712 00:19:36.801742 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:19:36.803496 kubelet[1417]: I0712 00:19:36.803455 1417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:19:36.803496 kubelet[1417]: I0712 00:19:36.803488 1417 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:19:36.803614 kubelet[1417]: I0712 00:19:36.803511 1417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:19:36.803614 kubelet[1417]: I0712 00:19:36.803518 1417 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:19:36.803614 kubelet[1417]: E0712 00:19:36.803566 1417 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 12 00:19:36.850847 kubelet[1417]: I0712 00:19:36.850804 1417 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.35" Jul 12 00:19:36.855407 kubelet[1417]: I0712 00:19:36.855376 1417 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.35" Jul 12 00:19:36.855449 kubelet[1417]: E0712 00:19:36.855409 1417 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.35\": node \"10.0.0.35\" not found" Jul 12 00:19:36.871803 kubelet[1417]: I0712 00:19:36.871776 1417 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 12 00:19:36.872308 env[1212]: time="2025-07-12T00:19:36.872244635Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 00:19:36.872569 kubelet[1417]: I0712 00:19:36.872530 1417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 12 00:19:37.045408 sudo[1312]: pam_unix(sudo:session): session closed for user root Jul 12 00:19:37.047120 sshd[1309]: pam_unix(sshd:session): session closed for user core Jul 12 00:19:37.049478 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:45710.service: Deactivated successfully. Jul 12 00:19:37.050195 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 00:19:37.050664 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. Jul 12 00:19:37.051291 systemd-logind[1203]: Removed session 5. Jul 12 00:19:37.537542 kubelet[1417]: I0712 00:19:37.537507 1417 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 12 00:19:37.537805 kubelet[1417]: W0712 00:19:37.537785 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:19:37.537845 kubelet[1417]: W0712 00:19:37.537798 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:19:37.537918 kubelet[1417]: W0712 00:19:37.537849 1417 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 12 00:19:37.605434 kubelet[1417]: E0712 00:19:37.605395 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:37.618858 kubelet[1417]: I0712 00:19:37.618827 1417 apiserver.go:52] "Watching apiserver" Jul 12 00:19:37.634118 systemd[1]: Created slice kubepods-besteffort-pode45a544c_2871_471d_a4cf_2971f36d72e4.slice. 
Jul 12 00:19:37.637629 kubelet[1417]: I0712 00:19:37.637603 1417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:19:37.641113 kubelet[1417]: I0712 00:19:37.641080 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e45a544c-2871-471d-a4cf-2971f36d72e4-xtables-lock\") pod \"kube-proxy-dpj4z\" (UID: \"e45a544c-2871-471d-a4cf-2971f36d72e4\") " pod="kube-system/kube-proxy-dpj4z" Jul 12 00:19:37.641183 kubelet[1417]: I0712 00:19:37.641121 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cni-path\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641183 kubelet[1417]: I0712 00:19:37.641152 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxk92\" (UniqueName: \"kubernetes.io/projected/e45a544c-2871-471d-a4cf-2971f36d72e4-kube-api-access-mxk92\") pod \"kube-proxy-dpj4z\" (UID: \"e45a544c-2871-471d-a4cf-2971f36d72e4\") " pod="kube-system/kube-proxy-dpj4z" Jul 12 00:19:37.641183 kubelet[1417]: I0712 00:19:37.641177 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-hostproc\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641257 kubelet[1417]: I0712 00:19:37.641195 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-kernel\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641257 kubelet[1417]: I0712 00:19:37.641210 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-hubble-tls\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641257 kubelet[1417]: I0712 00:19:37.641226 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e45a544c-2871-471d-a4cf-2971f36d72e4-lib-modules\") pod \"kube-proxy-dpj4z\" (UID: \"e45a544c-2871-471d-a4cf-2971f36d72e4\") " pod="kube-system/kube-proxy-dpj4z" Jul 12 00:19:37.641257 kubelet[1417]: I0712 00:19:37.641240 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-run\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641257 kubelet[1417]: I0712 00:19:37.641255 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-net\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 
00:19:37.641375 kubelet[1417]: I0712 00:19:37.641270 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89hl\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-kube-api-access-g89hl\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641375 kubelet[1417]: I0712 00:19:37.641286 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be18bc05-b71c-463d-a155-81e02674c93a-clustermesh-secrets\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641375 kubelet[1417]: I0712 00:19:37.641322 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be18bc05-b71c-463d-a155-81e02674c93a-cilium-config-path\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641375 kubelet[1417]: I0712 00:19:37.641355 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e45a544c-2871-471d-a4cf-2971f36d72e4-kube-proxy\") pod \"kube-proxy-dpj4z\" (UID: \"e45a544c-2871-471d-a4cf-2971f36d72e4\") " pod="kube-system/kube-proxy-dpj4z" Jul 12 00:19:37.641504 kubelet[1417]: I0712 00:19:37.641372 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-bpf-maps\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641504 kubelet[1417]: I0712 00:19:37.641396 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-cgroup\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641504 kubelet[1417]: I0712 00:19:37.641411 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-etc-cni-netd\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641504 kubelet[1417]: I0712 00:19:37.641424 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-lib-modules\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.641504 kubelet[1417]: I0712 00:19:37.641438 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-xtables-lock\") pod \"cilium-sxg72\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " pod="kube-system/cilium-sxg72" Jul 12 00:19:37.644912 systemd[1]: Created slice kubepods-burstable-podbe18bc05_b71c_463d_a155_81e02674c93a.slice. 
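Each VerifyControllerAttachedVolume entry carries a UniqueName that packs together the volume plugin, the owning pod's UID, and the volume name. A sketch that splits those parts, assuming the <plugin>/<pod-uid>-<volume-name> layout and the 36-character UUIDs seen in these entries:

def split_unique_name(unique: str):
    """Split 'kubernetes.io/<plugin>/<pod-uid>-<volume>' as it appears in the log above.

    Assumes the pod UID is a 36-character UUID, which holds for these entries.
    """
    plugin, _, rest = unique.rpartition("/")   # plugin path vs. '<uid>-<volume>' tail
    pod_uid, volume = rest[:36], rest[37:]     # 36-char UUID, then '-', then the name
    return plugin, pod_uid, volume

u = ("kubernetes.io/host-path/"
     "be18bc05-b71c-463d-a155-81e02674c93a-cni-path")
print(split_unique_name(u))
# ('kubernetes.io/host-path', 'be18bc05-b71c-463d-a155-81e02674c93a', 'cni-path')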
Jul 12 00:19:37.743270 kubelet[1417]: I0712 00:19:37.743191 1417 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 12 00:19:37.944530 kubelet[1417]: E0712 00:19:37.943793 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:37.944708 env[1212]: time="2025-07-12T00:19:37.944648361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpj4z,Uid:e45a544c-2871-471d-a4cf-2971f36d72e4,Namespace:kube-system,Attempt:0,}" Jul 12 00:19:37.953519 kubelet[1417]: E0712 00:19:37.953489 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:37.954203 env[1212]: time="2025-07-12T00:19:37.954133784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sxg72,Uid:be18bc05-b71c-463d-a155-81e02674c93a,Namespace:kube-system,Attempt:0,}" Jul 12 00:19:38.514900 env[1212]: time="2025-07-12T00:19:38.514840428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.516045 env[1212]: time="2025-07-12T00:19:38.516019247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.519151 env[1212]: time="2025-07-12T00:19:38.519108084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.520885 env[1212]: time="2025-07-12T00:19:38.520848050Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.522610 env[1212]: time="2025-07-12T00:19:38.522561298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.523496 env[1212]: time="2025-07-12T00:19:38.523465921Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.526083 env[1212]: time="2025-07-12T00:19:38.526039504Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.530718 env[1212]: time="2025-07-12T00:19:38.530675107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:38.563615 env[1212]: time="2025-07-12T00:19:38.563542098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:19:38.563615 env[1212]: time="2025-07-12T00:19:38.563584541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:19:38.563784 env[1212]: time="2025-07-12T00:19:38.563595332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:19:38.563834 env[1212]: time="2025-07-12T00:19:38.563590237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:19:38.563834 env[1212]: time="2025-07-12T00:19:38.563655787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:19:38.563834 env[1212]: time="2025-07-12T00:19:38.563668825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:19:38.564248 env[1212]: time="2025-07-12T00:19:38.564158886Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b pid=1482 runtime=io.containerd.runc.v2 Jul 12 00:19:38.564360 env[1212]: time="2025-07-12T00:19:38.564154193Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8749387ea05eec3685af5fffabc1d1f4181b35cace92551e6fe13af614e63b14 pid=1483 runtime=io.containerd.runc.v2 Jul 12 00:19:38.606040 kubelet[1417]: E0712 00:19:38.606004 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:38.630731 systemd[1]: Started cri-containerd-8749387ea05eec3685af5fffabc1d1f4181b35cace92551e6fe13af614e63b14.scope. Jul 12 00:19:38.640376 systemd[1]: Started cri-containerd-0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b.scope. Jul 12 00:19:38.722960 env[1212]: time="2025-07-12T00:19:38.722911611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sxg72,Uid:be18bc05-b71c-463d-a155-81e02674c93a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\"" Jul 12 00:19:38.724485 kubelet[1417]: E0712 00:19:38.723958 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:38.725645 env[1212]: time="2025-07-12T00:19:38.725609635Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:19:38.735461 env[1212]: time="2025-07-12T00:19:38.735419282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dpj4z,Uid:e45a544c-2871-471d-a4cf-2971f36d72e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8749387ea05eec3685af5fffabc1d1f4181b35cace92551e6fe13af614e63b14\"" Jul 12 00:19:38.736901 kubelet[1417]: E0712 00:19:38.736794 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:38.749288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3588537102.mount: Deactivated successfully. 
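The env[1212] records are emitted by containerd and its runc shims as logfmt-style key=value pairs with double-quoted values. A sketch that splits one record into a dict, assuming shell-style quoting, which holds for the entries above:

import shlex

def parse_logfmt(record: str) -> dict:
    """Turn 'time="..." level=info msg="..."' into a dict of fields."""
    fields = {}
    for token in shlex.split(record):          # shlex honours the double quotes
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

rec = ('time="2025-07-12T00:19:38.564158886Z" level=info '
       'msg="starting signal loop" namespace=k8s.io pid=1482 '
       'runtime=io.containerd.runc.v2')
print(parse_logfmt(rec)["msg"])   # starting signal loop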
Jul 12 00:19:39.607398 kubelet[1417]: E0712 00:19:39.607334 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:40.607944 kubelet[1417]: E0712 00:19:40.607863 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:41.608332 kubelet[1417]: E0712 00:19:41.608280 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:42.608530 kubelet[1417]: E0712 00:19:42.608485 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:42.859002 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount548817822.mount: Deactivated successfully. Jul 12 00:19:43.609126 kubelet[1417]: E0712 00:19:43.609074 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:44.609698 kubelet[1417]: E0712 00:19:44.609648 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:45.111145 env[1212]: time="2025-07-12T00:19:45.111106600Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:45.112339 env[1212]: time="2025-07-12T00:19:45.112311415Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:45.114635 env[1212]: time="2025-07-12T00:19:45.114608155Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:45.115802 env[1212]: time="2025-07-12T00:19:45.115773244Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:19:45.117449 env[1212]: time="2025-07-12T00:19:45.117424808Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 12 00:19:45.118717 env[1212]: time="2025-07-12T00:19:45.118686768Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:19:45.137953 env[1212]: time="2025-07-12T00:19:45.137832529Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\"" Jul 12 00:19:45.138731 env[1212]: time="2025-07-12T00:19:45.138702121Z" level=info msg="StartContainer for \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\"" Jul 12 00:19:45.155941 systemd[1]: Started cri-containerd-0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913.scope. 
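The repeated "Nameserver limits exceeded" errors mean the host resolv.conf lists more than the three nameservers a pod's resolver can use, so the kubelet keeps only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8 here) and omits the rest. A sketch that applies the same cut, assuming a standard resolv.conf layout:

MAX_NAMESERVERS = 3   # conventional resolver limit applied per pod

def applied_nameservers(path="/etc/resolv.conf"):
    """Return (applied, omitted) nameserver lists, mirroring the warning above."""
    servers = []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if parts and parts[0] == "nameserver":
                servers.append(parts[1])
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

applied, omitted = applied_nameservers()
if omitted:
    print("Nameserver limits exceeded, applied:", " ".join(applied))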
Jul 12 00:19:45.200330 env[1212]: time="2025-07-12T00:19:45.200283932Z" level=info msg="StartContainer for \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\" returns successfully" Jul 12 00:19:45.276968 systemd[1]: cri-containerd-0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913.scope: Deactivated successfully. Jul 12 00:19:45.378124 env[1212]: time="2025-07-12T00:19:45.377906721Z" level=info msg="shim disconnected" id=0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913 Jul 12 00:19:45.378328 env[1212]: time="2025-07-12T00:19:45.378308900Z" level=warning msg="cleaning up after shim disconnected" id=0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913 namespace=k8s.io Jul 12 00:19:45.378385 env[1212]: time="2025-07-12T00:19:45.378372212Z" level=info msg="cleaning up dead shim" Jul 12 00:19:45.386658 env[1212]: time="2025-07-12T00:19:45.386620702Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:19:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1600 runtime=io.containerd.runc.v2\n" Jul 12 00:19:45.610647 kubelet[1417]: E0712 00:19:45.610593 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:45.820655 kubelet[1417]: E0712 00:19:45.819861 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:45.821934 env[1212]: time="2025-07-12T00:19:45.821888167Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:19:45.858033 env[1212]: time="2025-07-12T00:19:45.857976696Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\"" Jul 12 00:19:45.860606 env[1212]: time="2025-07-12T00:19:45.860562967Z" level=info msg="StartContainer for \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\"" Jul 12 00:19:45.878755 systemd[1]: Started cri-containerd-0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583.scope. Jul 12 00:19:45.934541 env[1212]: time="2025-07-12T00:19:45.932726770Z" level=info msg="StartContainer for \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\" returns successfully" Jul 12 00:19:45.963285 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:19:45.963494 systemd[1]: Stopped systemd-sysctl.service. Jul 12 00:19:45.967563 systemd[1]: Stopping systemd-sysctl.service... Jul 12 00:19:45.969073 systemd[1]: Starting systemd-sysctl.service... Jul 12 00:19:45.969310 systemd[1]: cri-containerd-0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583.scope: Deactivated successfully. Jul 12 00:19:45.979067 systemd[1]: Finished systemd-sysctl.service. 
Jul 12 00:19:45.995706 env[1212]: time="2025-07-12T00:19:45.995660083Z" level=info msg="shim disconnected" id=0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583 Jul 12 00:19:45.995933 env[1212]: time="2025-07-12T00:19:45.995911050Z" level=warning msg="cleaning up after shim disconnected" id=0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583 namespace=k8s.io Jul 12 00:19:45.996003 env[1212]: time="2025-07-12T00:19:45.995988097Z" level=info msg="cleaning up dead shim" Jul 12 00:19:46.002380 env[1212]: time="2025-07-12T00:19:46.002344139Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:19:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1668 runtime=io.containerd.runc.v2\n" Jul 12 00:19:46.133331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913-rootfs.mount: Deactivated successfully. Jul 12 00:19:46.257303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398290145.mount: Deactivated successfully. Jul 12 00:19:46.611711 kubelet[1417]: E0712 00:19:46.611600 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:46.732654 env[1212]: time="2025-07-12T00:19:46.732607821Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:46.734073 env[1212]: time="2025-07-12T00:19:46.734046377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:46.735692 env[1212]: time="2025-07-12T00:19:46.735654262Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:46.737245 env[1212]: time="2025-07-12T00:19:46.737217062Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:46.737564 env[1212]: time="2025-07-12T00:19:46.737542827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 12 00:19:46.739691 env[1212]: time="2025-07-12T00:19:46.739649571Z" level=info msg="CreateContainer within sandbox \"8749387ea05eec3685af5fffabc1d1f4181b35cace92551e6fe13af614e63b14\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:19:46.757621 env[1212]: time="2025-07-12T00:19:46.757580391Z" level=info msg="CreateContainer within sandbox \"8749387ea05eec3685af5fffabc1d1f4181b35cace92551e6fe13af614e63b14\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d43408cdd9dcc0bc16713f122bddedb862d13bab6e8db9abed24c3c06e1350f2\"" Jul 12 00:19:46.758134 env[1212]: time="2025-07-12T00:19:46.758104194Z" level=info msg="StartContainer for \"d43408cdd9dcc0bc16713f122bddedb862d13bab6e8db9abed24c3c06e1350f2\"" Jul 12 00:19:46.773533 systemd[1]: Started cri-containerd-d43408cdd9dcc0bc16713f122bddedb862d13bab6e8db9abed24c3c06e1350f2.scope. 
Jul 12 00:19:46.813835 env[1212]: time="2025-07-12T00:19:46.813777214Z" level=info msg="StartContainer for \"d43408cdd9dcc0bc16713f122bddedb862d13bab6e8db9abed24c3c06e1350f2\" returns successfully" Jul 12 00:19:46.822467 kubelet[1417]: E0712 00:19:46.822437 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:46.824673 kubelet[1417]: E0712 00:19:46.824645 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:46.825419 env[1212]: time="2025-07-12T00:19:46.825379797Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:19:46.840856 env[1212]: time="2025-07-12T00:19:46.840794706Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\"" Jul 12 00:19:46.841813 env[1212]: time="2025-07-12T00:19:46.841768919Z" level=info msg="StartContainer for \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\"" Jul 12 00:19:46.858641 systemd[1]: Started cri-containerd-a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf.scope. Jul 12 00:19:46.917663 env[1212]: time="2025-07-12T00:19:46.917550534Z" level=info msg="StartContainer for \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\" returns successfully" Jul 12 00:19:46.920425 systemd[1]: cri-containerd-a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf.scope: Deactivated successfully. Jul 12 00:19:47.082991 env[1212]: time="2025-07-12T00:19:47.082932687Z" level=info msg="shim disconnected" id=a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf Jul 12 00:19:47.082991 env[1212]: time="2025-07-12T00:19:47.082983011Z" level=warning msg="cleaning up after shim disconnected" id=a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf namespace=k8s.io Jul 12 00:19:47.082991 env[1212]: time="2025-07-12T00:19:47.082992699Z" level=info msg="cleaning up dead shim" Jul 12 00:19:47.093409 env[1212]: time="2025-07-12T00:19:47.093346785Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:19:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1798 runtime=io.containerd.runc.v2\n" Jul 12 00:19:47.133419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2705077726.mount: Deactivated successfully. 
Jul 12 00:19:47.611924 kubelet[1417]: E0712 00:19:47.611890 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:47.827990 kubelet[1417]: E0712 00:19:47.827961 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:47.828185 kubelet[1417]: E0712 00:19:47.828019 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:47.829924 env[1212]: time="2025-07-12T00:19:47.829862358Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:19:47.843516 kubelet[1417]: I0712 00:19:47.843460 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dpj4z" podStartSLOduration=3.842426919 podStartE2EDuration="11.843442142s" podCreationTimestamp="2025-07-12 00:19:36 +0000 UTC" firstStartedPulling="2025-07-12 00:19:38.737405 +0000 UTC m=+3.215895023" lastFinishedPulling="2025-07-12 00:19:46.738420223 +0000 UTC m=+11.216910246" observedRunningTime="2025-07-12 00:19:46.846605548 +0000 UTC m=+11.325095571" watchObservedRunningTime="2025-07-12 00:19:47.843442142 +0000 UTC m=+12.321932165" Jul 12 00:19:47.845703 env[1212]: time="2025-07-12T00:19:47.845638821Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\"" Jul 12 00:19:47.846321 env[1212]: time="2025-07-12T00:19:47.846292953Z" level=info msg="StartContainer for \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\"" Jul 12 00:19:47.861925 systemd[1]: Started cri-containerd-e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1.scope. Jul 12 00:19:47.900925 systemd[1]: cri-containerd-e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1.scope: Deactivated successfully. 
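The pod_startup_latency_tracker entry for kube-proxy-dpj4z reports two durations: the end-to-end figure is observedRunningTime minus podCreationTimestamp, and the SLO figure additionally excludes the image pull window between firstStartedPulling and lastFinishedPulling; that relationship is consistent with the numbers logged here. A short check of the arithmetic, with the timestamps copied from the entry above (all four fall inside 00:19:xx, so plain seconds are enough):

# Seconds within the minute, taken from the kube-proxy-dpj4z entry above.
created  = 36.0            # podCreationTimestamp  00:19:36
pull_beg = 38.737405       # firstStartedPulling   00:19:38.737405
pull_end = 46.738420223    # lastFinishedPulling   00:19:46.738420223
running  = 47.843442142    # observedRunningTime   00:19:47.843442142

e2e     = running - created        # ~11.843442142  (podStartE2EDuration)
pulling = pull_end - pull_beg      # ~8.001015223   (image pull window)
slo     = e2e - pulling            # ~3.842426919   (podStartSLOduration)
print(f"{e2e:.9f} {pulling:.9f} {slo:.9f}")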
Jul 12 00:19:47.901698 env[1212]: time="2025-07-12T00:19:47.901613884Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe18bc05_b71c_463d_a155_81e02674c93a.slice/cri-containerd-e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1.scope/memory.events\": no such file or directory" Jul 12 00:19:47.904025 env[1212]: time="2025-07-12T00:19:47.903978590Z" level=info msg="StartContainer for \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\" returns successfully" Jul 12 00:19:47.925718 env[1212]: time="2025-07-12T00:19:47.925674184Z" level=info msg="shim disconnected" id=e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1 Jul 12 00:19:47.925974 env[1212]: time="2025-07-12T00:19:47.925952507Z" level=warning msg="cleaning up after shim disconnected" id=e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1 namespace=k8s.io Jul 12 00:19:47.926056 env[1212]: time="2025-07-12T00:19:47.926041545Z" level=info msg="cleaning up dead shim" Jul 12 00:19:47.932511 env[1212]: time="2025-07-12T00:19:47.932472723Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:19:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1952 runtime=io.containerd.runc.v2\n" Jul 12 00:19:48.132776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1-rootfs.mount: Deactivated successfully. Jul 12 00:19:48.612296 kubelet[1417]: E0712 00:19:48.612156 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:48.833085 kubelet[1417]: E0712 00:19:48.833053 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:48.835187 env[1212]: time="2025-07-12T00:19:48.835142810Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:19:48.849165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1847789474.mount: Deactivated successfully. Jul 12 00:19:48.852495 env[1212]: time="2025-07-12T00:19:48.852429105Z" level=info msg="CreateContainer within sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\"" Jul 12 00:19:48.853281 env[1212]: time="2025-07-12T00:19:48.853247731Z" level=info msg="StartContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\"" Jul 12 00:19:48.868651 systemd[1]: Started cri-containerd-e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90.scope. Jul 12 00:19:48.910218 env[1212]: time="2025-07-12T00:19:48.910153517Z" level=info msg="StartContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" returns successfully" Jul 12 00:19:48.987328 kubelet[1417]: I0712 00:19:48.987288 1417 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:19:49.178903 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
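The inotify warning refers to memory.events, the flat "name count" file every cgroup v2 directory exposes (low, high, max, oom, oom_kill); the watch fails because the container scope has already been torn down. A sketch that reads the file while it still exists, assuming cgroup v2 mounted at /sys/fs/cgroup:

from pathlib import Path

def read_memory_events(cgroup: str) -> dict:
    """Parse the 'name count' lines of a cgroup v2 memory.events file, if present."""
    path = Path("/sys/fs/cgroup") / cgroup / "memory.events"
    if not path.exists():            # the scope may already be gone, as in the log
        return {}
    return {key: int(value) for key, value in
            (line.split() for line in path.read_text().splitlines())}

print(read_memory_events("kubepods.slice/kubepods-burstable.slice"))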
Jul 12 00:19:49.439901 kernel: Initializing XFRM netlink socket Jul 12 00:19:49.441894 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 12 00:19:49.612973 kubelet[1417]: E0712 00:19:49.612921 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:49.837777 kubelet[1417]: E0712 00:19:49.837688 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:49.854225 kubelet[1417]: I0712 00:19:49.854144 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sxg72" podStartSLOduration=7.462004487 podStartE2EDuration="13.85412608s" podCreationTimestamp="2025-07-12 00:19:36 +0000 UTC" firstStartedPulling="2025-07-12 00:19:38.725092575 +0000 UTC m=+3.203582598" lastFinishedPulling="2025-07-12 00:19:45.117214168 +0000 UTC m=+9.595704191" observedRunningTime="2025-07-12 00:19:49.854082331 +0000 UTC m=+14.332572354" watchObservedRunningTime="2025-07-12 00:19:49.85412608 +0000 UTC m=+14.332616103" Jul 12 00:19:50.613659 kubelet[1417]: E0712 00:19:50.613603 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:50.839246 kubelet[1417]: E0712 00:19:50.839206 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:51.054944 systemd-networkd[1050]: cilium_host: Link UP Jul 12 00:19:51.057221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 12 00:19:51.057320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 12 00:19:51.055732 systemd-networkd[1050]: cilium_net: Link UP Jul 12 00:19:51.056547 systemd-networkd[1050]: cilium_net: Gained carrier Jul 12 00:19:51.059018 systemd-networkd[1050]: cilium_host: Gained carrier Jul 12 00:19:51.141443 systemd-networkd[1050]: cilium_vxlan: Link UP Jul 12 00:19:51.141451 systemd-networkd[1050]: cilium_vxlan: Gained carrier Jul 12 00:19:51.282028 systemd-networkd[1050]: cilium_host: Gained IPv6LL Jul 12 00:19:51.477924 kernel: NET: Registered PF_ALG protocol family Jul 12 00:19:51.613936 kubelet[1417]: E0712 00:19:51.613850 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:51.802025 systemd-networkd[1050]: cilium_net: Gained IPv6LL Jul 12 00:19:51.840783 kubelet[1417]: E0712 00:19:51.840744 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:52.066506 systemd-networkd[1050]: lxc_health: Link UP Jul 12 00:19:52.080030 systemd-networkd[1050]: lxc_health: Gained carrier Jul 12 00:19:52.081005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:19:52.442007 systemd-networkd[1050]: cilium_vxlan: Gained IPv6LL Jul 12 00:19:52.615078 kubelet[1417]: E0712 00:19:52.615030 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:52.957216 systemd[1]: Created slice kubepods-besteffort-pod50ee22a7_65a8_4090_bdc9_a6040315dcac.slice. 
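The cilium_host, cilium_net, cilium_vxlan and lxc_health messages ("Link UP", "Gained carrier", "Gained IPv6LL") describe interface state that can also be read back from sysfs once the links exist. A sketch that reports operstate and carrier for the interfaces named above, assuming the usual Linux /sys/class/net layout:

from pathlib import Path

def link_state(ifname: str) -> str:
    """Read operstate and carrier for one interface from sysfs, if it exists."""
    base = Path("/sys/class/net") / ifname
    if not base.exists():
        return f"{ifname}: absent"
    operstate = (base / "operstate").read_text().strip()
    try:
        carrier = (base / "carrier").read_text().strip()
    except OSError:                  # carrier is unreadable while the link is down
        carrier = "?"
    return f"{ifname}: operstate={operstate} carrier={carrier}"

for name in ("cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"):
    print(link_state(name))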
Jul 12 00:19:53.035304 kubelet[1417]: I0712 00:19:53.035226 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmclf\" (UniqueName: \"kubernetes.io/projected/50ee22a7-65a8-4090-bdc9-a6040315dcac-kube-api-access-zmclf\") pod \"nginx-deployment-7fcdb87857-2ljwq\" (UID: \"50ee22a7-65a8-4090-bdc9-a6040315dcac\") " pod="default/nginx-deployment-7fcdb87857-2ljwq" Jul 12 00:19:53.259666 env[1212]: time="2025-07-12T00:19:53.259530824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2ljwq,Uid:50ee22a7-65a8-4090-bdc9-a6040315dcac,Namespace:default,Attempt:0,}" Jul 12 00:19:53.295661 systemd-networkd[1050]: lxcd43cc593facd: Link UP Jul 12 00:19:53.304903 kernel: eth0: renamed from tmp2fea1 Jul 12 00:19:53.312479 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:19:53.312564 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd43cc593facd: link becomes ready Jul 12 00:19:53.312525 systemd-networkd[1050]: lxcd43cc593facd: Gained carrier Jul 12 00:19:53.465984 systemd-networkd[1050]: lxc_health: Gained IPv6LL Jul 12 00:19:53.616244 kubelet[1417]: E0712 00:19:53.616134 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:53.756661 kubelet[1417]: E0712 00:19:53.756626 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:19:54.616750 kubelet[1417]: E0712 00:19:54.616696 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:55.002005 systemd-networkd[1050]: lxcd43cc593facd: Gained IPv6LL Jul 12 00:19:55.617321 kubelet[1417]: E0712 00:19:55.617272 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:56.570543 env[1212]: time="2025-07-12T00:19:56.570475527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:19:56.570543 env[1212]: time="2025-07-12T00:19:56.570515457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:19:56.570892 env[1212]: time="2025-07-12T00:19:56.570525860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:19:56.570892 env[1212]: time="2025-07-12T00:19:56.570662656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809 pid=2487 runtime=io.containerd.runc.v2 Jul 12 00:19:56.585547 systemd[1]: run-containerd-runc-k8s.io-2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809-runc.KNmXst.mount: Deactivated successfully. Jul 12 00:19:56.587972 systemd[1]: Started cri-containerd-2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809.scope. 
Jul 12 00:19:56.607421 kubelet[1417]: E0712 00:19:56.607370 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:56.618279 kubelet[1417]: E0712 00:19:56.618245 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:56.650383 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:19:56.667317 env[1212]: time="2025-07-12T00:19:56.667269447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2ljwq,Uid:50ee22a7-65a8-4090-bdc9-a6040315dcac,Namespace:default,Attempt:0,} returns sandbox id \"2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809\"" Jul 12 00:19:56.668726 env[1212]: time="2025-07-12T00:19:56.668692541Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 12 00:19:57.618396 kubelet[1417]: E0712 00:19:57.618354 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:58.565966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500591398.mount: Deactivated successfully. Jul 12 00:19:58.619020 kubelet[1417]: E0712 00:19:58.618970 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:59.619575 kubelet[1417]: E0712 00:19:59.619518 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:19:59.965765 env[1212]: time="2025-07-12T00:19:59.965718811Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:59.967274 env[1212]: time="2025-07-12T00:19:59.967234718Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:59.968840 env[1212]: time="2025-07-12T00:19:59.968810355Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:59.971069 env[1212]: time="2025-07-12T00:19:59.971036347Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:19:59.971666 env[1212]: time="2025-07-12T00:19:59.971626051Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 12 00:19:59.974118 env[1212]: time="2025-07-12T00:19:59.974074682Z" level=info msg="CreateContainer within sandbox \"2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 12 00:19:59.982715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983098038.mount: Deactivated successfully. 
Jul 12 00:19:59.985577 env[1212]: time="2025-07-12T00:19:59.985541141Z" level=info msg="CreateContainer within sandbox \"2fea1e0d33549aad2a2d0668fab26a0c8855774700818fc0305804fc4e2ef809\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3f776993aca22a407acd293f05569b2dea31a925936c821bb8c67a61fa4a87ab\"" Jul 12 00:19:59.986124 env[1212]: time="2025-07-12T00:19:59.986097279Z" level=info msg="StartContainer for \"3f776993aca22a407acd293f05569b2dea31a925936c821bb8c67a61fa4a87ab\"" Jul 12 00:20:00.004575 systemd[1]: Started cri-containerd-3f776993aca22a407acd293f05569b2dea31a925936c821bb8c67a61fa4a87ab.scope. Jul 12 00:20:00.041715 env[1212]: time="2025-07-12T00:20:00.041664806Z" level=info msg="StartContainer for \"3f776993aca22a407acd293f05569b2dea31a925936c821bb8c67a61fa4a87ab\" returns successfully" Jul 12 00:20:00.619981 kubelet[1417]: E0712 00:20:00.619934 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:00.870502 kubelet[1417]: I0712 00:20:00.870238 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2ljwq" podStartSLOduration=5.565488122 podStartE2EDuration="8.870223873s" podCreationTimestamp="2025-07-12 00:19:52 +0000 UTC" firstStartedPulling="2025-07-12 00:19:56.66819377 +0000 UTC m=+21.146683793" lastFinishedPulling="2025-07-12 00:19:59.972929521 +0000 UTC m=+24.451419544" observedRunningTime="2025-07-12 00:20:00.869911184 +0000 UTC m=+25.348401207" watchObservedRunningTime="2025-07-12 00:20:00.870223873 +0000 UTC m=+25.348713896" Jul 12 00:20:00.980987 systemd[1]: run-containerd-runc-k8s.io-3f776993aca22a407acd293f05569b2dea31a925936c821bb8c67a61fa4a87ab-runc.GNDKYC.mount: Deactivated successfully. Jul 12 00:20:01.620968 kubelet[1417]: E0712 00:20:01.620928 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:02.621648 kubelet[1417]: E0712 00:20:02.621596 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:03.622449 kubelet[1417]: E0712 00:20:03.622388 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:03.697442 kubelet[1417]: E0712 00:20:03.697410 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:03.864012 kubelet[1417]: E0712 00:20:03.863978 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:04.417331 systemd[1]: Created slice kubepods-besteffort-pod096137be_4748_4365_b8ac_4ea7e849f64d.slice. 
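The "Created slice" messages follow the systemd cgroup driver's naming scheme, kubepods-<qos>-pod<uid>.slice, with the dashes in the pod UID rewritten to underscores because '-' is the slice hierarchy separator. A sketch that rebuilds the name for the nfs-server-provisioner pod created above, covering only the two QoS classes that appear in this log:

def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    """Slice name for a pod under the systemd cgroup driver, per the log above.

    Dashes in the UID become underscores; only the QoS classes seen here are handled.
    """
    assert qos_class in ("besteffort", "burstable")
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("096137be-4748-4365-b8ac-4ea7e849f64d", "besteffort"))
# kubepods-besteffort-pod096137be_4748_4365_b8ac_4ea7e849f64d.slice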
Jul 12 00:20:04.498283 kubelet[1417]: I0712 00:20:04.498235 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7w7\" (UniqueName: \"kubernetes.io/projected/096137be-4748-4365-b8ac-4ea7e849f64d-kube-api-access-7j7w7\") pod \"nfs-server-provisioner-0\" (UID: \"096137be-4748-4365-b8ac-4ea7e849f64d\") " pod="default/nfs-server-provisioner-0" Jul 12 00:20:04.498283 kubelet[1417]: I0712 00:20:04.498284 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/096137be-4748-4365-b8ac-4ea7e849f64d-data\") pod \"nfs-server-provisioner-0\" (UID: \"096137be-4748-4365-b8ac-4ea7e849f64d\") " pod="default/nfs-server-provisioner-0" Jul 12 00:20:04.623337 kubelet[1417]: E0712 00:20:04.623294 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:04.720794 env[1212]: time="2025-07-12T00:20:04.720742006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:096137be-4748-4365-b8ac-4ea7e849f64d,Namespace:default,Attempt:0,}" Jul 12 00:20:04.751687 systemd-networkd[1050]: lxcba4208196895: Link UP Jul 12 00:20:04.761950 kernel: eth0: renamed from tmp3824e Jul 12 00:20:04.770979 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:20:04.771095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba4208196895: link becomes ready Jul 12 00:20:04.771246 systemd-networkd[1050]: lxcba4208196895: Gained carrier Jul 12 00:20:04.921548 env[1212]: time="2025-07-12T00:20:04.921467276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:04.921548 env[1212]: time="2025-07-12T00:20:04.921510200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:04.921754 env[1212]: time="2025-07-12T00:20:04.921521161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:04.921809 env[1212]: time="2025-07-12T00:20:04.921782545Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3824efc2481760ed092859aef5dd16723692ef6d8bb1bee25f17c99e2e476dc1 pid=2617 runtime=io.containerd.runc.v2 Jul 12 00:20:04.935483 systemd[1]: Started cri-containerd-3824efc2481760ed092859aef5dd16723692ef6d8bb1bee25f17c99e2e476dc1.scope. 
Jul 12 00:20:04.956983 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:20:04.973546 env[1212]: time="2025-07-12T00:20:04.973178223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:096137be-4748-4365-b8ac-4ea7e849f64d,Namespace:default,Attempt:0,} returns sandbox id \"3824efc2481760ed092859aef5dd16723692ef6d8bb1bee25f17c99e2e476dc1\"" Jul 12 00:20:04.975207 env[1212]: time="2025-07-12T00:20:04.975165364Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 12 00:20:05.623742 kubelet[1417]: E0712 00:20:05.623671 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:05.882002 systemd-networkd[1050]: lxcba4208196895: Gained IPv6LL Jul 12 00:20:06.624105 kubelet[1417]: E0712 00:20:06.624034 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:07.376447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3548067190.mount: Deactivated successfully. Jul 12 00:20:07.625224 kubelet[1417]: E0712 00:20:07.625175 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:08.625338 kubelet[1417]: E0712 00:20:08.625278 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:09.158653 env[1212]: time="2025-07-12T00:20:09.158586013Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:09.160382 env[1212]: time="2025-07-12T00:20:09.160345414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:09.162558 env[1212]: time="2025-07-12T00:20:09.162530405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:09.166830 env[1212]: time="2025-07-12T00:20:09.166795660Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:09.168340 env[1212]: time="2025-07-12T00:20:09.168303764Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 12 00:20:09.170224 env[1212]: time="2025-07-12T00:20:09.170185454Z" level=info msg="CreateContainer within sandbox \"3824efc2481760ed092859aef5dd16723692ef6d8bb1bee25f17c99e2e476dc1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 12 00:20:09.181944 env[1212]: time="2025-07-12T00:20:09.181902823Z" level=info msg="CreateContainer within sandbox \"3824efc2481760ed092859aef5dd16723692ef6d8bb1bee25f17c99e2e476dc1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a55b6363552d289b4b5e3c9e307360f0524067789955e7ca1f3b57356e5ab7f1\"" Jul 12 00:20:09.182493 env[1212]: 
time="2025-07-12T00:20:09.182466061Z" level=info msg="StartContainer for \"a55b6363552d289b4b5e3c9e307360f0524067789955e7ca1f3b57356e5ab7f1\"" Jul 12 00:20:09.205204 systemd[1]: Started cri-containerd-a55b6363552d289b4b5e3c9e307360f0524067789955e7ca1f3b57356e5ab7f1.scope. Jul 12 00:20:09.281475 env[1212]: time="2025-07-12T00:20:09.281292285Z" level=info msg="StartContainer for \"a55b6363552d289b4b5e3c9e307360f0524067789955e7ca1f3b57356e5ab7f1\" returns successfully" Jul 12 00:20:09.626310 kubelet[1417]: E0712 00:20:09.626260 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:09.886769 kubelet[1417]: I0712 00:20:09.886433 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.692413278 podStartE2EDuration="5.886411384s" podCreationTimestamp="2025-07-12 00:20:04 +0000 UTC" firstStartedPulling="2025-07-12 00:20:04.974898779 +0000 UTC m=+29.453388802" lastFinishedPulling="2025-07-12 00:20:09.168896885 +0000 UTC m=+33.647386908" observedRunningTime="2025-07-12 00:20:09.885157498 +0000 UTC m=+34.363647521" watchObservedRunningTime="2025-07-12 00:20:09.886411384 +0000 UTC m=+34.364901407" Jul 12 00:20:10.627259 kubelet[1417]: E0712 00:20:10.627201 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:11.628398 kubelet[1417]: E0712 00:20:11.628333 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:12.628971 kubelet[1417]: E0712 00:20:12.628898 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:13.629039 kubelet[1417]: E0712 00:20:13.628976 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:13.992057 update_engine[1204]: I0712 00:20:13.991994 1204 update_attempter.cc:509] Updating boot flags... Jul 12 00:20:14.629708 kubelet[1417]: E0712 00:20:14.629645 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:15.630229 kubelet[1417]: E0712 00:20:15.630175 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:16.605419 kubelet[1417]: E0712 00:20:16.605363 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:16.630979 kubelet[1417]: E0712 00:20:16.630935 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:17.631667 kubelet[1417]: E0712 00:20:17.631600 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:18.632166 kubelet[1417]: E0712 00:20:18.632128 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:19.242109 systemd[1]: Created slice kubepods-besteffort-podd4a2ba88_f173_450b_aa42_31f0409fee83.slice. 
Jul 12 00:20:19.286935 kubelet[1417]: I0712 00:20:19.286886 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wn6r\" (UniqueName: \"kubernetes.io/projected/d4a2ba88-f173-450b-aa42-31f0409fee83-kube-api-access-6wn6r\") pod \"test-pod-1\" (UID: \"d4a2ba88-f173-450b-aa42-31f0409fee83\") " pod="default/test-pod-1" Jul 12 00:20:19.286935 kubelet[1417]: I0712 00:20:19.286930 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-386c9b60-599e-4587-a91f-a22cda6a60e0\" (UniqueName: \"kubernetes.io/nfs/d4a2ba88-f173-450b-aa42-31f0409fee83-pvc-386c9b60-599e-4587-a91f-a22cda6a60e0\") pod \"test-pod-1\" (UID: \"d4a2ba88-f173-450b-aa42-31f0409fee83\") " pod="default/test-pod-1" Jul 12 00:20:19.434899 kernel: FS-Cache: Loaded Jul 12 00:20:19.462933 kernel: RPC: Registered named UNIX socket transport module. Jul 12 00:20:19.463048 kernel: RPC: Registered udp transport module. Jul 12 00:20:19.463078 kernel: RPC: Registered tcp transport module. Jul 12 00:20:19.463909 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 12 00:20:19.514909 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 12 00:20:19.632972 kubelet[1417]: E0712 00:20:19.632913 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:19.659984 kernel: NFS: Registering the id_resolver key type Jul 12 00:20:19.660046 kernel: Key type id_resolver registered Jul 12 00:20:19.660063 kernel: Key type id_legacy registered Jul 12 00:20:19.721353 nfsidmap[2756]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 12 00:20:19.725483 nfsidmap[2759]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 12 00:20:19.844885 env[1212]: time="2025-07-12T00:20:19.844828207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d4a2ba88-f173-450b-aa42-31f0409fee83,Namespace:default,Attempt:0,}" Jul 12 00:20:19.886948 systemd-networkd[1050]: lxc9d98d1de53ef: Link UP Jul 12 00:20:19.898892 kernel: eth0: renamed from tmp17d5f Jul 12 00:20:19.909935 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 12 00:20:19.910044 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc9d98d1de53ef: link becomes ready Jul 12 00:20:19.910060 systemd-networkd[1050]: lxc9d98d1de53ef: Gained carrier Jul 12 00:20:20.093123 env[1212]: time="2025-07-12T00:20:20.091143923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:20.093123 env[1212]: time="2025-07-12T00:20:20.093084680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:20.093377 env[1212]: time="2025-07-12T00:20:20.093096561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:20.093467 env[1212]: time="2025-07-12T00:20:20.093432854Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17d5ffabdf2665d98cc09acfb9787e7bf4428a73a42c404c3b07d2ffd6c25371 pid=2794 runtime=io.containerd.runc.v2 Jul 12 00:20:20.105703 systemd[1]: Started cri-containerd-17d5ffabdf2665d98cc09acfb9787e7bf4428a73a42c404c3b07d2ffd6c25371.scope. 
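The nfsidmap warnings arise because NFSv4 identities have the form user@domain and only translate when the domain part matches the node's idmapping domain; here 'nfs-server-provisioner.default.svc.cluster.local' does not match 'localdomain', so the name cannot be mapped to a local account. A sketch of that comparison, assuming the usual INI-style /etc/idmapd.conf and falling back to 'localdomain' when no Domain is configured:

import configparser

def local_idmap_domain(path="/etc/idmapd.conf") -> str:
    """Best-effort read of the NFSv4 idmapping domain from idmapd.conf."""
    cfg = configparser.ConfigParser()
    cfg.read(path)                                     # missing file is tolerated
    return cfg.get("General", "Domain", fallback="localdomain")

def maps_into_domain(principal: str, domain: str) -> bool:
    """True when the domain part of user@domain matches the local idmap domain."""
    _user, _, principal_domain = principal.partition("@")
    return principal_domain.lower() == domain.lower()

p = "root@nfs-server-provisioner.default.svc.cluster.local"
print(maps_into_domain(p, local_idmap_domain()))   # False against 'localdomain'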
Jul 12 00:20:20.158930 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 00:20:20.180754 env[1212]: time="2025-07-12T00:20:20.180705240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d4a2ba88-f173-450b-aa42-31f0409fee83,Namespace:default,Attempt:0,} returns sandbox id \"17d5ffabdf2665d98cc09acfb9787e7bf4428a73a42c404c3b07d2ffd6c25371\"" Jul 12 00:20:20.182240 env[1212]: time="2025-07-12T00:20:20.182207699Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 12 00:20:20.455841 env[1212]: time="2025-07-12T00:20:20.455797523Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:20.457536 env[1212]: time="2025-07-12T00:20:20.457503791Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:20.459491 env[1212]: time="2025-07-12T00:20:20.459450869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:20.461209 env[1212]: time="2025-07-12T00:20:20.461160696Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:20.462743 env[1212]: time="2025-07-12T00:20:20.462708838Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 12 00:20:20.465213 env[1212]: time="2025-07-12T00:20:20.465174456Z" level=info msg="CreateContainer within sandbox \"17d5ffabdf2665d98cc09acfb9787e7bf4428a73a42c404c3b07d2ffd6c25371\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 12 00:20:20.475905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575938146.mount: Deactivated successfully. Jul 12 00:20:20.481089 env[1212]: time="2025-07-12T00:20:20.481025485Z" level=info msg="CreateContainer within sandbox \"17d5ffabdf2665d98cc09acfb9787e7bf4428a73a42c404c3b07d2ffd6c25371\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9df8e9ef46aa74b6a83d2ba4d6108cc4d48ada885945b64ea0dd9afdd3815e12\"" Jul 12 00:20:20.481681 env[1212]: time="2025-07-12T00:20:20.481628309Z" level=info msg="StartContainer for \"9df8e9ef46aa74b6a83d2ba4d6108cc4d48ada885945b64ea0dd9afdd3815e12\"" Jul 12 00:20:20.500904 systemd[1]: Started cri-containerd-9df8e9ef46aa74b6a83d2ba4d6108cc4d48ada885945b64ea0dd9afdd3815e12.scope. Jul 12 00:20:20.575750 env[1212]: time="2025-07-12T00:20:20.575692484Z" level=info msg="StartContainer for \"9df8e9ef46aa74b6a83d2ba4d6108cc4d48ada885945b64ea0dd9afdd3815e12\" returns successfully" Jul 12 00:20:20.634007 kubelet[1417]: E0712 00:20:20.633955 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:21.401288 systemd[1]: run-containerd-runc-k8s.io-9df8e9ef46aa74b6a83d2ba4d6108cc4d48ada885945b64ea0dd9afdd3815e12-runc.9rS4IM.mount: Deactivated successfully. 
Jul 12 00:20:21.626084 systemd-networkd[1050]: lxc9d98d1de53ef: Gained IPv6LL Jul 12 00:20:21.634544 kubelet[1417]: E0712 00:20:21.634512 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:22.635538 kubelet[1417]: E0712 00:20:22.635485 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:23.636085 kubelet[1417]: E0712 00:20:23.636030 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:24.636454 kubelet[1417]: E0712 00:20:24.636422 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:25.637770 kubelet[1417]: E0712 00:20:25.637731 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:26.639200 kubelet[1417]: E0712 00:20:26.639144 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:27.640060 kubelet[1417]: E0712 00:20:27.640008 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:28.629296 kubelet[1417]: I0712 00:20:28.629221 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=24.347547434 podStartE2EDuration="24.629203938s" podCreationTimestamp="2025-07-12 00:20:04 +0000 UTC" firstStartedPulling="2025-07-12 00:20:20.181842045 +0000 UTC m=+44.660332068" lastFinishedPulling="2025-07-12 00:20:20.463498549 +0000 UTC m=+44.941988572" observedRunningTime="2025-07-12 00:20:20.904682668 +0000 UTC m=+45.383172691" watchObservedRunningTime="2025-07-12 00:20:28.629203938 +0000 UTC m=+53.107693921" Jul 12 00:20:28.640750 kubelet[1417]: E0712 00:20:28.640699 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:28.685660 env[1212]: time="2025-07-12T00:20:28.685594252Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 00:20:28.691138 env[1212]: time="2025-07-12T00:20:28.691097167Z" level=info msg="StopContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" with timeout 2 (s)" Jul 12 00:20:28.691594 env[1212]: time="2025-07-12T00:20:28.691559220Z" level=info msg="Stop container \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" with signal terminated" Jul 12 00:20:28.697169 systemd-networkd[1050]: lxc_health: Link DOWN Jul 12 00:20:28.697177 systemd-networkd[1050]: lxc_health: Lost carrier Jul 12 00:20:28.737305 systemd[1]: cri-containerd-e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90.scope: Deactivated successfully. Jul 12 00:20:28.737649 systemd[1]: cri-containerd-e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90.scope: Consumed 6.538s CPU time. Jul 12 00:20:28.753345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90-rootfs.mount: Deactivated successfully. 
Jul 12 00:20:28.764710 env[1212]: time="2025-07-12T00:20:28.764661086Z" level=info msg="shim disconnected" id=e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90 Jul 12 00:20:28.764710 env[1212]: time="2025-07-12T00:20:28.764712208Z" level=warning msg="cleaning up after shim disconnected" id=e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90 namespace=k8s.io Jul 12 00:20:28.764930 env[1212]: time="2025-07-12T00:20:28.764722888Z" level=info msg="cleaning up dead shim" Jul 12 00:20:28.770931 env[1212]: time="2025-07-12T00:20:28.770874582Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" Jul 12 00:20:28.773773 env[1212]: time="2025-07-12T00:20:28.773731183Z" level=info msg="StopContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" returns successfully" Jul 12 00:20:28.774469 env[1212]: time="2025-07-12T00:20:28.774433683Z" level=info msg="StopPodSandbox for \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\"" Jul 12 00:20:28.774523 env[1212]: time="2025-07-12T00:20:28.774492964Z" level=info msg="Container to stop \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:20:28.774523 env[1212]: time="2025-07-12T00:20:28.774516085Z" level=info msg="Container to stop \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:20:28.774575 env[1212]: time="2025-07-12T00:20:28.774528485Z" level=info msg="Container to stop \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:20:28.774575 env[1212]: time="2025-07-12T00:20:28.774540166Z" level=info msg="Container to stop \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:20:28.774575 env[1212]: time="2025-07-12T00:20:28.774551606Z" level=info msg="Container to stop \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 00:20:28.777480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b-shm.mount: Deactivated successfully. Jul 12 00:20:28.782604 systemd[1]: cri-containerd-0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b.scope: Deactivated successfully. Jul 12 00:20:28.801185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b-rootfs.mount: Deactivated successfully. 
Jul 12 00:20:28.805655 env[1212]: time="2025-07-12T00:20:28.805597004Z" level=info msg="shim disconnected" id=0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b Jul 12 00:20:28.805655 env[1212]: time="2025-07-12T00:20:28.805647325Z" level=warning msg="cleaning up after shim disconnected" id=0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b namespace=k8s.io Jul 12 00:20:28.805655 env[1212]: time="2025-07-12T00:20:28.805657365Z" level=info msg="cleaning up dead shim" Jul 12 00:20:28.812776 env[1212]: time="2025-07-12T00:20:28.812733845Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2959 runtime=io.containerd.runc.v2\n" Jul 12 00:20:28.813090 env[1212]: time="2025-07-12T00:20:28.813067815Z" level=info msg="TearDown network for sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" successfully" Jul 12 00:20:28.813133 env[1212]: time="2025-07-12T00:20:28.813093175Z" level=info msg="StopPodSandbox for \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" returns successfully" Jul 12 00:20:28.846773 kubelet[1417]: I0712 00:20:28.846705 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-kernel\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.846773 kubelet[1417]: I0712 00:20:28.846746 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.846773 kubelet[1417]: I0712 00:20:28.846788 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be18bc05-b71c-463d-a155-81e02674c93a-cilium-config-path\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846812 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-hostproc\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846832 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g89hl\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-kube-api-access-g89hl\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846861 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be18bc05-b71c-463d-a155-81e02674c93a-clustermesh-secrets\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846883 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-hostproc" (OuterVolumeSpecName: "hostproc") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846891 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-cgroup\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847030 kubelet[1417]: I0712 00:20:28.846923 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cni-path\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846932 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846940 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-run\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846956 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-net\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846971 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-xtables-lock\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846985 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-bpf-maps\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847171 kubelet[1417]: I0712 00:20:28.846999 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-etc-cni-netd\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847018 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-hubble-tls\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847033 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-lib-modules\") pod \"be18bc05-b71c-463d-a155-81e02674c93a\" (UID: \"be18bc05-b71c-463d-a155-81e02674c93a\") " Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847062 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-kernel\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847072 1417 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-hostproc\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847080 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-cgroup\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.847306 kubelet[1417]: I0712 00:20:28.847098 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod 
"be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.848823 kubelet[1417]: I0712 00:20:28.847532 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.848823 kubelet[1417]: I0712 00:20:28.847572 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cni-path" (OuterVolumeSpecName: "cni-path") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.848823 kubelet[1417]: I0712 00:20:28.847589 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.848823 kubelet[1417]: I0712 00:20:28.847602 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.848823 kubelet[1417]: I0712 00:20:28.847618 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.849058 kubelet[1417]: I0712 00:20:28.847630 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:28.849058 kubelet[1417]: I0712 00:20:28.848946 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be18bc05-b71c-463d-a155-81e02674c93a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:20:28.853632 kubelet[1417]: I0712 00:20:28.851420 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be18bc05-b71c-463d-a155-81e02674c93a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:20:28.854298 kubelet[1417]: I0712 00:20:28.854257 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-kube-api-access-g89hl" (OuterVolumeSpecName: "kube-api-access-g89hl") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "kube-api-access-g89hl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:20:28.854674 kubelet[1417]: I0712 00:20:28.854647 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "be18bc05-b71c-463d-a155-81e02674c93a" (UID: "be18bc05-b71c-463d-a155-81e02674c93a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:20:28.854816 systemd[1]: var-lib-kubelet-pods-be18bc05\x2db71c\x2d463d\x2da155\x2d81e02674c93a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:20:28.912304 kubelet[1417]: I0712 00:20:28.910820 1417 scope.go:117] "RemoveContainer" containerID="e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90" Jul 12 00:20:28.912441 env[1212]: time="2025-07-12T00:20:28.911943290Z" level=info msg="RemoveContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\"" Jul 12 00:20:28.916314 systemd[1]: Removed slice kubepods-burstable-podbe18bc05_b71c_463d_a155_81e02674c93a.slice. Jul 12 00:20:28.916393 systemd[1]: kubepods-burstable-podbe18bc05_b71c_463d_a155_81e02674c93a.slice: Consumed 6.875s CPU time. 
Jul 12 00:20:28.917728 env[1212]: time="2025-07-12T00:20:28.917684772Z" level=info msg="RemoveContainer for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" returns successfully" Jul 12 00:20:28.917976 kubelet[1417]: I0712 00:20:28.917955 1417 scope.go:117] "RemoveContainer" containerID="e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1" Jul 12 00:20:28.919097 env[1212]: time="2025-07-12T00:20:28.919071611Z" level=info msg="RemoveContainer for \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\"" Jul 12 00:20:28.921530 env[1212]: time="2025-07-12T00:20:28.921482199Z" level=info msg="RemoveContainer for \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\" returns successfully" Jul 12 00:20:28.921710 kubelet[1417]: I0712 00:20:28.921673 1417 scope.go:117] "RemoveContainer" containerID="a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf" Jul 12 00:20:28.922889 env[1212]: time="2025-07-12T00:20:28.922690393Z" level=info msg="RemoveContainer for \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\"" Jul 12 00:20:28.924846 env[1212]: time="2025-07-12T00:20:28.924804573Z" level=info msg="RemoveContainer for \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\" returns successfully" Jul 12 00:20:28.925136 kubelet[1417]: I0712 00:20:28.925110 1417 scope.go:117] "RemoveContainer" containerID="0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583" Jul 12 00:20:28.926153 env[1212]: time="2025-07-12T00:20:28.926123890Z" level=info msg="RemoveContainer for \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\"" Jul 12 00:20:28.928363 env[1212]: time="2025-07-12T00:20:28.928313232Z" level=info msg="RemoveContainer for \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\" returns successfully" Jul 12 00:20:28.928570 kubelet[1417]: I0712 00:20:28.928548 1417 scope.go:117] "RemoveContainer" containerID="0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913" Jul 12 00:20:28.929402 env[1212]: time="2025-07-12T00:20:28.929377382Z" level=info msg="RemoveContainer for \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\"" Jul 12 00:20:28.931389 env[1212]: time="2025-07-12T00:20:28.931356998Z" level=info msg="RemoveContainer for \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\" returns successfully" Jul 12 00:20:28.931645 kubelet[1417]: I0712 00:20:28.931622 1417 scope.go:117] "RemoveContainer" containerID="e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90" Jul 12 00:20:28.931890 env[1212]: time="2025-07-12T00:20:28.931803051Z" level=error msg="ContainerStatus for \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\": not found" Jul 12 00:20:28.932004 kubelet[1417]: E0712 00:20:28.931987 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\": not found" containerID="e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90" Jul 12 00:20:28.932105 kubelet[1417]: I0712 00:20:28.932013 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90"} err="failed to 
get container status \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4a77e3251cf15dfe99af02c6c1dca166ff25eca90f3dadd999cbe3ff22faa90\": not found" Jul 12 00:20:28.932145 kubelet[1417]: I0712 00:20:28.932106 1417 scope.go:117] "RemoveContainer" containerID="e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1" Jul 12 00:20:28.932333 env[1212]: time="2025-07-12T00:20:28.932282985Z" level=error msg="ContainerStatus for \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\": not found" Jul 12 00:20:28.932508 kubelet[1417]: E0712 00:20:28.932485 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\": not found" containerID="e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1" Jul 12 00:20:28.932561 kubelet[1417]: I0712 00:20:28.932511 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1"} err="failed to get container status \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8a62ef47f3426b24b645115061c2594de2c75f335e6d0b00a4a479f383073b1\": not found" Jul 12 00:20:28.932561 kubelet[1417]: I0712 00:20:28.932526 1417 scope.go:117] "RemoveContainer" containerID="a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf" Jul 12 00:20:28.932769 env[1212]: time="2025-07-12T00:20:28.932723717Z" level=error msg="ContainerStatus for \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\": not found" Jul 12 00:20:28.932946 kubelet[1417]: E0712 00:20:28.932926 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\": not found" containerID="a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf" Jul 12 00:20:28.932992 kubelet[1417]: I0712 00:20:28.932948 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf"} err="failed to get container status \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\": rpc error: code = NotFound desc = an error occurred when try to find container \"a09d4b072e2a7b3f7f6bbd94fbda8f136b646b18c83c625acdcb0a9104548edf\": not found" Jul 12 00:20:28.932992 kubelet[1417]: I0712 00:20:28.932966 1417 scope.go:117] "RemoveContainer" containerID="0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583" Jul 12 00:20:28.933204 env[1212]: time="2025-07-12T00:20:28.933161169Z" level=error msg="ContainerStatus for \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\": not found" Jul 12 00:20:28.933380 kubelet[1417]: E0712 00:20:28.933361 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\": not found" containerID="0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583" Jul 12 00:20:28.933428 kubelet[1417]: I0712 00:20:28.933385 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583"} err="failed to get container status \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\": rpc error: code = NotFound desc = an error occurred when try to find container \"0bb78f198d46020d1d28386f758869f732aedeb9587c8c97699e2abfa0dc5583\": not found" Jul 12 00:20:28.933428 kubelet[1417]: I0712 00:20:28.933400 1417 scope.go:117] "RemoveContainer" containerID="0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913" Jul 12 00:20:28.933635 env[1212]: time="2025-07-12T00:20:28.933591942Z" level=error msg="ContainerStatus for \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\": not found" Jul 12 00:20:28.933789 kubelet[1417]: E0712 00:20:28.933770 1417 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\": not found" containerID="0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913" Jul 12 00:20:28.933847 kubelet[1417]: I0712 00:20:28.933789 1417 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913"} err="failed to get container status \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\": rpc error: code = NotFound desc = an error occurred when try to find container \"0531f24562841c721f6c102fae40a4bf1d21f604c15540b31cc26db38819b913\": not found" Jul 12 00:20:28.947363 kubelet[1417]: I0712 00:20:28.947319 1417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g89hl\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-kube-api-access-g89hl\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947363 kubelet[1417]: I0712 00:20:28.947353 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/be18bc05-b71c-463d-a155-81e02674c93a-cilium-config-path\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947363 kubelet[1417]: I0712 00:20:28.947364 1417 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cni-path\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947363 kubelet[1417]: I0712 00:20:28.947372 1417 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/be18bc05-b71c-463d-a155-81e02674c93a-clustermesh-secrets\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947381 1417 reconciler_common.go:299] "Volume detached for 
volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-bpf-maps\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947399 1417 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-etc-cni-netd\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947414 1417 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/be18bc05-b71c-463d-a155-81e02674c93a-hubble-tls\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947422 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-cilium-run\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947429 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-host-proc-sys-net\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947437 1417 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-xtables-lock\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:28.947550 kubelet[1417]: I0712 00:20:28.947444 1417 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be18bc05-b71c-463d-a155-81e02674c93a-lib-modules\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:29.641166 kubelet[1417]: E0712 00:20:29.641116 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:29.642074 systemd[1]: var-lib-kubelet-pods-be18bc05\x2db71c\x2d463d\x2da155\x2d81e02674c93a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg89hl.mount: Deactivated successfully. Jul 12 00:20:29.642173 systemd[1]: var-lib-kubelet-pods-be18bc05\x2db71c\x2d463d\x2da155\x2d81e02674c93a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:20:30.641834 kubelet[1417]: E0712 00:20:30.641789 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:30.806431 kubelet[1417]: I0712 00:20:30.806370 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be18bc05-b71c-463d-a155-81e02674c93a" path="/var/lib/kubelet/pods/be18bc05-b71c-463d-a155-81e02674c93a/volumes" Jul 12 00:20:31.642976 kubelet[1417]: E0712 00:20:31.642927 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:31.760472 kubelet[1417]: E0712 00:20:31.760423 1417 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:20:31.795747 kubelet[1417]: I0712 00:20:31.795686 1417 memory_manager.go:355] "RemoveStaleState removing state" podUID="be18bc05-b71c-463d-a155-81e02674c93a" containerName="cilium-agent" Jul 12 00:20:31.802485 systemd[1]: Created slice kubepods-burstable-podd47b098e_a481_44ae_b891_97c65b0e5e38.slice. 
Jul 12 00:20:31.819426 systemd[1]: Created slice kubepods-besteffort-pod857499ce_dd0a_4244_b244_fbf236c6d49a.slice. Jul 12 00:20:31.869079 kubelet[1417]: I0712 00:20:31.869039 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-lib-modules\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869236 kubelet[1417]: I0712 00:20:31.869101 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-clustermesh-secrets\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869236 kubelet[1417]: I0712 00:20:31.869119 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-kernel\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869236 kubelet[1417]: I0712 00:20:31.869136 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/857499ce-dd0a-4244-b244-fbf236c6d49a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-nmw8n\" (UID: \"857499ce-dd0a-4244-b244-fbf236c6d49a\") " pod="kube-system/cilium-operator-6c4d7847fc-nmw8n" Jul 12 00:20:31.869236 kubelet[1417]: I0712 00:20:31.869156 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hr6w\" (UniqueName: \"kubernetes.io/projected/857499ce-dd0a-4244-b244-fbf236c6d49a-kube-api-access-7hr6w\") pod \"cilium-operator-6c4d7847fc-nmw8n\" (UID: \"857499ce-dd0a-4244-b244-fbf236c6d49a\") " pod="kube-system/cilium-operator-6c4d7847fc-nmw8n" Jul 12 00:20:31.869236 kubelet[1417]: I0712 00:20:31.869174 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cni-path\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869191 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-hubble-tls\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869208 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcnh6\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-kube-api-access-xcnh6\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869224 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-hostproc\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") 
" pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869241 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-cgroup\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869295 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-etc-cni-netd\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869358 kubelet[1417]: I0712 00:20:31.869331 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-xtables-lock\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869490 kubelet[1417]: I0712 00:20:31.869349 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-config-path\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869490 kubelet[1417]: I0712 00:20:31.869370 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-ipsec-secrets\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869490 kubelet[1417]: I0712 00:20:31.869386 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-net\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869490 kubelet[1417]: I0712 00:20:31.869401 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-run\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.869490 kubelet[1417]: I0712 00:20:31.869417 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-bpf-maps\") pod \"cilium-6d86l\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " pod="kube-system/cilium-6d86l" Jul 12 00:20:31.958377 kubelet[1417]: E0712 00:20:31.958333 1417 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-xcnh6 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6d86l" podUID="d47b098e-a481-44ae-b891-97c65b0e5e38" Jul 12 
00:20:32.122083 kubelet[1417]: E0712 00:20:32.122044 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:32.122831 env[1212]: time="2025-07-12T00:20:32.122778408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nmw8n,Uid:857499ce-dd0a-4244-b244-fbf236c6d49a,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:32.135448 env[1212]: time="2025-07-12T00:20:32.135377996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:32.135448 env[1212]: time="2025-07-12T00:20:32.135420397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:32.135585 env[1212]: time="2025-07-12T00:20:32.135451437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:32.135642 env[1212]: time="2025-07-12T00:20:32.135609321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/effbaa27cb9bbc86ea2e86c9bd997a8f72dc178aaa696f028ea0c12666567b7e pid=2987 runtime=io.containerd.runc.v2 Jul 12 00:20:32.145457 systemd[1]: Started cri-containerd-effbaa27cb9bbc86ea2e86c9bd997a8f72dc178aaa696f028ea0c12666567b7e.scope. Jul 12 00:20:32.188673 env[1212]: time="2025-07-12T00:20:32.188630136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-nmw8n,Uid:857499ce-dd0a-4244-b244-fbf236c6d49a,Namespace:kube-system,Attempt:0,} returns sandbox id \"effbaa27cb9bbc86ea2e86c9bd997a8f72dc178aaa696f028ea0c12666567b7e\"" Jul 12 00:20:32.189455 kubelet[1417]: E0712 00:20:32.189431 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:32.190516 env[1212]: time="2025-07-12T00:20:32.190489981Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:20:32.643236 kubelet[1417]: E0712 00:20:32.643196 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:32.977411 kubelet[1417]: I0712 00:20:32.977319 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-run\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.977411 kubelet[1417]: I0712 00:20:32.977368 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcnh6\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-kube-api-access-xcnh6\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.977411 kubelet[1417]: I0712 00:20:32.977408 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-config-path\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.977612 
kubelet[1417]: I0712 00:20:32.977425 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-xtables-lock\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.977612 kubelet[1417]: I0712 00:20:32.977431 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.977612 kubelet[1417]: I0712 00:20:32.977473 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.977827 kubelet[1417]: I0712 00:20:32.977803 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.978941 kubelet[1417]: I0712 00:20:32.978923 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-lib-modules\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979050 kubelet[1417]: I0712 00:20:32.979036 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cni-path\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979153 kubelet[1417]: I0712 00:20:32.979134 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-hubble-tls\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979249 kubelet[1417]: I0712 00:20:32.979236 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-hostproc\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979313 kubelet[1417]: I0712 00:20:32.979302 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-etc-cni-netd\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979387 kubelet[1417]: I0712 00:20:32.979368 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-net\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979468 kubelet[1417]: I0712 00:20:32.979456 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-kernel\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979542 kubelet[1417]: I0712 00:20:32.979530 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-clustermesh-secrets\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979627 kubelet[1417]: I0712 00:20:32.979613 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-cgroup\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979695 kubelet[1417]: I0712 00:20:32.979683 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-bpf-maps\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979768 kubelet[1417]: I0712 00:20:32.979756 1417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-ipsec-secrets\") pod \"d47b098e-a481-44ae-b891-97c65b0e5e38\" (UID: \"d47b098e-a481-44ae-b891-97c65b0e5e38\") " Jul 12 00:20:32.979859 kubelet[1417]: I0712 00:20:32.979846 1417 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-xtables-lock\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:32.979940 kubelet[1417]: I0712 00:20:32.979930 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-run\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:32.980011 kubelet[1417]: I0712 00:20:32.980000 1417 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-lib-modules\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:32.983834 kubelet[1417]: I0712 00:20:32.979072 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cni-path" (OuterVolumeSpecName: "cni-path") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.983834 kubelet[1417]: I0712 00:20:32.979451 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:20:32.983834 kubelet[1417]: I0712 00:20:32.979486 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-hostproc" (OuterVolumeSpecName: "hostproc") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.983834 kubelet[1417]: I0712 00:20:32.979502 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.983834 kubelet[1417]: I0712 00:20:32.979518 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.981540 systemd[1]: var-lib-kubelet-pods-d47b098e\x2da481\x2d44ae\x2db891\x2d97c65b0e5e38-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxcnh6.mount: Deactivated successfully. Jul 12 00:20:32.984287 kubelet[1417]: I0712 00:20:32.979531 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.984287 kubelet[1417]: I0712 00:20:32.981263 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-kube-api-access-xcnh6" (OuterVolumeSpecName: "kube-api-access-xcnh6") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "kube-api-access-xcnh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:20:32.984287 kubelet[1417]: I0712 00:20:32.981295 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.984287 kubelet[1417]: I0712 00:20:32.981367 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:20:32.984287 kubelet[1417]: I0712 00:20:32.983334 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:20:32.983481 systemd[1]: var-lib-kubelet-pods-d47b098e\x2da481\x2d44ae\x2db891\x2d97c65b0e5e38-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:20:32.984452 kubelet[1417]: I0712 00:20:32.983627 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:20:32.984452 kubelet[1417]: I0712 00:20:32.983966 1417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d47b098e-a481-44ae-b891-97c65b0e5e38" (UID: "d47b098e-a481-44ae-b891-97c65b0e5e38"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:20:32.985159 systemd[1]: var-lib-kubelet-pods-d47b098e\x2da481\x2d44ae\x2db891\x2d97c65b0e5e38-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:20:32.985237 systemd[1]: var-lib-kubelet-pods-d47b098e\x2da481\x2d44ae\x2db891\x2d97c65b0e5e38-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 12 00:20:33.081204 kubelet[1417]: I0712 00:20:33.081145 1417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xcnh6\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-kube-api-access-xcnh6\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081204 kubelet[1417]: I0712 00:20:33.081191 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-config-path\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081204 kubelet[1417]: I0712 00:20:33.081203 1417 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-hostproc\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081204 kubelet[1417]: I0712 00:20:33.081215 1417 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-etc-cni-netd\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081223 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-net\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081231 1417 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cni-path\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081239 1417 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d47b098e-a481-44ae-b891-97c65b0e5e38-hubble-tls\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081246 1417 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-bpf-maps\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081253 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-ipsec-secrets\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081261 1417 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-host-proc-sys-kernel\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081269 1417 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d47b098e-a481-44ae-b891-97c65b0e5e38-clustermesh-secrets\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.081416 kubelet[1417]: I0712 00:20:33.081276 1417 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d47b098e-a481-44ae-b891-97c65b0e5e38-cilium-cgroup\") on node \"10.0.0.35\" DevicePath \"\"" Jul 12 00:20:33.643729 kubelet[1417]: E0712 00:20:33.643683 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:33.837306 env[1212]: time="2025-07-12T00:20:33.837259144Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:33.839076 env[1212]: time="2025-07-12T00:20:33.839049146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:33.840309 env[1212]: time="2025-07-12T00:20:33.840280535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 12 00:20:33.841052 env[1212]: time="2025-07-12T00:20:33.841023113Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:20:33.843078 env[1212]: time="2025-07-12T00:20:33.843046440Z" level=info msg="CreateContainer within sandbox \"effbaa27cb9bbc86ea2e86c9bd997a8f72dc178aaa696f028ea0c12666567b7e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:20:33.851580 env[1212]: time="2025-07-12T00:20:33.851537881Z" level=info msg="CreateContainer within sandbox \"effbaa27cb9bbc86ea2e86c9bd997a8f72dc178aaa696f028ea0c12666567b7e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3704a8485e47344d97eff05e72be32f4880b10e34283579f1ad9cf111d76c01c\"" Jul 12 00:20:33.852304 env[1212]: time="2025-07-12T00:20:33.852248618Z" level=info msg="StartContainer for \"3704a8485e47344d97eff05e72be32f4880b10e34283579f1ad9cf111d76c01c\"" Jul 12 00:20:33.865694 systemd[1]: Started cri-containerd-3704a8485e47344d97eff05e72be32f4880b10e34283579f1ad9cf111d76c01c.scope. Jul 12 00:20:33.900987 env[1212]: time="2025-07-12T00:20:33.899114483Z" level=info msg="StartContainer for \"3704a8485e47344d97eff05e72be32f4880b10e34283579f1ad9cf111d76c01c\" returns successfully" Jul 12 00:20:33.927087 kubelet[1417]: E0712 00:20:33.926590 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:33.929430 systemd[1]: Removed slice kubepods-burstable-podd47b098e_a481_44ae_b891_97c65b0e5e38.slice. Jul 12 00:20:33.964290 systemd[1]: Created slice kubepods-burstable-pod80f24ab2_3ad5_401c_8398_4f0a1eeff5e9.slice. Jul 12 00:20:33.976984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713107904.mount: Deactivated successfully. 
Jul 12 00:20:33.983437 kubelet[1417]: I0712 00:20:33.983381 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-nmw8n" podStartSLOduration=1.331636311 podStartE2EDuration="2.983349431s" podCreationTimestamp="2025-07-12 00:20:31 +0000 UTC" firstStartedPulling="2025-07-12 00:20:32.190192854 +0000 UTC m=+56.668682877" lastFinishedPulling="2025-07-12 00:20:33.841905974 +0000 UTC m=+58.320395997" observedRunningTime="2025-07-12 00:20:33.967092727 +0000 UTC m=+58.445582750" watchObservedRunningTime="2025-07-12 00:20:33.983349431 +0000 UTC m=+58.461839454" Jul 12 00:20:33.986112 kubelet[1417]: I0712 00:20:33.986077 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-cilium-run\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.986274 kubelet[1417]: I0712 00:20:33.986259 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-cni-path\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.986385 kubelet[1417]: I0712 00:20:33.986370 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-host-proc-sys-net\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.986473 kubelet[1417]: I0712 00:20:33.986460 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-hubble-tls\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.986562 kubelet[1417]: I0712 00:20:33.986550 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-cilium-ipsec-secrets\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.986718 kubelet[1417]: I0712 00:20:33.986695 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-etc-cni-netd\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.988786 kubelet[1417]: I0712 00:20:33.988766 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-lib-modules\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.988938 kubelet[1417]: I0712 00:20:33.988921 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-cilium-config-path\") pod \"cilium-kfdcm\" (UID: 
\"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989047 kubelet[1417]: I0712 00:20:33.989034 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrg4z\" (UniqueName: \"kubernetes.io/projected/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-kube-api-access-vrg4z\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989151 kubelet[1417]: I0712 00:20:33.989139 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-hostproc\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989268 kubelet[1417]: I0712 00:20:33.989255 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-clustermesh-secrets\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989373 kubelet[1417]: I0712 00:20:33.989360 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-host-proc-sys-kernel\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989467 kubelet[1417]: I0712 00:20:33.989455 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-bpf-maps\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989559 kubelet[1417]: I0712 00:20:33.989547 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-cilium-cgroup\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:33.989670 kubelet[1417]: I0712 00:20:33.989657 1417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80f24ab2-3ad5-401c-8398-4f0a1eeff5e9-xtables-lock\") pod \"cilium-kfdcm\" (UID: \"80f24ab2-3ad5-401c-8398-4f0a1eeff5e9\") " pod="kube-system/cilium-kfdcm" Jul 12 00:20:34.277047 kubelet[1417]: E0712 00:20:34.277012 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:34.277905 env[1212]: time="2025-07-12T00:20:34.277657124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfdcm,Uid:80f24ab2-3ad5-401c-8398-4f0a1eeff5e9,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:34.293382 env[1212]: time="2025-07-12T00:20:34.293299001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:34.293382 env[1212]: time="2025-07-12T00:20:34.293349922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:34.293382 env[1212]: time="2025-07-12T00:20:34.293359922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:34.293925 env[1212]: time="2025-07-12T00:20:34.293853573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c pid=3074 runtime=io.containerd.runc.v2 Jul 12 00:20:34.304461 systemd[1]: Started cri-containerd-a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c.scope. Jul 12 00:20:34.351086 env[1212]: time="2025-07-12T00:20:34.351042239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kfdcm,Uid:80f24ab2-3ad5-401c-8398-4f0a1eeff5e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\"" Jul 12 00:20:34.352457 kubelet[1417]: E0712 00:20:34.351851 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:34.353952 env[1212]: time="2025-07-12T00:20:34.353915985Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:20:34.411953 env[1212]: time="2025-07-12T00:20:34.411862668Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9\"" Jul 12 00:20:34.412442 env[1212]: time="2025-07-12T00:20:34.412414640Z" level=info msg="StartContainer for \"0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9\"" Jul 12 00:20:34.425964 systemd[1]: Started cri-containerd-0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9.scope. Jul 12 00:20:34.464509 env[1212]: time="2025-07-12T00:20:34.464460788Z" level=info msg="StartContainer for \"0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9\" returns successfully" Jul 12 00:20:34.479908 systemd[1]: cri-containerd-0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9.scope: Deactivated successfully. 
Jul 12 00:20:34.499411 env[1212]: time="2025-07-12T00:20:34.499364465Z" level=info msg="shim disconnected" id=0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9 Jul 12 00:20:34.499411 env[1212]: time="2025-07-12T00:20:34.499404746Z" level=warning msg="cleaning up after shim disconnected" id=0e852247db98d11fab76f8a2a61fb6cc2c2917ad91111695572015d578923cc9 namespace=k8s.io Jul 12 00:20:34.499411 env[1212]: time="2025-07-12T00:20:34.499413146Z" level=info msg="cleaning up dead shim" Jul 12 00:20:34.506579 env[1212]: time="2025-07-12T00:20:34.506536309Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3158 runtime=io.containerd.runc.v2\n" Jul 12 00:20:34.644190 kubelet[1417]: E0712 00:20:34.644056 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:34.806634 kubelet[1417]: I0712 00:20:34.806584 1417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47b098e-a481-44ae-b891-97c65b0e5e38" path="/var/lib/kubelet/pods/d47b098e-a481-44ae-b891-97c65b0e5e38/volumes" Jul 12 00:20:34.929269 kubelet[1417]: E0712 00:20:34.928995 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:34.929269 kubelet[1417]: E0712 00:20:34.929103 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:34.930638 env[1212]: time="2025-07-12T00:20:34.930593070Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:20:34.941537 env[1212]: time="2025-07-12T00:20:34.941485079Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f\"" Jul 12 00:20:34.941991 env[1212]: time="2025-07-12T00:20:34.941959850Z" level=info msg="StartContainer for \"4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f\"" Jul 12 00:20:34.955747 systemd[1]: Started cri-containerd-4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f.scope. Jul 12 00:20:35.006278 env[1212]: time="2025-07-12T00:20:35.006225833Z" level=info msg="StartContainer for \"4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f\" returns successfully" Jul 12 00:20:35.011051 systemd[1]: cri-containerd-4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f.scope: Deactivated successfully. Jul 12 00:20:35.025311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f-rootfs.mount: Deactivated successfully. 
Jul 12 00:20:35.030533 env[1212]: time="2025-07-12T00:20:35.030488249Z" level=info msg="shim disconnected" id=4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f Jul 12 00:20:35.030533 env[1212]: time="2025-07-12T00:20:35.030528090Z" level=warning msg="cleaning up after shim disconnected" id=4d11dc083209a40977f5ae83158a39fd288ad43e0ef1872d6b8d25785786e90f namespace=k8s.io Jul 12 00:20:35.030533 env[1212]: time="2025-07-12T00:20:35.030537131Z" level=info msg="cleaning up dead shim" Jul 12 00:20:35.036730 env[1212]: time="2025-07-12T00:20:35.036667986Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3222 runtime=io.containerd.runc.v2\n" Jul 12 00:20:35.644993 kubelet[1417]: E0712 00:20:35.644932 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:35.932240 kubelet[1417]: E0712 00:20:35.932203 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:35.934125 env[1212]: time="2025-07-12T00:20:35.934088710Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:20:35.946349 env[1212]: time="2025-07-12T00:20:35.946312980Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777\"" Jul 12 00:20:35.946941 env[1212]: time="2025-07-12T00:20:35.946916113Z" level=info msg="StartContainer for \"a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777\"" Jul 12 00:20:35.962974 systemd[1]: Started cri-containerd-a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777.scope. Jul 12 00:20:35.998965 systemd[1]: cri-containerd-a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777.scope: Deactivated successfully. Jul 12 00:20:35.999685 env[1212]: time="2025-07-12T00:20:35.999648839Z" level=info msg="StartContainer for \"a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777\" returns successfully" Jul 12 00:20:36.015348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777-rootfs.mount: Deactivated successfully. 
Jul 12 00:20:36.017968 env[1212]: time="2025-07-12T00:20:36.017923032Z" level=info msg="shim disconnected" id=a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777 Jul 12 00:20:36.017968 env[1212]: time="2025-07-12T00:20:36.017962313Z" level=warning msg="cleaning up after shim disconnected" id=a636b1800977409acbd003c4458d02c6c204f674263c5d1e1762dcbb8b661777 namespace=k8s.io Jul 12 00:20:36.018101 env[1212]: time="2025-07-12T00:20:36.017980953Z" level=info msg="cleaning up dead shim" Jul 12 00:20:36.023810 env[1212]: time="2025-07-12T00:20:36.023778557Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3279 runtime=io.containerd.runc.v2\n" Jul 12 00:20:36.605387 kubelet[1417]: E0712 00:20:36.605337 1417 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:36.642443 env[1212]: time="2025-07-12T00:20:36.642402059Z" level=info msg="StopPodSandbox for \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\"" Jul 12 00:20:36.642580 env[1212]: time="2025-07-12T00:20:36.642489101Z" level=info msg="TearDown network for sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" successfully" Jul 12 00:20:36.642580 env[1212]: time="2025-07-12T00:20:36.642521862Z" level=info msg="StopPodSandbox for \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" returns successfully" Jul 12 00:20:36.643078 env[1212]: time="2025-07-12T00:20:36.643050433Z" level=info msg="RemovePodSandbox for \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\"" Jul 12 00:20:36.643208 env[1212]: time="2025-07-12T00:20:36.643170556Z" level=info msg="Forcibly stopping sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\"" Jul 12 00:20:36.643353 env[1212]: time="2025-07-12T00:20:36.643322799Z" level=info msg="TearDown network for sandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" successfully" Jul 12 00:20:36.645750 env[1212]: time="2025-07-12T00:20:36.645723211Z" level=info msg="RemovePodSandbox \"0404cb2cf4b0a30e9dd25483dad4fff05a5458ef0c377118f0a700aa2438362b\" returns successfully" Jul 12 00:20:36.646482 kubelet[1417]: E0712 00:20:36.646455 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:36.761201 kubelet[1417]: E0712 00:20:36.761167 1417 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 00:20:36.936004 kubelet[1417]: E0712 00:20:36.935965 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:36.940427 env[1212]: time="2025-07-12T00:20:36.940025160Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:20:36.948638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1706131773.mount: Deactivated successfully. 
Jul 12 00:20:36.949574 env[1212]: time="2025-07-12T00:20:36.949536924Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0\"" Jul 12 00:20:36.950233 env[1212]: time="2025-07-12T00:20:36.950150337Z" level=info msg="StartContainer for \"cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0\"" Jul 12 00:20:36.964042 systemd[1]: Started cri-containerd-cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0.scope. Jul 12 00:20:36.994214 systemd[1]: cri-containerd-cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0.scope: Deactivated successfully. Jul 12 00:20:36.996293 env[1212]: time="2025-07-12T00:20:36.996233365Z" level=info msg="StartContainer for \"cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0\" returns successfully" Jul 12 00:20:37.009554 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0-rootfs.mount: Deactivated successfully. Jul 12 00:20:37.013682 env[1212]: time="2025-07-12T00:20:37.012639349Z" level=info msg="shim disconnected" id=cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0 Jul 12 00:20:37.013682 env[1212]: time="2025-07-12T00:20:37.012706030Z" level=warning msg="cleaning up after shim disconnected" id=cef55ee688ddd77e3b8895571d406e9392c5fccfda6eeb887af2cd84a7cf97d0 namespace=k8s.io Jul 12 00:20:37.013682 env[1212]: time="2025-07-12T00:20:37.012716551Z" level=info msg="cleaning up dead shim" Jul 12 00:20:37.018847 env[1212]: time="2025-07-12T00:20:37.018796317Z" level=warning msg="cleanup warnings time=\"2025-07-12T00:20:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3336 runtime=io.containerd.runc.v2\n" Jul 12 00:20:37.647439 kubelet[1417]: E0712 00:20:37.647398 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:37.940206 kubelet[1417]: E0712 00:20:37.940177 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:37.942077 env[1212]: time="2025-07-12T00:20:37.942036447Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:20:37.955463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321542202.mount: Deactivated successfully. Jul 12 00:20:37.962138 env[1212]: time="2025-07-12T00:20:37.962097104Z" level=info msg="CreateContainer within sandbox \"a23d4a7d89f6efc284236270072992e25ccb3033d4bf0c38c170c62245a7ba3c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2\"" Jul 12 00:20:37.962712 env[1212]: time="2025-07-12T00:20:37.962675636Z" level=info msg="StartContainer for \"a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2\"" Jul 12 00:20:37.975928 systemd[1]: Started cri-containerd-a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2.scope. 
Jul 12 00:20:37.988220 kubelet[1417]: I0712 00:20:37.988019 1417 setters.go:602] "Node became not ready" node="10.0.0.35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:20:37Z","lastTransitionTime":"2025-07-12T00:20:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 00:20:38.029265 env[1212]: time="2025-07-12T00:20:38.029215164Z" level=info msg="StartContainer for \"a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2\" returns successfully" Jul 12 00:20:38.048254 systemd[1]: run-containerd-runc-k8s.io-a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2-runc.I1wwBW.mount: Deactivated successfully. Jul 12 00:20:38.278996 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 12 00:20:38.648102 kubelet[1417]: E0712 00:20:38.648048 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:38.945275 kubelet[1417]: E0712 00:20:38.945210 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:39.648973 kubelet[1417]: E0712 00:20:39.648937 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:40.279654 kubelet[1417]: E0712 00:20:40.279624 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:40.650338 kubelet[1417]: E0712 00:20:40.650217 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:41.101780 systemd-networkd[1050]: lxc_health: Link UP Jul 12 00:20:41.121359 systemd-networkd[1050]: lxc_health: Gained carrier Jul 12 00:20:41.122031 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 12 00:20:41.651234 kubelet[1417]: E0712 00:20:41.651193 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:42.280583 kubelet[1417]: E0712 00:20:42.280270 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:42.305189 kubelet[1417]: I0712 00:20:42.305130 1417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kfdcm" podStartSLOduration=9.30511327 podStartE2EDuration="9.30511327s" podCreationTimestamp="2025-07-12 00:20:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:38.977215287 +0000 UTC m=+63.455705310" watchObservedRunningTime="2025-07-12 00:20:42.30511327 +0000 UTC m=+66.783603293" Jul 12 00:20:42.422424 systemd[1]: run-containerd-runc-k8s.io-a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2-runc.BvDqDO.mount: Deactivated successfully. 
Jul 12 00:20:42.651896 kubelet[1417]: E0712 00:20:42.651483 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:42.953005 kubelet[1417]: E0712 00:20:42.952982 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:43.129997 systemd-networkd[1050]: lxc_health: Gained IPv6LL Jul 12 00:20:43.653063 kubelet[1417]: E0712 00:20:43.653030 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:43.955052 kubelet[1417]: E0712 00:20:43.955025 1417 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:44.616214 systemd[1]: run-containerd-runc-k8s.io-a69c107d344b36ce3745cc08101c04066d21dbd1f7fe4717a88e1332f4b215b2-runc.LVdvhX.mount: Deactivated successfully. Jul 12 00:20:44.654501 kubelet[1417]: E0712 00:20:44.654441 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:45.655556 kubelet[1417]: E0712 00:20:45.655512 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:46.656887 kubelet[1417]: E0712 00:20:46.656812 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:47.657176 kubelet[1417]: E0712 00:20:47.657122 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:48.657857 kubelet[1417]: E0712 00:20:48.657812 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:49.658830 kubelet[1417]: E0712 00:20:49.658765 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 12 00:20:50.659715 kubelet[1417]: E0712 00:20:50.659668 1417 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"