Sep 9 00:47:01.673078 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:47:01.673099 kernel: Linux version 5.15.191-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Sep 8 23:23:23 -00 2025
Sep 9 00:47:01.673107 kernel: efi: EFI v2.70 by EDK II
Sep 9 00:47:01.673113 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Sep 9 00:47:01.673118 kernel: random: crng init done
Sep 9 00:47:01.673124 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:47:01.673131 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Sep 9 00:47:01.673137 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:47:01.673143 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673149 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673154 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673160 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673165 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673171 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673179 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673185 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673191 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:47:01.673197 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:47:01.673203 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:47:01.673209 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:47:01.673215 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
Sep 9 00:47:01.673221 kernel: Zone ranges:
Sep 9 00:47:01.673227 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:47:01.673234 kernel: DMA32 empty
Sep 9 00:47:01.673240 kernel: Normal empty
Sep 9 00:47:01.673245 kernel: Movable zone start for each node
Sep 9 00:47:01.673251 kernel: Early memory node ranges
Sep 9 00:47:01.673257 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Sep 9 00:47:01.673263 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Sep 9 00:47:01.673269 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Sep 9 00:47:01.673275 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Sep 9 00:47:01.673281 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Sep 9 00:47:01.673287 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Sep 9 00:47:01.673292 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Sep 9 00:47:01.673298 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:47:01.673305 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:47:01.673311 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:47:01.673317 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:47:01.673323 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:47:01.673329 kernel: psci: Trusted OS migration not required
Sep 9 00:47:01.673338 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:47:01.673344 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:47:01.673352 kernel: ACPI: SRAT not present
Sep 9 00:47:01.673358 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880
Sep 9 00:47:01.673365 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096
Sep 9 00:47:01.673371 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:47:01.673377 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:47:01.673384 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:47:01.673390 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:47:01.673396 kernel: CPU features: detected: Spectre-v4
Sep 9 00:47:01.673403 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:47:01.673410 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:47:01.673417 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:47:01.673423 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:47:01.673430 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:47:01.673438 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:47:01.673446 kernel: Policy zone: DMA
Sep 9 00:47:01.673454 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:47:01.673461 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:47:01.673469 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:47:01.673477 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:47:01.673483 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:47:01.673492 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114956K reserved, 0K cma-reserved)
Sep 9 00:47:01.673498 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:47:01.673505 kernel: trace event string verifier disabled
Sep 9 00:47:01.673512 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:47:01.673519 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:47:01.673525 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:47:01.673532 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:47:01.673539 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:47:01.673545 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:47:01.673552 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:47:01.673558 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:47:01.673566 kernel: GICv3: 256 SPIs implemented
Sep 9 00:47:01.673573 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:47:01.673579 kernel: GICv3: Distributor has no Range Selector support
Sep 9 00:47:01.673585 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:47:01.673592 kernel: GICv3: 16 PPIs implemented
Sep 9 00:47:01.673599 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:47:01.673605 kernel: ACPI: SRAT not present
Sep 9 00:47:01.673611 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:47:01.673617 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:47:01.673624 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:47:01.673630 kernel: GICv3: using LPI property table @0x00000000400d0000
Sep 9 00:47:01.673637 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Sep 9 00:47:01.673644 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:47:01.673650 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:47:01.673657 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:47:01.673664 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:47:01.673670 kernel: arm-pv: using stolen time PV
Sep 9 00:47:01.673677 kernel: Console: colour dummy device 80x25
Sep 9 00:47:01.673684 kernel: ACPI: Core revision 20210730
Sep 9 00:47:01.673691 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:47:01.673697 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:47:01.673704 kernel: LSM: Security Framework initializing
Sep 9 00:47:01.673716 kernel: SELinux: Initializing.
Sep 9 00:47:01.673724 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:47:01.673732 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:47:01.673738 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:47:01.673745 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:47:01.673752 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:47:01.673759 kernel: Remapping and enabling EFI services.
Sep 9 00:47:01.673766 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:47:01.673773 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:47:01.673781 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:47:01.673788 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Sep 9 00:47:01.673795 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:47:01.673801 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:47:01.673808 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:47:01.673814 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:47:01.673821 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Sep 9 00:47:01.673828 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:47:01.673834 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:47:01.673840 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:47:01.673848 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:47:01.673864 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Sep 9 00:47:01.673872 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:47:01.673879 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:47:01.673890 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:47:01.673898 kernel: SMP: Total of 4 processors activated.
Sep 9 00:47:01.673905 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:47:01.673912 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:47:01.673919 kernel: CPU features: detected: Common not Private translations
Sep 9 00:47:01.673926 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:47:01.673933 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:47:01.673940 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:47:01.673948 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:47:01.673955 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:47:01.673962 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:47:01.673969 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:47:01.673976 kernel: alternatives: patching kernel code
Sep 9 00:47:01.673983 kernel: devtmpfs: initialized
Sep 9 00:47:01.673990 kernel: KASLR enabled
Sep 9 00:47:01.673998 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:47:01.674018 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:47:01.674026 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:47:01.674032 kernel: SMBIOS 3.0.0 present.
Sep 9 00:47:01.674040 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Sep 9 00:47:01.674047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:47:01.674054 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:47:01.674062 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:47:01.674069 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:47:01.674076 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:47:01.674083 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Sep 9 00:47:01.674090 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:47:01.674097 kernel: cpuidle: using governor menu
Sep 9 00:47:01.674104 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:47:01.674111 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:47:01.674117 kernel: ACPI: bus type PCI registered
Sep 9 00:47:01.674126 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:47:01.674133 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:47:01.674140 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:47:01.674147 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:47:01.674154 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:47:01.674160 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:47:01.674167 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:47:01.674174 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:47:01.674181 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:47:01.674189 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:47:01.674209 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:47:01.674216 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Sep 9 00:47:01.674223 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Sep 9 00:47:01.674230 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Sep 9 00:47:01.674257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:47:01.674271 kernel: ACPI: Interpreter enabled
Sep 9 00:47:01.674278 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:47:01.674286 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:47:01.674294 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:47:01.674301 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:47:01.674308 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:47:01.674426 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:47:01.674491 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:47:01.674551 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:47:01.674611 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:47:01.674679 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:47:01.674690 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:47:01.674697 kernel: PCI host bridge to bus 0000:00
Sep 9 00:47:01.674769 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:47:01.674824 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:47:01.674890 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:47:01.674946 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:47:01.675030 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:47:01.675105 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:47:01.675181 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:47:01.675242 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:47:01.675302 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:47:01.675364 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:47:01.675424 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:47:01.675488 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:47:01.675543 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:47:01.675595 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:47:01.675648 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:47:01.675658 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:47:01.675665 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:47:01.675672 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:47:01.675679 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:47:01.675688 kernel: iommu: Default domain type: Translated
Sep 9 00:47:01.675695 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:47:01.675701 kernel: vgaarb: loaded
Sep 9 00:47:01.675714 kernel: pps_core: LinuxPPS API ver. 1 registered
Sep 9 00:47:01.675722 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Sep 9 00:47:01.675729 kernel: PTP clock support registered
Sep 9 00:47:01.675736 kernel: Registered efivars operations
Sep 9 00:47:01.675743 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:47:01.675749 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:47:01.675758 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:47:01.675765 kernel: pnp: PnP ACPI init
Sep 9 00:47:01.675829 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:47:01.675839 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:47:01.675846 kernel: NET: Registered PF_INET protocol family
Sep 9 00:47:01.675860 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:47:01.675868 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:47:01.675875 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:47:01.675884 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:47:01.675891 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Sep 9 00:47:01.675898 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:47:01.675905 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:47:01.675912 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:47:01.675919 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:47:01.675926 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:47:01.675933 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:47:01.675940 kernel: kvm [1]: HYP mode not available
Sep 9 00:47:01.675948 kernel: Initialise system trusted keyrings
Sep 9 00:47:01.675955 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:47:01.675961 kernel: Key type asymmetric registered
Sep 9 00:47:01.675968 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:47:01.675975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 00:47:01.675982 kernel: io scheduler mq-deadline registered
Sep 9 00:47:01.675989 kernel: io scheduler kyber registered
Sep 9 00:47:01.675996 kernel: io scheduler bfq registered
Sep 9 00:47:01.676010 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:47:01.676019 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:47:01.676026 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:47:01.676096 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:47:01.676108 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:47:01.676115 kernel: thunder_xcv, ver 1.0
Sep 9 00:47:01.676123 kernel: thunder_bgx, ver 1.0
Sep 9 00:47:01.676130 kernel: nicpf, ver 1.0
Sep 9 00:47:01.676137 kernel: nicvf, ver 1.0
Sep 9 00:47:01.676210 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:47:01.676288 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:47:01 UTC (1757378821)
Sep 9 00:47:01.676299 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:47:01.676306 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:47:01.676312 kernel: Segment Routing with IPv6
Sep 9 00:47:01.676319 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:47:01.676327 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:47:01.676334 kernel: Key type dns_resolver registered
Sep 9 00:47:01.676341 kernel: registered taskstats version 1
Sep 9 00:47:01.676349 kernel: Loading compiled-in X.509 certificates
Sep 9 00:47:01.676357 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.191-flatcar: 14b3f28443a1a4b809c7c0337ab8c3dc8fdb5252'
Sep 9 00:47:01.676364 kernel: Key type .fscrypt registered
Sep 9 00:47:01.676370 kernel: Key type fscrypt-provisioning registered
Sep 9 00:47:01.676377 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:47:01.676384 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:47:01.676391 kernel: ima: No architecture policies found
Sep 9 00:47:01.676398 kernel: clk: Disabling unused clocks
Sep 9 00:47:01.676404 kernel: Freeing unused kernel memory: 36416K
Sep 9 00:47:01.676416 kernel: Run /init as init process
Sep 9 00:47:01.676423 kernel: with arguments:
Sep 9 00:47:01.676430 kernel: /init
Sep 9 00:47:01.676437 kernel: with environment:
Sep 9 00:47:01.676443 kernel: HOME=/
Sep 9 00:47:01.676450 kernel: TERM=linux
Sep 9 00:47:01.676457 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:47:01.676466 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:47:01.676476 systemd[1]: Detected virtualization kvm.
Sep 9 00:47:01.676483 systemd[1]: Detected architecture arm64.
Sep 9 00:47:01.676490 systemd[1]: Running in initrd.
Sep 9 00:47:01.676497 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:47:01.676504 systemd[1]: Hostname set to .
Sep 9 00:47:01.676512 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:47:01.676519 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:47:01.676526 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:47:01.676535 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:47:01.676542 systemd[1]: Reached target paths.target.
Sep 9 00:47:01.676549 systemd[1]: Reached target slices.target.
Sep 9 00:47:01.676556 systemd[1]: Reached target swap.target.
Sep 9 00:47:01.676563 systemd[1]: Reached target timers.target.
Sep 9 00:47:01.676571 systemd[1]: Listening on iscsid.socket.
Sep 9 00:47:01.676578 systemd[1]: Listening on iscsiuio.socket.
Sep 9 00:47:01.676586 systemd[1]: Listening on systemd-journald-audit.socket.
Sep 9 00:47:01.676594 systemd[1]: Listening on systemd-journald-dev-log.socket.
Sep 9 00:47:01.676601 systemd[1]: Listening on systemd-journald.socket.
Sep 9 00:47:01.676608 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:47:01.676615 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:47:01.676623 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:47:01.676630 systemd[1]: Reached target sockets.target.
Sep 9 00:47:01.676637 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:47:01.676644 systemd[1]: Finished network-cleanup.service.
Sep 9 00:47:01.676652 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:47:01.676660 systemd[1]: Starting systemd-journald.service...
Sep 9 00:47:01.676667 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:47:01.676674 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:47:01.676681 systemd[1]: Starting systemd-vconsole-setup.service...
Sep 9 00:47:01.676688 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:47:01.676696 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:47:01.676703 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Sep 9 00:47:01.676713 systemd-journald[289]: Journal started
Sep 9 00:47:01.676752 systemd-journald[289]: Runtime Journal (/run/log/journal/97ddbc229fa14a0497907ed60a68d7f7) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:47:01.674248 systemd-modules-load[290]: Inserted module 'overlay'
Sep 9 00:47:01.679031 systemd[1]: Started systemd-journald.service.
Sep 9 00:47:01.679055 kernel: audit: type=1130 audit(1757378821.678:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.679298 systemd[1]: Finished systemd-vconsole-setup.service.
Sep 9 00:47:01.684020 kernel: audit: type=1130 audit(1757378821.681:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.681672 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Sep 9 00:47:01.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.688018 kernel: audit: type=1130 audit(1757378821.684:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.688178 systemd[1]: Starting dracut-cmdline-ask.service...
Sep 9 00:47:01.696713 systemd-resolved[291]: Positive Trust Anchors:
Sep 9 00:47:01.696727 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:47:01.696755 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:47:01.704042 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:47:01.700916 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 9 00:47:01.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.701664 systemd[1]: Started systemd-resolved.service.
Sep 9 00:47:01.709034 kernel: audit: type=1130 audit(1757378821.704:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.709058 kernel: Bridge firewalling registered
Sep 9 00:47:01.707234 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:47:01.707607 systemd-modules-load[290]: Inserted module 'br_netfilter'
Sep 9 00:47:01.712090 systemd[1]: Finished dracut-cmdline-ask.service.
Sep 9 00:47:01.717486 kernel: audit: type=1130 audit(1757378821.712:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.714735 systemd[1]: Starting dracut-cmdline.service...
Sep 9 00:47:01.719039 kernel: SCSI subsystem initialized
Sep 9 00:47:01.723642 dracut-cmdline[308]: dracut-dracut-053
Sep 9 00:47:01.725749 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=32b3b664430ec28e33efa673a32f74eb733fc8145822fbe5ce810188f7f71923
Sep 9 00:47:01.730786 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:47:01.730809 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:47:01.730819 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Sep 9 00:47:01.730816 systemd-modules-load[290]: Inserted module 'dm_multipath'
Sep 9 00:47:01.731552 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:47:01.735072 kernel: audit: type=1130 audit(1757378821.732:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.732872 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:47:01.742093 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:47:01.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.746032 kernel: audit: type=1130 audit(1757378821.742:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.788026 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:47:01.800032 kernel: iscsi: registered transport (tcp)
Sep 9 00:47:01.814027 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:47:01.814042 kernel: QLogic iSCSI HBA Driver
Sep 9 00:47:01.846898 systemd[1]: Finished dracut-cmdline.service.
Sep 9 00:47:01.850028 kernel: audit: type=1130 audit(1757378821.847:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:01.848397 systemd[1]: Starting dracut-pre-udev.service...
Sep 9 00:47:01.890041 kernel: raid6: neonx8 gen() 13743 MB/s
Sep 9 00:47:01.907018 kernel: raid6: neonx8 xor() 10817 MB/s
Sep 9 00:47:01.924018 kernel: raid6: neonx4 gen() 13478 MB/s
Sep 9 00:47:01.941014 kernel: raid6: neonx4 xor() 11172 MB/s
Sep 9 00:47:01.958026 kernel: raid6: neonx2 gen() 12953 MB/s
Sep 9 00:47:01.975018 kernel: raid6: neonx2 xor() 10300 MB/s
Sep 9 00:47:01.992017 kernel: raid6: neonx1 gen() 10505 MB/s
Sep 9 00:47:02.009019 kernel: raid6: neonx1 xor() 8788 MB/s
Sep 9 00:47:02.026022 kernel: raid6: int64x8 gen() 6270 MB/s
Sep 9 00:47:02.043015 kernel: raid6: int64x8 xor() 3544 MB/s
Sep 9 00:47:02.060016 kernel: raid6: int64x4 gen() 7186 MB/s
Sep 9 00:47:02.077022 kernel: raid6: int64x4 xor() 3857 MB/s
Sep 9 00:47:02.094020 kernel: raid6: int64x2 gen() 6140 MB/s
Sep 9 00:47:02.111019 kernel: raid6: int64x2 xor() 3319 MB/s
Sep 9 00:47:02.128030 kernel: raid6: int64x1 gen() 5046 MB/s
Sep 9 00:47:02.145297 kernel: raid6: int64x1 xor() 2646 MB/s
Sep 9 00:47:02.145319 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s
Sep 9 00:47:02.145337 kernel: raid6: .... xor() 10817 MB/s, rmw enabled
Sep 9 00:47:02.145354 kernel: raid6: using neon recovery algorithm
Sep 9 00:47:02.156067 kernel: xor: measuring software checksum speed
Sep 9 00:47:02.156084 kernel: 8regs : 17202 MB/sec
Sep 9 00:47:02.157106 kernel: 32regs : 20712 MB/sec
Sep 9 00:47:02.157134 kernel: arm64_neon : 27804 MB/sec
Sep 9 00:47:02.157152 kernel: xor: using function: arm64_neon (27804 MB/sec)
Sep 9 00:47:02.209032 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Sep 9 00:47:02.219450 systemd[1]: Finished dracut-pre-udev.service.
Sep 9 00:47:02.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:02.221243 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:47:02.226552 kernel: audit: type=1130 audit(1757378822.220:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:02.220000 audit: BPF prog-id=7 op=LOAD Sep 9 00:47:02.220000 audit: BPF prog-id=8 op=LOAD Sep 9 00:47:02.234344 systemd-udevd[492]: Using default interface naming scheme 'v252'. Sep 9 00:47:02.237654 systemd[1]: Started systemd-udevd.service. Sep 9 00:47:02.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:02.240227 systemd[1]: Starting dracut-pre-trigger.service... Sep 9 00:47:02.252514 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Sep 9 00:47:02.279137 systemd[1]: Finished dracut-pre-trigger.service. Sep 9 00:47:02.280592 systemd[1]: Starting systemd-udev-trigger.service... Sep 9 00:47:02.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:02.313889 systemd[1]: Finished systemd-udev-trigger.service. Sep 9 00:47:02.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:02.343945 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:47:02.347986 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:47:02.348000 kernel: GPT:9289727 != 19775487 Sep 9 00:47:02.348020 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:47:02.348036 kernel: GPT:9289727 != 19775487 Sep 9 00:47:02.348044 kernel: GPT: Use GNU Parted to correct GPT errors. 
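The GPT warnings above mean the backup ("alternate") header was found at LBA 9289727 instead of the disk's last sector, which is typical after a disk image is grown without relocating the backup structures. The expected location is simply the last LBA, `total_sectors - 1`:

```python
def expected_alt_header_lba(total_sectors: int) -> int:
    # GPT places the backup header in the disk's last LBA.
    return total_sectors - 1

# From the log: the virtio disk has 19775488 512-byte sectors,
# but the backup header was found at LBA 9289727.
total_sectors = 19775488
found_at = 9289727

expected = expected_alt_header_lba(total_sectors)
print(expected, expected != found_at)  # 19775487 True
```

As the kernel message suggests, `parted` can repair this; `sgdisk -e` is another common way to move the backup GPT structures to the end of the disk (verify the flag against your sgdisk version's man page).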
Sep 9 00:47:02.348053 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:47:02.365997 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Sep 9 00:47:02.369022 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (546) Sep 9 00:47:02.369260 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Sep 9 00:47:02.371778 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Sep 9 00:47:02.372619 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Sep 9 00:47:02.382165 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Sep 9 00:47:02.383640 systemd[1]: Starting disk-uuid.service... Sep 9 00:47:02.389610 disk-uuid[561]: Primary Header is updated. Sep 9 00:47:02.389610 disk-uuid[561]: Secondary Entries is updated. Sep 9 00:47:02.389610 disk-uuid[561]: Secondary Header is updated. Sep 9 00:47:02.394030 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:47:02.397025 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:47:03.400027 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:47:03.400172 disk-uuid[562]: The operation has completed successfully. Sep 9 00:47:03.422754 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:47:03.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.422860 systemd[1]: Finished disk-uuid.service. Sep 9 00:47:03.424317 systemd[1]: Starting verity-setup.service... Sep 9 00:47:03.439032 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 9 00:47:03.457988 systemd[1]: Found device dev-mapper-usr.device. 
Sep 9 00:47:03.460032 systemd[1]: Mounting sysusr-usr.mount... Sep 9 00:47:03.461774 systemd[1]: Finished verity-setup.service. Sep 9 00:47:03.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.504839 systemd[1]: Mounted sysusr-usr.mount. Sep 9 00:47:03.506026 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Sep 9 00:47:03.505683 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Sep 9 00:47:03.506361 systemd[1]: Starting ignition-setup.service... Sep 9 00:47:03.508252 systemd[1]: Starting parse-ip-for-networkd.service... Sep 9 00:47:03.514538 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:47:03.514578 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:47:03.514589 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:47:03.522355 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 00:47:03.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.527948 systemd[1]: Finished ignition-setup.service. Sep 9 00:47:03.529392 systemd[1]: Starting ignition-fetch-offline.service... 
Sep 9 00:47:03.575309 ignition[650]: Ignition 2.14.0 Sep 9 00:47:03.575318 ignition[650]: Stage: fetch-offline Sep 9 00:47:03.575354 ignition[650]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:03.575362 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:03.575487 ignition[650]: parsed url from cmdline: "" Sep 9 00:47:03.575490 ignition[650]: no config URL provided Sep 9 00:47:03.575494 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:47:03.575500 ignition[650]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:47:03.575516 ignition[650]: op(1): [started] loading QEMU firmware config module Sep 9 00:47:03.575520 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:47:03.583778 ignition[650]: op(1): [finished] loading QEMU firmware config module Sep 9 00:47:03.589996 ignition[650]: parsing config with SHA512: 9ce1587a27f34ac5c25072f98798d4fcd6c488ce6a451ed1d6e9da2cebf70143f62fc0c8b7fa27479c931516dc2a0f1c869291700724ebaec3011063dc408053 Sep 9 00:47:03.592906 systemd[1]: Finished parse-ip-for-networkd.service. Sep 9 00:47:03.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.594000 audit: BPF prog-id=9 op=LOAD Sep 9 00:47:03.594871 systemd[1]: Starting systemd-networkd.service... Sep 9 00:47:03.601614 unknown[650]: fetched base config from "system" Sep 9 00:47:03.601629 unknown[650]: fetched user config from "qemu" Sep 9 00:47:03.602101 ignition[650]: fetch-offline: fetch-offline passed Sep 9 00:47:03.603214 systemd[1]: Finished ignition-fetch-offline.service. Sep 9 00:47:03.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:47:03.602169 ignition[650]: Ignition finished successfully Sep 9 00:47:03.613505 systemd-networkd[741]: lo: Link UP Sep 9 00:47:03.613518 systemd-networkd[741]: lo: Gained carrier Sep 9 00:47:03.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.613902 systemd-networkd[741]: Enumeration completed Sep 9 00:47:03.613966 systemd[1]: Started systemd-networkd.service. Sep 9 00:47:03.614108 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:47:03.614969 systemd[1]: Reached target network.target. Sep 9 00:47:03.615061 systemd-networkd[741]: eth0: Link UP Sep 9 00:47:03.615064 systemd-networkd[741]: eth0: Gained carrier Sep 9 00:47:03.615981 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:47:03.616657 systemd[1]: Starting ignition-kargs.service... Sep 9 00:47:03.618326 systemd[1]: Starting iscsiuio.service... Sep 9 00:47:03.625044 ignition[743]: Ignition 2.14.0 Sep 9 00:47:03.625054 ignition[743]: Stage: kargs Sep 9 00:47:03.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.625440 systemd[1]: Started iscsiuio.service. Sep 9 00:47:03.625145 ignition[743]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:03.627041 systemd[1]: Starting iscsid.service... Sep 9 00:47:03.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:47:03.630326 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:47:03.630326 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Sep 9 00:47:03.630326 iscsid[752]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Sep 9 00:47:03.630326 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Sep 9 00:47:03.630326 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Sep 9 00:47:03.630326 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Sep 9 00:47:03.630326 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Sep 9 00:47:03.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.625155 ignition[743]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:03.628620 systemd[1]: Finished ignition-kargs.service. Sep 9 00:47:03.626456 ignition[743]: kargs: kargs passed Sep 9 00:47:03.630511 systemd[1]: Starting ignition-disks.service... Sep 9 00:47:03.626495 ignition[743]: Ignition finished successfully Sep 9 00:47:03.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Sep 9 00:47:03.633048 systemd[1]: Started iscsid.service. Sep 9 00:47:03.637019 ignition[753]: Ignition 2.14.0 Sep 9 00:47:03.635557 systemd[1]: Starting dracut-initqueue.service... Sep 9 00:47:03.637025 ignition[753]: Stage: disks Sep 9 00:47:03.638896 systemd[1]: Finished ignition-disks.service. Sep 9 00:47:03.637112 ignition[753]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:03.640084 systemd[1]: Reached target initrd-root-device.target. Sep 9 00:47:03.637120 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:03.640822 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:47:03.637715 ignition[753]: disks: disks passed Sep 9 00:47:03.642613 systemd[1]: Reached target local-fs-pre.target. Sep 9 00:47:03.637750 ignition[753]: Ignition finished successfully Sep 9 00:47:03.644067 systemd[1]: Reached target local-fs.target. Sep 9 00:47:03.645490 systemd[1]: Reached target sysinit.target. Sep 9 00:47:03.646500 systemd[1]: Reached target basic.target. Sep 9 00:47:03.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.647692 systemd[1]: Finished dracut-initqueue.service. Sep 9 00:47:03.648650 systemd[1]: Reached target remote-fs-pre.target. Sep 9 00:47:03.649658 systemd[1]: Reached target remote-cryptsetup.target. Sep 9 00:47:03.650627 systemd[1]: Reached target remote-fs.target. Sep 9 00:47:03.652200 systemd[1]: Starting dracut-pre-mount.service... Sep 9 00:47:03.659491 systemd[1]: Finished dracut-pre-mount.service. Sep 9 00:47:03.661041 systemd[1]: Starting systemd-fsck-root.service... Sep 9 00:47:03.671322 systemd-fsck[774]: ROOT: clean, 629/553520 files, 56027/553472 blocks Sep 9 00:47:03.675359 systemd[1]: Finished systemd-fsck-root.service. 
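The iscsid warnings above describe the expected contents of `/etc/iscsi/initiatorname.iscsi`: a single `InitiatorName=iqn.yyyy-mm.<reversed-domain>[:identifier]` line, e.g. the log's own `InitiatorName=iqn.2001-04.com.redhat:fc6`. A loose validator for that shape (a deliberate simplification; RFC 3720 defines the full iqn grammar):

```python
import re

# Loose check for the line format iscsid asks for; real-world iqn
# validation is more involved than this sketch.
IQN_RE = re.compile(r"^InitiatorName=iqn\.\d{4}-\d{2}\.[\w.-]+(?::[\w.-]+)?$")

def valid_initiatorname(line: str) -> bool:
    return bool(IQN_RE.match(line.strip()))

print(valid_initiatorname("InitiatorName=iqn.2001-04.com.redhat:fc6"))  # True
print(valid_initiatorname("InitiatorName=foo"))                         # False
```

In this initrd the missing file is harmless: no iSCSI targets are configured, and the boot proceeds on the local virtio disk.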
Sep 9 00:47:03.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.676751 systemd[1]: Mounting sysroot.mount... Sep 9 00:47:03.684876 systemd[1]: Mounted sysroot.mount. Sep 9 00:47:03.685871 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Sep 9 00:47:03.685524 systemd[1]: Reached target initrd-root-fs.target. Sep 9 00:47:03.687754 systemd[1]: Mounting sysroot-usr.mount... Sep 9 00:47:03.688512 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Sep 9 00:47:03.688547 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:47:03.688569 systemd[1]: Reached target ignition-diskful.target. Sep 9 00:47:03.690298 systemd[1]: Mounted sysroot-usr.mount. Sep 9 00:47:03.691492 systemd[1]: Starting initrd-setup-root.service... Sep 9 00:47:03.695566 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:47:03.699973 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:47:03.703769 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:47:03.707648 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:47:03.734380 systemd[1]: Finished initrd-setup-root.service. Sep 9 00:47:03.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.735730 systemd[1]: Starting ignition-mount.service... Sep 9 00:47:03.736941 systemd[1]: Starting sysroot-boot.service... Sep 9 00:47:03.741258 bash[825]: umount: /sysroot/usr/share/oem: not mounted. 
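The DHCPv4 lease systemd-networkd logged earlier (10.0.0.139/16 with gateway 10.0.0.1) can be sanity-checked with the standard `ipaddress` module, for example confirming the gateway lies inside the leased prefix and is therefore directly reachable:

```python
import ipaddress

# Values copied from the lease line in the log.
iface = ipaddress.ip_interface("10.0.0.139/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True: gateway is on-link
```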
Sep 9 00:47:03.749461 ignition[827]: INFO : Ignition 2.14.0 Sep 9 00:47:03.749461 ignition[827]: INFO : Stage: mount Sep 9 00:47:03.751273 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:03.751273 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:03.751273 ignition[827]: INFO : mount: mount passed Sep 9 00:47:03.751273 ignition[827]: INFO : Ignition finished successfully Sep 9 00:47:03.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:03.751603 systemd[1]: Finished sysroot-boot.service. Sep 9 00:47:03.754710 systemd[1]: Finished ignition-mount.service. Sep 9 00:47:03.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:04.468941 systemd[1]: Mounting sysroot-usr-share-oem.mount... Sep 9 00:47:04.475481 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (835) Sep 9 00:47:04.475509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 00:47:04.475519 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:47:04.476454 kernel: BTRFS info (device vda6): has skinny extents Sep 9 00:47:04.479223 systemd[1]: Mounted sysroot-usr-share-oem.mount. Sep 9 00:47:04.480540 systemd[1]: Starting ignition-files.service... 
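Ignition earlier reported `parsing config with SHA512: 9ce1587a…` — a digest over the merged config bytes. With a hypothetical payload (the actual config bytes are not shown in the log), `hashlib` produces the same 128-hex-character form:

```python
import hashlib

# Hypothetical payload: stands in for the merged Ignition config,
# whose real bytes aren't present in the log.
payload = b"example merged ignition config"
digest = hashlib.sha512(payload).hexdigest()

print(len(digest))  # 128 hex characters, same width as the digest in the log
```

Logging the digest lets a later boot (or an operator) confirm that the exact same rendered config was applied.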
Sep 9 00:47:04.493647 ignition[855]: INFO : Ignition 2.14.0 Sep 9 00:47:04.493647 ignition[855]: INFO : Stage: files Sep 9 00:47:04.494967 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:04.494967 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:04.494967 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:47:04.497846 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:47:04.497846 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:47:04.499998 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:47:04.499998 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:47:04.499998 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:47:04.499998 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Sep 9 00:47:04.498640 unknown[855]: wrote ssh authorized keys file for user: core Sep 9 00:47:04.878305 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Sep 9 00:47:04.959195 systemd-networkd[741]: eth0: Gained IPv6LL Sep 9 00:47:05.563952 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Sep 9 00:47:05.563952 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Sep 9 00:47:05.568150 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:47:05.568150 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:47:05.568150 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Sep 9 00:47:05.568150 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:47:05.568150 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:47:05.589002 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:47:05.591134 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:47:05.591134 ignition[855]: INFO : 
files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:47:05.591134 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:47:05.591134 ignition[855]: INFO : files: files passed Sep 9 00:47:05.591134 ignition[855]: INFO : Ignition finished successfully Sep 9 00:47:05.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.595405 systemd[1]: Finished ignition-files.service. Sep 9 00:47:05.596965 systemd[1]: Starting initrd-setup-root-after-ignition.service... Sep 9 00:47:05.598261 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Sep 9 00:47:05.598907 systemd[1]: Starting ignition-quench.service... Sep 9 00:47:05.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.603741 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Sep 9 00:47:05.601335 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:47:05.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Sep 9 00:47:05.606748 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:47:05.601410 systemd[1]: Finished ignition-quench.service. Sep 9 00:47:05.604728 systemd[1]: Finished initrd-setup-root-after-ignition.service. Sep 9 00:47:05.605923 systemd[1]: Reached target ignition-complete.target. Sep 9 00:47:05.607883 systemd[1]: Starting initrd-parse-etc.service... Sep 9 00:47:05.619399 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:47:05.619485 systemd[1]: Finished initrd-parse-etc.service. Sep 9 00:47:05.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.620943 systemd[1]: Reached target initrd-fs.target. Sep 9 00:47:05.622043 systemd[1]: Reached target initrd.target. Sep 9 00:47:05.623257 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Sep 9 00:47:05.623905 systemd[1]: Starting dracut-pre-pivot.service... Sep 9 00:47:05.633563 systemd[1]: Finished dracut-pre-pivot.service. Sep 9 00:47:05.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.634849 systemd[1]: Starting initrd-cleanup.service... Sep 9 00:47:05.642274 systemd[1]: Stopped target nss-lookup.target. Sep 9 00:47:05.642941 systemd[1]: Stopped target remote-cryptsetup.target. Sep 9 00:47:05.644199 systemd[1]: Stopped target timers.target. Sep 9 00:47:05.645451 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Sep 9 00:47:05.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.645550 systemd[1]: Stopped dracut-pre-pivot.service. Sep 9 00:47:05.646714 systemd[1]: Stopped target initrd.target. Sep 9 00:47:05.647937 systemd[1]: Stopped target basic.target. Sep 9 00:47:05.649087 systemd[1]: Stopped target ignition-complete.target. Sep 9 00:47:05.650323 systemd[1]: Stopped target ignition-diskful.target. Sep 9 00:47:05.651531 systemd[1]: Stopped target initrd-root-device.target. Sep 9 00:47:05.652953 systemd[1]: Stopped target remote-fs.target. Sep 9 00:47:05.654199 systemd[1]: Stopped target remote-fs-pre.target. Sep 9 00:47:05.655506 systemd[1]: Stopped target sysinit.target. Sep 9 00:47:05.656629 systemd[1]: Stopped target local-fs.target. Sep 9 00:47:05.657790 systemd[1]: Stopped target local-fs-pre.target. Sep 9 00:47:05.658970 systemd[1]: Stopped target swap.target. Sep 9 00:47:05.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.660111 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:47:05.660213 systemd[1]: Stopped dracut-pre-mount.service. Sep 9 00:47:05.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.661436 systemd[1]: Stopped target cryptsetup.target. Sep 9 00:47:05.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.662484 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Sep 9 00:47:05.662579 systemd[1]: Stopped dracut-initqueue.service. Sep 9 00:47:05.663890 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:47:05.663979 systemd[1]: Stopped ignition-fetch-offline.service. Sep 9 00:47:05.665138 systemd[1]: Stopped target paths.target. Sep 9 00:47:05.666323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:47:05.671042 systemd[1]: Stopped systemd-ask-password-console.path. Sep 9 00:47:05.672024 systemd[1]: Stopped target slices.target. Sep 9 00:47:05.673327 systemd[1]: Stopped target sockets.target. Sep 9 00:47:05.674478 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:47:05.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.674581 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Sep 9 00:47:05.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.675823 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:47:05.675925 systemd[1]: Stopped ignition-files.service. Sep 9 00:47:05.678198 systemd[1]: Stopping ignition-mount.service... Sep 9 00:47:05.681474 iscsid[752]: iscsid shutting down. Sep 9 00:47:05.679721 systemd[1]: Stopping iscsid.service... Sep 9 00:47:05.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.680703 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:47:05.680818 systemd[1]: Stopped kmod-static-nodes.service. 
Sep 9 00:47:05.682753 systemd[1]: Stopping sysroot-boot.service... Sep 9 00:47:05.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.687212 ignition[896]: INFO : Ignition 2.14.0 Sep 9 00:47:05.687212 ignition[896]: INFO : Stage: umount Sep 9 00:47:05.687212 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:47:05.687212 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:47:05.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.683639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:47:05.694785 ignition[896]: INFO : umount: umount passed Sep 9 00:47:05.694785 ignition[896]: INFO : Ignition finished successfully Sep 9 00:47:05.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.683758 systemd[1]: Stopped systemd-udev-trigger.service. 
Sep 9 00:47:05.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.685357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:47:05.685446 systemd[1]: Stopped dracut-pre-trigger.service. Sep 9 00:47:05.687881 systemd[1]: iscsid.service: Deactivated successfully. Sep 9 00:47:05.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.687975 systemd[1]: Stopped iscsid.service. Sep 9 00:47:05.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.689411 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:47:05.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Sep 9 00:47:05.689471 systemd[1]: Closed iscsid.socket. Sep 9 00:47:05.690381 systemd[1]: Stopping iscsiuio.service... Sep 9 00:47:05.693266 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:47:05.693655 systemd[1]: iscsiuio.service: Deactivated successfully. Sep 9 00:47:05.693735 systemd[1]: Stopped iscsiuio.service. Sep 9 00:47:05.695491 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:47:05.695568 systemd[1]: Finished initrd-cleanup.service. Sep 9 00:47:05.696888 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:47:05.696965 systemd[1]: Stopped ignition-mount.service. Sep 9 00:47:05.698695 systemd[1]: Stopped target network.target. 
Sep 9 00:47:05.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.699422 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:47:05.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.699454 systemd[1]: Closed iscsiuio.socket.
Sep 9 00:47:05.700781 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:47:05.700819 systemd[1]: Stopped ignition-disks.service.
Sep 9 00:47:05.702018 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:47:05.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.702052 systemd[1]: Stopped ignition-kargs.service.
Sep 9 00:47:05.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.703416 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:47:05.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.724000 audit: BPF prog-id=6 op=UNLOAD
Sep 9 00:47:05.703456 systemd[1]: Stopped ignition-setup.service.
Sep 9 00:47:05.704728 systemd[1]: Stopping systemd-networkd.service...
Sep 9 00:47:05.705892 systemd[1]: Stopping systemd-resolved.service...
Sep 9 00:47:05.712056 systemd-networkd[741]: eth0: DHCPv6 lease lost
Sep 9 00:47:05.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.729000 audit: BPF prog-id=9 op=UNLOAD
Sep 9 00:47:05.713031 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:47:05.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.713125 systemd[1]: Stopped systemd-networkd.service.
Sep 9 00:47:05.714347 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:47:05.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.714431 systemd[1]: Stopped systemd-resolved.service.
Sep 9 00:47:05.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.715645 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:47:05.715670 systemd[1]: Closed systemd-networkd.socket.
Sep 9 00:47:05.717512 systemd[1]: Stopping network-cleanup.service...
Sep 9 00:47:05.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.718887 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:47:05.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.718938 systemd[1]: Stopped parse-ip-for-networkd.service.
Sep 9 00:47:05.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.720237 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:47:05.720274 systemd[1]: Stopped systemd-sysctl.service.
Sep 9 00:47:05.722422 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:47:05.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.722460 systemd[1]: Stopped systemd-modules-load.service.
Sep 9 00:47:05.723193 systemd[1]: Stopping systemd-udevd.service...
Sep 9 00:47:05.727475 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:47:05.727917 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:47:05.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.727997 systemd[1]: Stopped sysroot-boot.service.
Sep 9 00:47:05.729543 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:47:05.729588 systemd[1]: Stopped initrd-setup-root.service.
Sep 9 00:47:05.731488 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:47:05.731576 systemd[1]: Stopped network-cleanup.service.
Sep 9 00:47:05.732569 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:47:05.732683 systemd[1]: Stopped systemd-udevd.service.
Sep 9 00:47:05.733875 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:47:05.733909 systemd[1]: Closed systemd-udevd-control.socket.
Sep 9 00:47:05.734821 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:47:05.734859 systemd[1]: Closed systemd-udevd-kernel.socket.
Sep 9 00:47:05.736123 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:47:05.736154 systemd[1]: Stopped dracut-pre-udev.service.
Sep 9 00:47:05.737208 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:47:05.737242 systemd[1]: Stopped dracut-cmdline.service.
Sep 9 00:47:05.738245 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:47:05.738278 systemd[1]: Stopped dracut-cmdline-ask.service.
Sep 9 00:47:05.740223 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Sep 9 00:47:05.741592 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:47:05.741639 systemd[1]: Stopped systemd-vconsole-setup.service.
Sep 9 00:47:05.745218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:47:05.745299 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Sep 9 00:47:05.746845 systemd[1]: Reached target initrd-switch-root.target.
Sep 9 00:47:05.748643 systemd[1]: Starting initrd-switch-root.service...
Sep 9 00:47:05.754478 systemd[1]: Switching root.
Sep 9 00:47:05.772164 systemd-journald[289]: Journal stopped
Sep 9 00:47:07.705507 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:47:07.705563 kernel: SELinux: Class mctp_socket not defined in policy.
Sep 9 00:47:07.705577 kernel: SELinux: Class anon_inode not defined in policy.
Sep 9 00:47:07.705587 kernel: SELinux: the above unknown classes and permissions will be allowed
Sep 9 00:47:07.705596 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:47:07.705605 kernel: SELinux: policy capability open_perms=1
Sep 9 00:47:07.705615 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:47:07.705625 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:47:07.705634 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:47:07.705645 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:47:07.705659 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:47:07.705672 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:47:07.705682 systemd[1]: Successfully loaded SELinux policy in 32.897ms.
Sep 9 00:47:07.705698 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.528ms.
Sep 9 00:47:07.705710 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Sep 9 00:47:07.705720 systemd[1]: Detected virtualization kvm.
Sep 9 00:47:07.705730 systemd[1]: Detected architecture arm64.
Sep 9 00:47:07.705740 systemd[1]: Detected first boot.
Sep 9 00:47:07.705752 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:47:07.705763 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Sep 9 00:47:07.705773 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:47:07.705785 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:47:07.705796 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:47:07.705808 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:47:07.705820 kernel: kauditd_printk_skb: 80 callbacks suppressed
Sep 9 00:47:07.705843 kernel: audit: type=1334 audit(1757378827.596:84): prog-id=12 op=LOAD
Sep 9 00:47:07.705853 kernel: audit: type=1334 audit(1757378827.596:85): prog-id=3 op=UNLOAD
Sep 9 00:47:07.705863 kernel: audit: type=1334 audit(1757378827.597:86): prog-id=13 op=LOAD
Sep 9 00:47:07.705872 kernel: audit: type=1334 audit(1757378827.597:87): prog-id=14 op=LOAD
Sep 9 00:47:07.705882 kernel: audit: type=1334 audit(1757378827.597:88): prog-id=4 op=UNLOAD
Sep 9 00:47:07.705892 kernel: audit: type=1334 audit(1757378827.597:89): prog-id=5 op=UNLOAD
Sep 9 00:47:07.705902 kernel: audit: type=1334 audit(1757378827.598:90): prog-id=15 op=LOAD
Sep 9 00:47:07.705912 kernel: audit: type=1334 audit(1757378827.598:91): prog-id=12 op=UNLOAD
Sep 9 00:47:07.705922 kernel: audit: type=1334 audit(1757378827.598:92): prog-id=16 op=LOAD
Sep 9 00:47:07.705933 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:47:07.705944 kernel: audit: type=1334 audit(1757378827.599:93): prog-id=17 op=LOAD
Sep 9 00:47:07.705960 systemd[1]: Stopped initrd-switch-root.service.
Sep 9 00:47:07.705971 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:47:07.705982 systemd[1]: Created slice system-addon\x2dconfig.slice.
Sep 9 00:47:07.705993 systemd[1]: Created slice system-addon\x2drun.slice.
Sep 9 00:47:07.706015 systemd[1]: Created slice system-getty.slice.
Sep 9 00:47:07.706026 systemd[1]: Created slice system-modprobe.slice.
Sep 9 00:47:07.706038 systemd[1]: Created slice system-serial\x2dgetty.slice.
Sep 9 00:47:07.706048 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Sep 9 00:47:07.706059 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Sep 9 00:47:07.706069 systemd[1]: Created slice user.slice.
Sep 9 00:47:07.706083 systemd[1]: Started systemd-ask-password-console.path.
Sep 9 00:47:07.706093 systemd[1]: Started systemd-ask-password-wall.path.
Sep 9 00:47:07.706103 systemd[1]: Set up automount boot.automount.
Sep 9 00:47:07.706113 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Sep 9 00:47:07.706124 systemd[1]: Stopped target initrd-switch-root.target.
Sep 9 00:47:07.706136 systemd[1]: Stopped target initrd-fs.target.
Sep 9 00:47:07.706147 systemd[1]: Stopped target initrd-root-fs.target.
Sep 9 00:47:07.706158 systemd[1]: Reached target integritysetup.target.
Sep 9 00:47:07.706169 systemd[1]: Reached target remote-cryptsetup.target.
Sep 9 00:47:07.706180 systemd[1]: Reached target remote-fs.target.
Sep 9 00:47:07.706191 systemd[1]: Reached target slices.target.
Sep 9 00:47:07.706204 systemd[1]: Reached target swap.target.
Sep 9 00:47:07.706214 systemd[1]: Reached target torcx.target.
Sep 9 00:47:07.706224 systemd[1]: Reached target veritysetup.target.
Sep 9 00:47:07.706234 systemd[1]: Listening on systemd-coredump.socket.
Sep 9 00:47:07.706244 systemd[1]: Listening on systemd-initctl.socket.
Sep 9 00:47:07.706254 systemd[1]: Listening on systemd-networkd.socket.
Sep 9 00:47:07.706265 systemd[1]: Listening on systemd-udevd-control.socket.
Sep 9 00:47:07.706276 systemd[1]: Listening on systemd-udevd-kernel.socket.
Sep 9 00:47:07.706286 systemd[1]: Listening on systemd-userdbd.socket.
Sep 9 00:47:07.706297 systemd[1]: Mounting dev-hugepages.mount...
Sep 9 00:47:07.706307 systemd[1]: Mounting dev-mqueue.mount...
Sep 9 00:47:07.706317 systemd[1]: Mounting media.mount...
Sep 9 00:47:07.707920 systemd[1]: Mounting sys-kernel-debug.mount...
Sep 9 00:47:07.707943 systemd[1]: Mounting sys-kernel-tracing.mount...
Sep 9 00:47:07.707955 systemd[1]: Mounting tmp.mount...
Sep 9 00:47:07.707966 systemd[1]: Starting flatcar-tmpfiles.service...
Sep 9 00:47:07.707977 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:47:07.707993 systemd[1]: Starting kmod-static-nodes.service...
Sep 9 00:47:07.708016 systemd[1]: Starting modprobe@configfs.service...
Sep 9 00:47:07.708030 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:47:07.708040 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:47:07.708051 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:47:07.708062 systemd[1]: Starting modprobe@fuse.service...
Sep 9 00:47:07.708072 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:47:07.708084 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:47:07.708095 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:47:07.708106 systemd[1]: Stopped systemd-fsck-root.service.
Sep 9 00:47:07.708117 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:47:07.708130 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:47:07.708140 systemd[1]: Stopped systemd-journald.service.
Sep 9 00:47:07.708149 kernel: loop: module loaded
Sep 9 00:47:07.708160 kernel: fuse: init (API version 7.34)
Sep 9 00:47:07.708170 systemd[1]: Starting systemd-journald.service...
Sep 9 00:47:07.708180 systemd[1]: Starting systemd-modules-load.service...
Sep 9 00:47:07.708191 systemd[1]: Starting systemd-network-generator.service...
Sep 9 00:47:07.708202 systemd[1]: Starting systemd-remount-fs.service...
Sep 9 00:47:07.708213 systemd[1]: Starting systemd-udev-trigger.service...
Sep 9 00:47:07.708224 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:47:07.708234 systemd[1]: Stopped verity-setup.service.
Sep 9 00:47:07.708245 systemd[1]: Mounted dev-hugepages.mount.
Sep 9 00:47:07.708255 systemd[1]: Mounted dev-mqueue.mount.
Sep 9 00:47:07.708265 systemd[1]: Mounted media.mount.
Sep 9 00:47:07.708275 systemd[1]: Mounted sys-kernel-debug.mount.
Sep 9 00:47:07.708285 systemd[1]: Mounted sys-kernel-tracing.mount.
Sep 9 00:47:07.708295 systemd[1]: Mounted tmp.mount.
Sep 9 00:47:07.708305 systemd[1]: Finished kmod-static-nodes.service.
Sep 9 00:47:07.708318 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:47:07.708330 systemd-journald[999]: Journal started
Sep 9 00:47:07.708377 systemd-journald[999]: Runtime Journal (/run/log/journal/97ddbc229fa14a0497907ed60a68d7f7) is 6.0M, max 48.7M, 42.6M free.
Sep 9 00:47:05.827000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:47:05.903000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:47:05.903000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Sep 9 00:47:05.903000 audit: BPF prog-id=10 op=LOAD
Sep 9 00:47:05.904000 audit: BPF prog-id=10 op=UNLOAD
Sep 9 00:47:05.904000 audit: BPF prog-id=11 op=LOAD
Sep 9 00:47:05.904000 audit: BPF prog-id=11 op=UNLOAD
Sep 9 00:47:05.938000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Sep 9 00:47:05.938000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd89c a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:47:05.938000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 9 00:47:05.939000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Sep 9 00:47:05.939000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd975 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:47:05.939000 audit: CWD cwd="/"
Sep 9 00:47:05.939000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 9 00:47:05.939000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Sep 9 00:47:05.939000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Sep 9 00:47:07.596000 audit: BPF prog-id=12 op=LOAD
Sep 9 00:47:07.596000 audit: BPF prog-id=3 op=UNLOAD
Sep 9 00:47:07.597000 audit: BPF prog-id=13 op=LOAD
Sep 9 00:47:07.597000 audit: BPF prog-id=14 op=LOAD
Sep 9 00:47:07.597000 audit: BPF prog-id=4 op=UNLOAD
Sep 9 00:47:07.597000 audit: BPF prog-id=5 op=UNLOAD
Sep 9 00:47:07.598000 audit: BPF prog-id=15 op=LOAD
Sep 9 00:47:07.598000 audit: BPF prog-id=12 op=UNLOAD
Sep 9 00:47:07.598000 audit: BPF prog-id=16 op=LOAD
Sep 9 00:47:07.599000 audit: BPF prog-id=17 op=LOAD
Sep 9 00:47:07.599000 audit: BPF prog-id=13 op=UNLOAD
Sep 9 00:47:07.599000 audit: BPF prog-id=14 op=UNLOAD
Sep 9 00:47:07.600000 audit: BPF prog-id=18 op=LOAD
Sep 9 00:47:07.600000 audit: BPF prog-id=15 op=UNLOAD
Sep 9 00:47:07.600000 audit: BPF prog-id=19 op=LOAD
Sep 9 00:47:07.601000 audit: BPF prog-id=20 op=LOAD
Sep 9 00:47:07.601000 audit: BPF prog-id=16 op=UNLOAD
Sep 9 00:47:07.709075 systemd[1]: Finished modprobe@configfs.service.
Sep 9 00:47:07.601000 audit: BPF prog-id=17 op=UNLOAD
Sep 9 00:47:07.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.615000 audit: BPF prog-id=18 op=UNLOAD
Sep 9 00:47:07.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.683000 audit: BPF prog-id=21 op=LOAD
Sep 9 00:47:07.683000 audit: BPF prog-id=22 op=LOAD
Sep 9 00:47:07.683000 audit: BPF prog-id=23 op=LOAD
Sep 9 00:47:07.683000 audit: BPF prog-id=19 op=UNLOAD
Sep 9 00:47:07.683000 audit: BPF prog-id=20 op=UNLOAD
Sep 9 00:47:07.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.704000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Sep 9 00:47:07.704000 audit[999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcc61a4f0 a2=4000 a3=1 items=0 ppid=1 pid=999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:47:07.704000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Sep 9 00:47:07.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.936812 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:47:07.594698 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:47:05.937073 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 9 00:47:07.594709 systemd[1]: Unnecessary job was removed for dev-vda6.device.
Sep 9 00:47:05.937091 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 9 00:47:07.601795 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:47:07.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:05.937119 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Sep 9 00:47:05.937130 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="skipped missing lower profile" missing profile=oem
Sep 9 00:47:05.937158 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Sep 9 00:47:05.937170 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Sep 9 00:47:05.937355 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Sep 9 00:47:05.937387 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Sep 9 00:47:07.710374 systemd[1]: Started systemd-journald.service.
Sep 9 00:47:05.937399 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Sep 9 00:47:05.938072 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Sep 9 00:47:05.938104 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Sep 9 00:47:05.938122 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8
Sep 9 00:47:05.938136 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Sep 9 00:47:05.938152 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8
Sep 9 00:47:05.938165 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Sep 9 00:47:07.362402 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:47:07.362656 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:47:07.362750 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:47:07.362922 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Sep 9 00:47:07.362974 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Sep 9 00:47:07.363050 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-09-09T00:47:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Sep 9 00:47:07.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.711848 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:47:07.712027 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:47:07.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.712966 systemd[1]: Finished flatcar-tmpfiles.service.
Sep 9 00:47:07.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.713949 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:47:07.714110 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:47:07.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.714934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:47:07.715194 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:47:07.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.716088 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:47:07.716227 systemd[1]: Finished modprobe@fuse.service.
Sep 9 00:47:07.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.717155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:47:07.717295 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:47:07.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.718248 systemd[1]: Finished systemd-modules-load.service.
Sep 9 00:47:07.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.719114 systemd[1]: Finished systemd-network-generator.service.
Sep 9 00:47:07.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.720053 systemd[1]: Finished systemd-remount-fs.service.
Sep 9 00:47:07.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.721103 systemd[1]: Reached target network-pre.target.
Sep 9 00:47:07.722876 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Sep 9 00:47:07.724504 systemd[1]: Mounting sys-kernel-config.mount...
Sep 9 00:47:07.725187 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:47:07.726646 systemd[1]: Starting systemd-hwdb-update.service...
Sep 9 00:47:07.728587 systemd[1]: Starting systemd-journal-flush.service...
Sep 9 00:47:07.729418 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:47:07.730422 systemd[1]: Starting systemd-random-seed.service...
Sep 9 00:47:07.731142 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:47:07.732223 systemd[1]: Starting systemd-sysctl.service...
Sep 9 00:47:07.734894 systemd-journald[999]: Time spent on flushing to /var/log/journal/97ddbc229fa14a0497907ed60a68d7f7 is 12.534ms for 983 entries.
Sep 9 00:47:07.734894 systemd-journald[999]: System Journal (/var/log/journal/97ddbc229fa14a0497907ed60a68d7f7) is 8.0M, max 195.6M, 187.6M free.
Sep 9 00:47:07.758560 systemd-journald[999]: Received client request to flush runtime journal.
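The journald flush report above gives a total time and an entry count; a quick sketch of the per-entry cost implied by those two figures (nothing here beyond the numbers the log prints):

```python
# Figures from the systemd-journald flush report above.
flush_ms = 12.534   # total time spent flushing
entries = 983       # entries flushed

per_entry_us = flush_ms / entries * 1000
print(f"{per_entry_us:.1f} us/entry")  # 12.8 us/entry
```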
Sep 9 00:47:07.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.733973 systemd[1]: Starting systemd-sysusers.service...
Sep 9 00:47:07.737303 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Sep 9 00:47:07.738180 systemd[1]: Mounted sys-kernel-config.mount.
Sep 9 00:47:07.759803 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 9 00:47:07.738988 systemd[1]: Finished systemd-random-seed.service.
Sep 9 00:47:07.740103 systemd[1]: Finished systemd-udev-trigger.service.
Sep 9 00:47:07.741083 systemd[1]: Reached target first-boot-complete.target.
Sep 9 00:47:07.742847 systemd[1]: Starting systemd-udev-settle.service...
Sep 9 00:47:07.751173 systemd[1]: Finished systemd-sysusers.service.
Sep 9 00:47:07.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:07.753104 systemd[1]: Finished systemd-sysctl.service.
Sep 9 00:47:07.759485 systemd[1]: Finished systemd-journal-flush.service.
Sep 9 00:47:08.097948 systemd[1]: Finished systemd-hwdb-update.service.
Sep 9 00:47:08.100051 systemd[1]: Starting systemd-udevd.service...
Sep 9 00:47:08.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.099000 audit: BPF prog-id=24 op=LOAD
Sep 9 00:47:08.099000 audit: BPF prog-id=25 op=LOAD
Sep 9 00:47:08.099000 audit: BPF prog-id=7 op=UNLOAD
Sep 9 00:47:08.099000 audit: BPF prog-id=8 op=UNLOAD
Sep 9 00:47:08.115322 systemd-udevd[1033]: Using default interface naming scheme 'v252'.
Sep 9 00:47:08.127279 systemd[1]: Started systemd-udevd.service.
Sep 9 00:47:08.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.128000 audit: BPF prog-id=26 op=LOAD
Sep 9 00:47:08.129331 systemd[1]: Starting systemd-networkd.service...
Sep 9 00:47:08.134000 audit: BPF prog-id=27 op=LOAD
Sep 9 00:47:08.135000 audit: BPF prog-id=28 op=LOAD
Sep 9 00:47:08.135000 audit: BPF prog-id=29 op=LOAD
Sep 9 00:47:08.135724 systemd[1]: Starting systemd-userdbd.service...
Sep 9 00:47:08.150631 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Sep 9 00:47:08.167103 systemd[1]: Started systemd-userdbd.service.
Sep 9 00:47:08.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.201209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Sep 9 00:47:08.213407 systemd-networkd[1041]: lo: Link UP
Sep 9 00:47:08.213419 systemd-networkd[1041]: lo: Gained carrier
Sep 9 00:47:08.213764 systemd-networkd[1041]: Enumeration completed
Sep 9 00:47:08.213861 systemd[1]: Started systemd-networkd.service.
Sep 9 00:47:08.213876 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:47:08.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.215422 systemd-networkd[1041]: eth0: Link UP
Sep 9 00:47:08.215431 systemd-networkd[1041]: eth0: Gained carrier
Sep 9 00:47:08.217343 systemd[1]: Finished systemd-udev-settle.service.
Sep 9 00:47:08.219104 systemd[1]: Starting lvm2-activation-early.service...
Sep 9 00:47:08.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.226779 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:47:08.238123 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:47:08.267751 systemd[1]: Finished lvm2-activation-early.service.
Sep 9 00:47:08.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.268640 systemd[1]: Reached target cryptsetup.target.
Sep 9 00:47:08.270399 systemd[1]: Starting lvm2-activation.service...
Sep 9 00:47:08.274113 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 9 00:47:08.304777 systemd[1]: Finished lvm2-activation.service.
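The DHCPv4 acquisition line above packs the interface, address, prefix, gateway, and DHCP server into one message. A small sketch of extracting those fields and deriving the lease's network; the line is quoted verbatim from the log, and the regex is an assumption about its fixed wording:

```python
import ipaddress
import re

# The DHCPv4 lease line from systemd-networkd, as logged above.
line = ("systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.139/16, "
        "gateway 10.0.0.1 acquired from 10.0.0.1")

m = re.search(r"(\w+): DHCPv4 address ([\d.]+)/(\d+), gateway ([\d.]+) "
              r"acquired from ([\d.]+)", line)
iface, addr, prefix, gateway, server = m.groups()

# The network this lease places the host in.
network = ipaddress.ip_interface(f"{addr}/{prefix}").network
print(iface, network)  # eth0 10.0.0.0/16
```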
Sep 9 00:47:08.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.305580 systemd[1]: Reached target local-fs-pre.target.
Sep 9 00:47:08.306239 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:47:08.306262 systemd[1]: Reached target local-fs.target.
Sep 9 00:47:08.306819 systemd[1]: Reached target machines.target.
Sep 9 00:47:08.308537 systemd[1]: Starting ldconfig.service...
Sep 9 00:47:08.309423 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.309472 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.310609 systemd[1]: Starting systemd-boot-update.service...
Sep 9 00:47:08.312431 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Sep 9 00:47:08.314493 systemd[1]: Starting systemd-machine-id-commit.service...
Sep 9 00:47:08.316951 systemd[1]: Starting systemd-sysext.service...
Sep 9 00:47:08.317973 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl)
Sep 9 00:47:08.319134 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Sep 9 00:47:08.326502 systemd[1]: Unmounting usr-share-oem.mount...
Sep 9 00:47:08.328304 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Sep 9 00:47:08.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.332848 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Sep 9 00:47:08.333119 systemd[1]: Unmounted usr-share-oem.mount.
Sep 9 00:47:08.346037 kernel: loop0: detected capacity change from 0 to 207008
Sep 9 00:47:08.399304 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:47:08.400230 systemd[1]: Finished systemd-machine-id-commit.service.
Sep 9 00:47:08.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.407027 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:47:08.423485 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31)
Sep 9 00:47:08.423485 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters
Sep 9 00:47:08.425795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Sep 9 00:47:08.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.428487 systemd[1]: Mounting boot.mount...
Sep 9 00:47:08.429126 kernel: loop1: detected capacity change from 0 to 207008
Sep 9 00:47:08.434590 (sd-sysext)[1083]: Using extensions 'kubernetes'.
Sep 9 00:47:08.435080 (sd-sysext)[1083]: Merged extensions into '/usr'.
Sep 9 00:47:08.447940 systemd[1]: Mounted boot.mount.
Sep 9 00:47:08.452218 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.453464 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:47:08.455326 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:47:08.457209 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:47:08.457925 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.458068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.458857 systemd[1]: Finished systemd-boot-update.service.
Sep 9 00:47:08.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.460147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:47:08.460261 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:47:08.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.461361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:47:08.461463 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:47:08.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.462679 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:47:08.462781 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:47:08.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.464023 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:47:08.464128 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.514044 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:47:08.517749 systemd[1]: Finished ldconfig.service.
Sep 9 00:47:08.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.698814 systemd[1]: Mounting usr-share-oem.mount...
Sep 9 00:47:08.703686 systemd[1]: Mounted usr-share-oem.mount.
Sep 9 00:47:08.705337 systemd[1]: Finished systemd-sysext.service.
Sep 9 00:47:08.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.707118 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:47:08.708600 systemd[1]: Starting systemd-tmpfiles-setup.service...
Sep 9 00:47:08.713055 systemd[1]: Reloading.
Sep 9 00:47:08.720253 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Sep 9 00:47:08.722276 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:47:08.725437 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:47:08.748129 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:47:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Sep 9 00:47:08.748159 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-09-09T00:47:08Z" level=info msg="torcx already run"
Sep 9 00:47:08.802621 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Sep 9 00:47:08.802643 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Sep 9 00:47:08.817594 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:47:08.861000 audit: BPF prog-id=30 op=LOAD
Sep 9 00:47:08.861000 audit: BPF prog-id=31 op=LOAD
Sep 9 00:47:08.861000 audit: BPF prog-id=24 op=UNLOAD
Sep 9 00:47:08.861000 audit: BPF prog-id=25 op=UNLOAD
Sep 9 00:47:08.861000 audit: BPF prog-id=32 op=LOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=27 op=UNLOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=33 op=LOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=34 op=LOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=28 op=UNLOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=29 op=UNLOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=35 op=LOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=21 op=UNLOAD
Sep 9 00:47:08.862000 audit: BPF prog-id=36 op=LOAD
Sep 9 00:47:08.863000 audit: BPF prog-id=37 op=LOAD
Sep 9 00:47:08.863000 audit: BPF prog-id=22 op=UNLOAD
Sep 9 00:47:08.863000 audit: BPF prog-id=23 op=UNLOAD
Sep 9 00:47:08.863000 audit: BPF prog-id=38 op=LOAD
Sep 9 00:47:08.863000 audit: BPF prog-id=26 op=UNLOAD
Sep 9 00:47:08.865383 systemd[1]: Finished systemd-tmpfiles-setup.service.
Sep 9 00:47:08.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.869428 systemd[1]: Starting audit-rules.service...
Sep 9 00:47:08.871211 systemd[1]: Starting clean-ca-certificates.service...
Sep 9 00:47:08.873080 systemd[1]: Starting systemd-journal-catalog-update.service...
Sep 9 00:47:08.874000 audit: BPF prog-id=39 op=LOAD
Sep 9 00:47:08.875292 systemd[1]: Starting systemd-resolved.service...
Sep 9 00:47:08.876000 audit: BPF prog-id=40 op=LOAD
Sep 9 00:47:08.877404 systemd[1]: Starting systemd-timesyncd.service...
Sep 9 00:47:08.879070 systemd[1]: Starting systemd-update-utmp.service...
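The burst of BPF audit events above is a daemon-reload swap: new programs are loaded before their predecessors are unloaded. A small sketch that replays the `prog-id`/`op` pairs transcribed from those events and shows which program ids remain installed afterwards:

```python
import re

# BPF audit events transcribed from the reload sequence above.
events = """\
prog-id=30 op=LOAD
prog-id=31 op=LOAD
prog-id=24 op=UNLOAD
prog-id=25 op=UNLOAD
prog-id=32 op=LOAD
prog-id=27 op=UNLOAD
prog-id=33 op=LOAD
prog-id=34 op=LOAD
prog-id=28 op=UNLOAD
prog-id=29 op=UNLOAD
prog-id=35 op=LOAD
prog-id=21 op=UNLOAD
prog-id=36 op=LOAD
prog-id=37 op=LOAD
prog-id=22 op=UNLOAD
prog-id=23 op=UNLOAD
prog-id=38 op=LOAD
prog-id=26 op=UNLOAD
"""

loaded = set()
for prog_id, op in re.findall(r"prog-id=(\d+) op=(LOAD|UNLOAD)", events):
    if op == "LOAD":
        loaded.add(int(prog_id))
    else:
        loaded.discard(int(prog_id))

# Ids 21-29 were all replaced by 30-38 during the reload.
print(sorted(loaded))  # [30, 31, 32, 33, 34, 35, 36, 37, 38]
```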
Sep 9 00:47:08.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.880329 systemd[1]: Finished clean-ca-certificates.service.
Sep 9 00:47:08.883000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.882910 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:47:08.885949 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.887326 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:47:08.890168 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:47:08.892862 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:47:08.893583 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.893759 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.893924 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:47:08.895223 systemd[1]: Finished systemd-journal-catalog-update.service.
Sep 9 00:47:08.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.896534 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:47:08.896643 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:47:08.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.897760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:47:08.897877 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:47:08.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.899157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:47:08.899262 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:47:08.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.900336 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:47:08.900476 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.901818 systemd[1]: Starting systemd-update-done.service...
Sep 9 00:47:08.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.903075 systemd[1]: Finished systemd-update-utmp.service.
Sep 9 00:47:08.905873 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.906984 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:47:08.908678 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:47:08.910441 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:47:08.911220 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.911349 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.911446 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:47:08.912277 systemd[1]: Finished systemd-update-done.service.
Sep 9 00:47:08.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.913307 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:47:08.913417 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:47:08.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.914613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:47:08.914723 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:47:08.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Sep 9 00:47:08.915917 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:47:08.916040 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:47:08.916000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Sep 9 00:47:08.916000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1fe4820 a2=420 a3=0 items=0 ppid=1150 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Sep 9 00:47:08.916000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Sep 9 00:47:08.916413 augenrules[1177]: No rules
Sep 9 00:47:08.917124 systemd[1]: Finished audit-rules.service.
Sep 9 00:47:08.920099 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.921390 systemd[1]: Starting modprobe@dm_mod.service...
Sep 9 00:47:08.923158 systemd[1]: Starting modprobe@drm.service...
Sep 9 00:47:08.924770 systemd[1]: Starting modprobe@efi_pstore.service...
Sep 9 00:47:08.926606 systemd[1]: Starting modprobe@loop.service...
Sep 9 00:47:08.927330 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.927449 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.928617 systemd[1]: Starting systemd-networkd-wait-online.service...
Sep 9 00:47:08.929478 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:47:08.930443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:47:08.930559 systemd[1]: Finished modprobe@dm_mod.service.
Sep 9 00:47:08.931170 systemd-timesyncd[1158]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:47:08.931223 systemd-timesyncd[1158]: Initial clock synchronization to Tue 2025-09-09 00:47:09.043667 UTC.
Sep 9 00:47:08.931630 systemd[1]: Started systemd-timesyncd.service.
Sep 9 00:47:08.932837 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:47:08.932952 systemd[1]: Finished modprobe@drm.service.
Sep 9 00:47:08.933956 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:47:08.934071 systemd[1]: Finished modprobe@efi_pstore.service.
Sep 9 00:47:08.935114 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:47:08.935221 systemd[1]: Finished modprobe@loop.service.
Sep 9 00:47:08.936651 systemd[1]: Reached target time-set.target.
Sep 9 00:47:08.937363 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:47:08.937401 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.937672 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:47:08.941878 systemd-resolved[1154]: Positive Trust Anchors:
Sep 9 00:47:08.942141 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:47:08.942221 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Sep 9 00:47:08.950656 systemd-resolved[1154]: Defaulting to hostname 'linux'.
Sep 9 00:47:08.952175 systemd[1]: Started systemd-resolved.service.
Sep 9 00:47:08.952843 systemd[1]: Reached target network.target.
Sep 9 00:47:08.953527 systemd[1]: Reached target nss-lookup.target.
Sep 9 00:47:08.954131 systemd[1]: Reached target sysinit.target.
Sep 9 00:47:08.954768 systemd[1]: Started motdgen.path.
Sep 9 00:47:08.955355 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Sep 9 00:47:08.956339 systemd[1]: Started logrotate.timer.
Sep 9 00:47:08.956999 systemd[1]: Started mdadm.timer.
Sep 9 00:47:08.957527 systemd[1]: Started systemd-tmpfiles-clean.timer.
Sep 9 00:47:08.958157 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:47:08.958183 systemd[1]: Reached target paths.target.
Sep 9 00:47:08.958712 systemd[1]: Reached target timers.target.
Sep 9 00:47:08.959583 systemd[1]: Listening on dbus.socket.
Sep 9 00:47:08.961150 systemd[1]: Starting docker.socket...
Sep 9 00:47:08.964112 systemd[1]: Listening on sshd.socket.
Sep 9 00:47:08.964769 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.965260 systemd[1]: Listening on docker.socket.
Sep 9 00:47:08.965907 systemd[1]: Reached target sockets.target.
Sep 9 00:47:08.966544 systemd[1]: Reached target basic.target.
Sep 9 00:47:08.967128 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.967155 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Sep 9 00:47:08.968101 systemd[1]: Starting containerd.service...
Sep 9 00:47:08.969594 systemd[1]: Starting dbus.service...
Sep 9 00:47:08.971138 systemd[1]: Starting enable-oem-cloudinit.service...
Sep 9 00:47:08.972835 systemd[1]: Starting extend-filesystems.service...
Sep 9 00:47:08.973620 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Sep 9 00:47:08.974658 systemd[1]: Starting motdgen.service...
Sep 9 00:47:08.977298 jq[1192]: false
Sep 9 00:47:08.978151 systemd[1]: Starting ssh-key-proc-cmdline.service...
Sep 9 00:47:08.979955 systemd[1]: Starting sshd-keygen.service...
Sep 9 00:47:08.982665 systemd[1]: Starting systemd-logind.service...
Sep 9 00:47:08.983513 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Sep 9 00:47:08.983590 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:47:08.984024 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:47:08.984638 systemd[1]: Starting update-engine.service... Sep 9 00:47:08.986438 systemd[1]: Starting update-ssh-keys-after-ignition.service... Sep 9 00:47:08.988925 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:47:08.990664 jq[1206]: true Sep 9 00:47:08.989313 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Sep 9 00:47:08.989618 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:47:08.990568 systemd[1]: Finished ssh-key-proc-cmdline.service. Sep 9 00:47:08.999935 dbus-daemon[1191]: [system] SELinux support is enabled Sep 9 00:47:09.001497 jq[1213]: true Sep 9 00:47:09.004120 systemd[1]: Started dbus.service. Sep 9 00:47:09.007499 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:47:09.007536 systemd[1]: Reached target system-config.target. Sep 9 00:47:09.008265 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:47:09.008288 systemd[1]: Reached target user-config.target. Sep 9 00:47:09.010073 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:47:09.010231 systemd[1]: Finished motdgen.service. 
Sep 9 00:47:09.017844 extend-filesystems[1193]: Found loop1 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda1 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda2 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda3 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found usr Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda4 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda6 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda7 Sep 9 00:47:09.017844 extend-filesystems[1193]: Found vda9 Sep 9 00:47:09.017844 extend-filesystems[1193]: Checking size of /dev/vda9 Sep 9 00:47:09.021149 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:47:09.021757 systemd-logind[1201]: New seat seat0. Sep 9 00:47:09.030139 systemd[1]: Started systemd-logind.service. Sep 9 00:47:09.036137 extend-filesystems[1193]: Resized partition /dev/vda9 Sep 9 00:47:09.038077 extend-filesystems[1239]: resize2fs 1.46.5 (30-Dec-2021) Sep 9 00:47:09.043608 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:47:09.045897 update_engine[1203]: I0909 00:47:09.045600 1203 main.cc:92] Flatcar Update Engine starting Sep 9 00:47:09.056627 update_engine[1203]: I0909 00:47:09.052726 1203 update_check_scheduler.cc:74] Next update check in 10m40s Sep 9 00:47:09.052696 systemd[1]: Started update-engine.service. Sep 9 00:47:09.056311 systemd[1]: Started locksmithd.service. Sep 9 00:47:09.066063 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:47:09.066338 bash[1238]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:47:09.067193 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Sep 9 00:47:09.079584 extend-filesystems[1239]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:47:09.079584 extend-filesystems[1239]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:47:09.079584 extend-filesystems[1239]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:47:09.084281 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Sep 9 00:47:09.081541 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:47:09.085348 env[1212]: time="2025-09-09T00:47:09.080113919Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Sep 9 00:47:09.081692 systemd[1]: Finished extend-filesystems.service. Sep 9 00:47:09.096595 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:47:09.102480 env[1212]: time="2025-09-09T00:47:09.102437955Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:47:09.102702 env[1212]: time="2025-09-09T00:47:09.102681697Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104177 env[1212]: time="2025-09-09T00:47:09.104145325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.191-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104264 env[1212]: time="2025-09-09T00:47:09.104248517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104534 env[1212]: time="2025-09-09T00:47:09.104508687Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104619 env[1212]: time="2025-09-09T00:47:09.104603442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104683 env[1212]: time="2025-09-09T00:47:09.104667410Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 9 00:47:09.104736 env[1212]: time="2025-09-09T00:47:09.104723670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.104864 env[1212]: time="2025-09-09T00:47:09.104847549Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.105202 env[1212]: time="2025-09-09T00:47:09.105176514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:47:09.105414 env[1212]: time="2025-09-09T00:47:09.105390848Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:47:09.105499 env[1212]: time="2025-09-09T00:47:09.105484061Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 9 00:47:09.105614 env[1212]: time="2025-09-09T00:47:09.105594757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 9 00:47:09.105676 env[1212]: time="2025-09-09T00:47:09.105662335Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:47:09.108816 env[1212]: time="2025-09-09T00:47:09.108791513Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:47:09.108916 env[1212]: time="2025-09-09T00:47:09.108900019Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:47:09.108977 env[1212]: time="2025-09-09T00:47:09.108962729Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:47:09.109086 env[1212]: time="2025-09-09T00:47:09.109070099Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109149 env[1212]: time="2025-09-09T00:47:09.109135040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109219 env[1212]: time="2025-09-09T00:47:09.109205822Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109276 env[1212]: time="2025-09-09T00:47:09.109263786Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109696 env[1212]: time="2025-09-09T00:47:09.109666251Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109789 env[1212]: time="2025-09-09T00:47:09.109773702Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Sep 9 00:47:09.109850 env[1212]: time="2025-09-09T00:47:09.109836128Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109919 env[1212]: time="2025-09-09T00:47:09.109905815Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.109975 env[1212]: time="2025-09-09T00:47:09.109962481Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:47:09.110162 env[1212]: time="2025-09-09T00:47:09.110144324Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:47:09.110319 env[1212]: time="2025-09-09T00:47:09.110299355Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:47:09.110627 env[1212]: time="2025-09-09T00:47:09.110605645Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:47:09.110714 env[1212]: time="2025-09-09T00:47:09.110699629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.110778 env[1212]: time="2025-09-09T00:47:09.110764246Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:47:09.110954 env[1212]: time="2025-09-09T00:47:09.110937246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111036 env[1212]: time="2025-09-09T00:47:09.111007339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111097 env[1212]: time="2025-09-09T00:47:09.111083759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Sep 9 00:47:09.111152 env[1212]: time="2025-09-09T00:47:09.111139655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111207 env[1212]: time="2025-09-09T00:47:09.111193928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111261 env[1212]: time="2025-09-09T00:47:09.111249134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111341 env[1212]: time="2025-09-09T00:47:09.111326568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111420 env[1212]: time="2025-09-09T00:47:09.111404976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111490 env[1212]: time="2025-09-09T00:47:09.111476732Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:47:09.111658 env[1212]: time="2025-09-09T00:47:09.111639916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111742 env[1212]: time="2025-09-09T00:47:09.111724976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111803 env[1212]: time="2025-09-09T00:47:09.111790445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.111863 env[1212]: time="2025-09-09T00:47:09.111849626Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:47:09.111922 env[1212]: time="2025-09-09T00:47:09.111906738Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Sep 9 00:47:09.111972 env[1212]: time="2025-09-09T00:47:09.111959835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:47:09.112050 env[1212]: time="2025-09-09T00:47:09.112035647Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Sep 9 00:47:09.112144 env[1212]: time="2025-09-09T00:47:09.112129144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:47:09.112433 env[1212]: time="2025-09-09T00:47:09.112365098Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:47:09.113042 env[1212]: time="2025-09-09T00:47:09.112767360Z" level=info msg="Connect containerd service" Sep 9 00:47:09.113042 env[1212]: time="2025-09-09T00:47:09.112827393Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:47:09.113617 env[1212]: time="2025-09-09T00:47:09.113588149Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:47:09.113890 env[1212]: time="2025-09-09T00:47:09.113816761Z" level=info msg="Start subscribing containerd event" Sep 9 00:47:09.113890 env[1212]: time="2025-09-09T00:47:09.113875415Z" level=info msg="Start recovering state" Sep 9 00:47:09.113951 env[1212]: time="2025-09-09T00:47:09.113937151Z" level=info msg="Start event monitor" Sep 9 00:47:09.113972 env[1212]: time="2025-09-09T00:47:09.113955242Z" level=info msg="Start snapshots syncer" Sep 9 00:47:09.113972 env[1212]: time="2025-09-09T00:47:09.113964734Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:47:09.114010 env[1212]: 
time="2025-09-09T00:47:09.113972157Z" level=info msg="Start streaming server" Sep 9 00:47:09.114174 env[1212]: time="2025-09-09T00:47:09.114153230Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:47:09.114228 env[1212]: time="2025-09-09T00:47:09.114196591Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:47:09.114306 systemd[1]: Started containerd.service. Sep 9 00:47:09.115188 env[1212]: time="2025-09-09T00:47:09.115151602Z" level=info msg="containerd successfully booted in 0.046171s" Sep 9 00:47:09.375386 systemd-networkd[1041]: eth0: Gained IPv6LL Sep 9 00:47:09.376989 systemd[1]: Finished systemd-networkd-wait-online.service. Sep 9 00:47:09.378000 systemd[1]: Reached target network-online.target. Sep 9 00:47:09.380216 systemd[1]: Starting kubelet.service... Sep 9 00:47:09.946874 systemd[1]: Started kubelet.service. Sep 9 00:47:10.297809 kubelet[1256]: E0909 00:47:10.297711 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:47:10.299750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:47:10.299878 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:47:12.071101 sshd_keygen[1211]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:47:12.089057 systemd[1]: Finished sshd-keygen.service. Sep 9 00:47:12.091126 systemd[1]: Starting issuegen.service... Sep 9 00:47:12.095357 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:47:12.095509 systemd[1]: Finished issuegen.service. Sep 9 00:47:12.097511 systemd[1]: Starting systemd-user-sessions.service... Sep 9 00:47:12.103149 systemd[1]: Finished systemd-user-sessions.service. 
Sep 9 00:47:12.105191 systemd[1]: Started getty@tty1.service. Sep 9 00:47:12.107004 systemd[1]: Started serial-getty@ttyAMA0.service. Sep 9 00:47:12.107834 systemd[1]: Reached target getty.target. Sep 9 00:47:12.108544 systemd[1]: Reached target multi-user.target. Sep 9 00:47:12.110393 systemd[1]: Starting systemd-update-utmp-runlevel.service... Sep 9 00:47:12.116691 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Sep 9 00:47:12.116844 systemd[1]: Finished systemd-update-utmp-runlevel.service. Sep 9 00:47:12.117759 systemd[1]: Startup finished in 526ms (kernel) + 4.241s (initrd) + 6.325s (userspace) = 11.093s. Sep 9 00:47:14.483952 systemd[1]: Created slice system-sshd.slice. Sep 9 00:47:14.485521 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:48098.service. Sep 9 00:47:14.528103 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 48098 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:14.530037 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:14.538504 systemd-logind[1201]: New session 1 of user core. Sep 9 00:47:14.539629 systemd[1]: Created slice user-500.slice. Sep 9 00:47:14.540968 systemd[1]: Starting user-runtime-dir@500.service... Sep 9 00:47:14.548652 systemd[1]: Finished user-runtime-dir@500.service. Sep 9 00:47:14.550049 systemd[1]: Starting user@500.service... Sep 9 00:47:14.552855 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:14.610819 systemd[1281]: Queued start job for default target default.target. Sep 9 00:47:14.611340 systemd[1281]: Reached target paths.target. Sep 9 00:47:14.611371 systemd[1281]: Reached target sockets.target. Sep 9 00:47:14.611382 systemd[1281]: Reached target timers.target. Sep 9 00:47:14.611391 systemd[1281]: Reached target basic.target. Sep 9 00:47:14.611441 systemd[1281]: Reached target default.target. 
Sep 9 00:47:14.611468 systemd[1281]: Startup finished in 53ms. Sep 9 00:47:14.611915 systemd[1]: Started user@500.service. Sep 9 00:47:14.612868 systemd[1]: Started session-1.scope. Sep 9 00:47:14.665550 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:48112.service. Sep 9 00:47:14.714131 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 48112 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:14.715498 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:14.719039 systemd-logind[1201]: New session 2 of user core. Sep 9 00:47:14.720243 systemd[1]: Started session-2.scope. Sep 9 00:47:14.773420 sshd[1290]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:14.776710 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:48126.service. Sep 9 00:47:14.777223 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:48112.service: Deactivated successfully. Sep 9 00:47:14.777848 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:47:14.778372 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:47:14.779051 systemd-logind[1201]: Removed session 2. Sep 9 00:47:14.811231 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 48126 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:14.812314 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:14.817006 systemd[1]: Started session-3.scope. Sep 9 00:47:14.817164 systemd-logind[1201]: New session 3 of user core. Sep 9 00:47:14.867864 sshd[1295]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:14.871708 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:48126.service: Deactivated successfully. Sep 9 00:47:14.872253 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:47:14.873115 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. 
Sep 9 00:47:14.874223 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:48142.service. Sep 9 00:47:14.875447 systemd-logind[1201]: Removed session 3. Sep 9 00:47:14.907939 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 48142 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:14.908965 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:14.912734 systemd-logind[1201]: New session 4 of user core. Sep 9 00:47:14.913206 systemd[1]: Started session-4.scope. Sep 9 00:47:14.966929 sshd[1302]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:14.970490 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:48158.service. Sep 9 00:47:14.970980 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:48142.service: Deactivated successfully. Sep 9 00:47:14.971665 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:47:14.972406 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:47:14.974684 systemd-logind[1201]: Removed session 4. Sep 9 00:47:15.004149 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 48158 ssh2: RSA SHA256:SJL83fEN2Ip3G6nq+SFahxwHER39rSdiWTx9teXxMXo Sep 9 00:47:15.005183 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:47:15.008848 systemd[1]: Started session-5.scope. Sep 9 00:47:15.009357 systemd-logind[1201]: New session 5 of user core. Sep 9 00:47:15.065716 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:47:15.065932 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Sep 9 00:47:15.077595 systemd[1]: Starting coreos-metadata.service... Sep 9 00:47:15.083620 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:47:15.083782 systemd[1]: Finished coreos-metadata.service. Sep 9 00:47:15.489312 systemd[1]: Stopped kubelet.service. Sep 9 00:47:15.491332 systemd[1]: Starting kubelet.service... 
Sep 9 00:47:15.512400 systemd[1]: Reloading. Sep 9 00:47:15.564369 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-09-09T00:47:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Sep 9 00:47:15.564690 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-09-09T00:47:15Z" level=info msg="torcx already run" Sep 9 00:47:15.741676 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Sep 9 00:47:15.741697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Sep 9 00:47:15.757343 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:47:15.821911 systemd[1]: Started kubelet.service. Sep 9 00:47:15.823346 systemd[1]: Stopping kubelet.service... Sep 9 00:47:15.823735 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:47:15.823897 systemd[1]: Stopped kubelet.service. Sep 9 00:47:15.825520 systemd[1]: Starting kubelet.service... Sep 9 00:47:15.918314 systemd[1]: Started kubelet.service. Sep 9 00:47:15.951624 kubelet[1414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:47:15.951624 kubelet[1414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:47:15.951624 kubelet[1414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:47:15.951951 kubelet[1414]: I0909 00:47:15.951706 1414 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:47:16.702533 kubelet[1414]: I0909 00:47:16.702495 1414 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:47:16.702673 kubelet[1414]: I0909 00:47:16.702661 1414 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:47:16.703006 kubelet[1414]: I0909 00:47:16.702987 1414 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:47:16.724393 kubelet[1414]: I0909 00:47:16.724357 1414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:47:16.730181 kubelet[1414]: E0909 00:47:16.730148 1414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:47:16.730181 kubelet[1414]: I0909 00:47:16.730173 1414 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:47:16.733291 kubelet[1414]: I0909 00:47:16.733265 1414 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:47:16.736375 kubelet[1414]: I0909 00:47:16.736333 1414 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:47:16.736812 kubelet[1414]: I0909 00:47:16.736374 1414 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.139","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:47:16.736923 kubelet[1414]: I0909 00:47:16.736887 1414 topology_manager.go:138] "Creating topology manager with none policy" 
Sep 9 00:47:16.736923 kubelet[1414]: I0909 00:47:16.736898 1414 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:47:16.737127 kubelet[1414]: I0909 00:47:16.737106 1414 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:47:16.739769 kubelet[1414]: I0909 00:47:16.739746 1414 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:47:16.739769 kubelet[1414]: I0909 00:47:16.739773 1414 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:47:16.739884 kubelet[1414]: I0909 00:47:16.739793 1414 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:47:16.739884 kubelet[1414]: I0909 00:47:16.739802 1414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:47:16.740161 kubelet[1414]: E0909 00:47:16.740141 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:16.740535 kubelet[1414]: E0909 00:47:16.740484 1414 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:16.742539 kubelet[1414]: I0909 00:47:16.742522 1414 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Sep 9 00:47:16.743251 kubelet[1414]: I0909 00:47:16.743231 1414 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:47:16.743490 kubelet[1414]: W0909 00:47:16.743475 1414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 9 00:47:16.744419 kubelet[1414]: I0909 00:47:16.744399 1414 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:47:16.744529 kubelet[1414]: I0909 00:47:16.744517 1414 server.go:1287] "Started kubelet" Sep 9 00:47:16.758250 kubelet[1414]: I0909 00:47:16.758170 1414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:47:16.759644 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Sep 9 00:47:16.759706 kubelet[1414]: I0909 00:47:16.758655 1414 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:47:16.759706 kubelet[1414]: I0909 00:47:16.758728 1414 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:47:16.759923 kubelet[1414]: I0909 00:47:16.759896 1414 server.go:479] "Adding debug handlers to kubelet server" Sep 9 00:47:16.761059 kubelet[1414]: I0909 00:47:16.761037 1414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:47:16.761117 kubelet[1414]: E0909 00:47:16.761077 1414 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:47:16.761406 kubelet[1414]: I0909 00:47:16.761328 1414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:47:16.761927 kubelet[1414]: E0909 00:47:16.761910 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Sep 9 00:47:16.762092 kubelet[1414]: I0909 00:47:16.762080 1414 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:47:16.762332 kubelet[1414]: I0909 00:47:16.762310 1414 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:47:16.762443 kubelet[1414]: I0909 00:47:16.762431 1414 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:47:16.763316 kubelet[1414]: I0909 00:47:16.763278 1414 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:47:16.763689 kubelet[1414]: I0909 00:47:16.763647 1414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:47:16.765151 kubelet[1414]: I0909 00:47:16.765115 1414 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:47:16.770917 kubelet[1414]: E0909 00:47:16.770891 1414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.139\" not found" node="10.0.0.139" Sep 9 00:47:16.774010 kubelet[1414]: I0909 00:47:16.773991 1414 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:47:16.774047 kubelet[1414]: I0909 00:47:16.774014 1414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:47:16.774047 kubelet[1414]: I0909 00:47:16.774033 1414 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:47:16.851287 
kubelet[1414]: I0909 00:47:16.851256 1414 policy_none.go:49] "None policy: Start" Sep 9 00:47:16.851287 kubelet[1414]: I0909 00:47:16.851283 1414 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:47:16.851287 kubelet[1414]: I0909 00:47:16.851295 1414 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:47:16.856170 systemd[1]: Created slice kubepods.slice. Sep 9 00:47:16.864042 kubelet[1414]: E0909 00:47:16.862305 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.139\" not found" Sep 9 00:47:16.862561 systemd[1]: Created slice kubepods-besteffort.slice. Sep 9 00:47:16.875078 systemd[1]: Created slice kubepods-burstable.slice. Sep 9 00:47:16.876308 kubelet[1414]: I0909 00:47:16.876278 1414 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:47:16.876576 kubelet[1414]: I0909 00:47:16.876561 1414 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:47:16.877122 kubelet[1414]: I0909 00:47:16.877081 1414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:47:16.877596 kubelet[1414]: I0909 00:47:16.877582 1414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:47:16.878124 kubelet[1414]: E0909 00:47:16.878105 1414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:47:16.878171 kubelet[1414]: E0909 00:47:16.878143 1414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.139\" not found" Sep 9 00:47:16.919789 kubelet[1414]: I0909 00:47:16.919734 1414 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Sep 9 00:47:16.920712 kubelet[1414]: I0909 00:47:16.920688 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:47:16.920712 kubelet[1414]: I0909 00:47:16.920710 1414 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:47:16.920796 kubelet[1414]: I0909 00:47:16.920728 1414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:47:16.920796 kubelet[1414]: I0909 00:47:16.920735 1414 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:47:16.920796 kubelet[1414]: E0909 00:47:16.920777 1414 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Sep 9 00:47:16.978269 kubelet[1414]: I0909 00:47:16.978176 1414 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.139" Sep 9 00:47:16.983997 kubelet[1414]: I0909 00:47:16.983967 1414 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.139" Sep 9 00:47:16.992049 kubelet[1414]: I0909 00:47:16.992026 1414 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Sep 9 00:47:16.992365 env[1212]: time="2025-09-09T00:47:16.992323171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:47:16.992603 kubelet[1414]: I0909 00:47:16.992508 1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Sep 9 00:47:17.442178 sudo[1311]: pam_unix(sudo:session): session closed for user root Sep 9 00:47:17.444905 sshd[1307]: pam_unix(sshd:session): session closed for user core Sep 9 00:47:17.447137 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:47:17.447374 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:48158.service: Deactivated successfully. Sep 9 00:47:17.448063 systemd[1]: session-5.scope: Deactivated successfully. 
Sep 9 00:47:17.448651 systemd-logind[1201]: Removed session 5. Sep 9 00:47:17.705158 kubelet[1414]: I0909 00:47:17.705060 1414 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Sep 9 00:47:17.705301 kubelet[1414]: W0909 00:47:17.705261 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 9 00:47:17.705301 kubelet[1414]: W0909 00:47:17.705300 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 9 00:47:17.705717 kubelet[1414]: W0909 00:47:17.705518 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Sep 9 00:47:17.740623 kubelet[1414]: E0909 00:47:17.740594 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:17.740623 kubelet[1414]: I0909 00:47:17.740610 1414 apiserver.go:52] "Watching apiserver" Sep 9 00:47:17.748513 systemd[1]: Created slice kubepods-besteffort-pod7ce787aa_b270_4e92_a778_24362b167111.slice. Sep 9 00:47:17.758884 systemd[1]: Created slice kubepods-burstable-podb87f6de4_c2d1_4be0_b5e7_3db8185fe994.slice. 
Sep 9 00:47:17.763442 kubelet[1414]: I0909 00:47:17.763416 1414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:47:17.768168 kubelet[1414]: I0909 00:47:17.768137 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-bpf-maps\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768168 kubelet[1414]: I0909 00:47:17.768168 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-cgroup\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768269 kubelet[1414]: I0909 00:47:17.768186 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cni-path\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768269 kubelet[1414]: I0909 00:47:17.768202 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-clustermesh-secrets\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768269 kubelet[1414]: I0909 00:47:17.768216 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hubble-tls\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " 
pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768269 kubelet[1414]: I0909 00:47:17.768229 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ce787aa-b270-4e92-a778-24362b167111-xtables-lock\") pod \"kube-proxy-wzg7h\" (UID: \"7ce787aa-b270-4e92-a778-24362b167111\") " pod="kube-system/kube-proxy-wzg7h" Sep 9 00:47:17.768269 kubelet[1414]: I0909 00:47:17.768245 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-kernel\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768388 kubelet[1414]: I0909 00:47:17.768260 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4wv2\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-kube-api-access-k4wv2\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768388 kubelet[1414]: I0909 00:47:17.768275 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-lib-modules\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768388 kubelet[1414]: I0909 00:47:17.768291 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-config-path\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768388 kubelet[1414]: I0909 00:47:17.768312 1414 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ce787aa-b270-4e92-a778-24362b167111-kube-proxy\") pod \"kube-proxy-wzg7h\" (UID: \"7ce787aa-b270-4e92-a778-24362b167111\") " pod="kube-system/kube-proxy-wzg7h" Sep 9 00:47:17.768388 kubelet[1414]: I0909 00:47:17.768328 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ce787aa-b270-4e92-a778-24362b167111-lib-modules\") pod \"kube-proxy-wzg7h\" (UID: \"7ce787aa-b270-4e92-a778-24362b167111\") " pod="kube-system/kube-proxy-wzg7h" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768349 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84dxh\" (UniqueName: \"kubernetes.io/projected/7ce787aa-b270-4e92-a778-24362b167111-kube-api-access-84dxh\") pod \"kube-proxy-wzg7h\" (UID: \"7ce787aa-b270-4e92-a778-24362b167111\") " pod="kube-system/kube-proxy-wzg7h" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768364 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-run\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768378 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hostproc\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768391 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-etc-cni-netd\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768405 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-xtables-lock\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.768485 kubelet[1414]: I0909 00:47:17.768419 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-net\") pod \"cilium-pm7z7\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") " pod="kube-system/cilium-pm7z7" Sep 9 00:47:17.869811 kubelet[1414]: I0909 00:47:17.869761 1414 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 9 00:47:18.057679 kubelet[1414]: E0909 00:47:18.057544 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:18.058576 env[1212]: time="2025-09-09T00:47:18.058515196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzg7h,Uid:7ce787aa-b270-4e92-a778-24362b167111,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:18.069579 kubelet[1414]: E0909 00:47:18.069553 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:18.070128 env[1212]: time="2025-09-09T00:47:18.070067703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pm7z7,Uid:b87f6de4-c2d1-4be0-b5e7-3db8185fe994,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:18.613836 env[1212]: time="2025-09-09T00:47:18.613793304Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.615412 env[1212]: time="2025-09-09T00:47:18.615366451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.616331 env[1212]: time="2025-09-09T00:47:18.616295364Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.618095 env[1212]: time="2025-09-09T00:47:18.618067550Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.620212 env[1212]: time="2025-09-09T00:47:18.620184628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.621157 env[1212]: time="2025-09-09T00:47:18.621130694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.623844 env[1212]: time="2025-09-09T00:47:18.623809297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.625168 env[1212]: time="2025-09-09T00:47:18.625132231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:18.638771 env[1212]: time="2025-09-09T00:47:18.638707779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:18.638771 env[1212]: time="2025-09-09T00:47:18.638748792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:18.638771 env[1212]: time="2025-09-09T00:47:18.638759156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:18.639113 env[1212]: time="2025-09-09T00:47:18.639055565Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0444b72f0bfe2a3354f7e940375ecbdfa1fc187f693c073fc4df61c96df3b459 pid=1478 runtime=io.containerd.runc.v2 Sep 9 00:47:18.639337 env[1212]: time="2025-09-09T00:47:18.639257495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:18.639337 env[1212]: time="2025-09-09T00:47:18.639289269Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:18.639337 env[1212]: time="2025-09-09T00:47:18.639299753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:18.639507 env[1212]: time="2025-09-09T00:47:18.639435887Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0 pid=1480 runtime=io.containerd.runc.v2 Sep 9 00:47:18.654672 systemd[1]: Started cri-containerd-0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0.scope. Sep 9 00:47:18.656862 systemd[1]: Started cri-containerd-0444b72f0bfe2a3354f7e940375ecbdfa1fc187f693c073fc4df61c96df3b459.scope. 
Sep 9 00:47:18.684067 env[1212]: time="2025-09-09T00:47:18.683337750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pm7z7,Uid:b87f6de4-c2d1-4be0-b5e7-3db8185fe994,Namespace:kube-system,Attempt:0,} returns sandbox id \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\"" Sep 9 00:47:18.684212 kubelet[1414]: E0909 00:47:18.684169 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:18.687176 env[1212]: time="2025-09-09T00:47:18.687131853Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 00:47:18.689496 env[1212]: time="2025-09-09T00:47:18.689461065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wzg7h,Uid:7ce787aa-b270-4e92-a778-24362b167111,Namespace:kube-system,Attempt:0,} returns sandbox id \"0444b72f0bfe2a3354f7e940375ecbdfa1fc187f693c073fc4df61c96df3b459\"" Sep 9 00:47:18.690209 kubelet[1414]: E0909 00:47:18.690042 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:18.741153 kubelet[1414]: E0909 00:47:18.741107 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:18.875580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295613077.mount: Deactivated successfully. 
Sep 9 00:47:19.741318 kubelet[1414]: E0909 00:47:19.741290 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:20.741816 kubelet[1414]: E0909 00:47:20.741776 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:21.742045 kubelet[1414]: E0909 00:47:21.741977 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:22.156068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256585654.mount: Deactivated successfully. Sep 9 00:47:22.742600 kubelet[1414]: E0909 00:47:22.742556 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:23.743090 kubelet[1414]: E0909 00:47:23.743050 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:24.371455 env[1212]: time="2025-09-09T00:47:24.371410296Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:24.372558 env[1212]: time="2025-09-09T00:47:24.372531060Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:24.374041 env[1212]: time="2025-09-09T00:47:24.373996317Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:24.375127 env[1212]: time="2025-09-09T00:47:24.375083658Z" level=info 
msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 9 00:47:24.377565 env[1212]: time="2025-09-09T00:47:24.377530013Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 9 00:47:24.378439 env[1212]: time="2025-09-09T00:47:24.378409560Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:47:24.386744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222475182.mount: Deactivated successfully. Sep 9 00:47:24.393793 env[1212]: time="2025-09-09T00:47:24.393751754Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\"" Sep 9 00:47:24.394362 env[1212]: time="2025-09-09T00:47:24.394327685Z" level=info msg="StartContainer for \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\"" Sep 9 00:47:24.409721 systemd[1]: Started cri-containerd-89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2.scope. Sep 9 00:47:24.442957 env[1212]: time="2025-09-09T00:47:24.442910230Z" level=info msg="StartContainer for \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\" returns successfully" Sep 9 00:47:24.447546 systemd[1]: cri-containerd-89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2.scope: Deactivated successfully. 
Sep 9 00:47:24.567495 env[1212]: time="2025-09-09T00:47:24.567450596Z" level=info msg="shim disconnected" id=89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2 Sep 9 00:47:24.567495 env[1212]: time="2025-09-09T00:47:24.567494719Z" level=warning msg="cleaning up after shim disconnected" id=89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2 namespace=k8s.io Sep 9 00:47:24.567743 env[1212]: time="2025-09-09T00:47:24.567506021Z" level=info msg="cleaning up dead shim" Sep 9 00:47:24.574627 env[1212]: time="2025-09-09T00:47:24.574595014Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1596 runtime=io.containerd.runc.v2\n" Sep 9 00:47:24.743182 kubelet[1414]: E0909 00:47:24.743148 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:24.932940 kubelet[1414]: E0909 00:47:24.932906 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:24.934770 env[1212]: time="2025-09-09T00:47:24.934726669Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:47:24.945166 env[1212]: time="2025-09-09T00:47:24.945114634Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\"" Sep 9 00:47:24.945674 env[1212]: time="2025-09-09T00:47:24.945647604Z" level=info msg="StartContainer for \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\"" Sep 9 00:47:24.959129 systemd[1]: Started 
cri-containerd-8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805.scope. Sep 9 00:47:24.987774 env[1212]: time="2025-09-09T00:47:24.987729430Z" level=info msg="StartContainer for \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\" returns successfully" Sep 9 00:47:24.999942 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:47:25.000151 systemd[1]: Stopped systemd-sysctl.service. Sep 9 00:47:25.000524 systemd[1]: Stopping systemd-sysctl.service... Sep 9 00:47:25.001936 systemd[1]: Starting systemd-sysctl.service... Sep 9 00:47:25.002915 systemd[1]: cri-containerd-8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805.scope: Deactivated successfully. Sep 9 00:47:25.008998 systemd[1]: Finished systemd-sysctl.service. Sep 9 00:47:25.021520 env[1212]: time="2025-09-09T00:47:25.021462557Z" level=info msg="shim disconnected" id=8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805 Sep 9 00:47:25.021520 env[1212]: time="2025-09-09T00:47:25.021514323Z" level=warning msg="cleaning up after shim disconnected" id=8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805 namespace=k8s.io Sep 9 00:47:25.021520 env[1212]: time="2025-09-09T00:47:25.021524980Z" level=info msg="cleaning up dead shim" Sep 9 00:47:25.028123 env[1212]: time="2025-09-09T00:47:25.028088506Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1660 runtime=io.containerd.runc.v2\n" Sep 9 00:47:25.385956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2-rootfs.mount: Deactivated successfully. Sep 9 00:47:25.676820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2244655611.mount: Deactivated successfully. 
Sep 9 00:47:25.743594 kubelet[1414]: E0909 00:47:25.743547 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:25.935295 kubelet[1414]: E0909 00:47:25.935203 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:25.937532 env[1212]: time="2025-09-09T00:47:25.937488814Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 00:47:25.947524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1782211368.mount: Deactivated successfully. Sep 9 00:47:25.950328 env[1212]: time="2025-09-09T00:47:25.950280910Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\"" Sep 9 00:47:25.951983 env[1212]: time="2025-09-09T00:47:25.951945190Z" level=info msg="StartContainer for \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\"" Sep 9 00:47:25.967844 systemd[1]: Started cri-containerd-ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865.scope. Sep 9 00:47:25.999377 systemd[1]: cri-containerd-ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865.scope: Deactivated successfully. 
Sep 9 00:47:26.000811 env[1212]: time="2025-09-09T00:47:26.000770528Z" level=info msg="StartContainer for \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\" returns successfully" Sep 9 00:47:26.227840 env[1212]: time="2025-09-09T00:47:26.227725853Z" level=info msg="shim disconnected" id=ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865 Sep 9 00:47:26.227840 env[1212]: time="2025-09-09T00:47:26.227773162Z" level=warning msg="cleaning up after shim disconnected" id=ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865 namespace=k8s.io Sep 9 00:47:26.227840 env[1212]: time="2025-09-09T00:47:26.227785580Z" level=info msg="cleaning up dead shim" Sep 9 00:47:26.234645 env[1212]: time="2025-09-09T00:47:26.234603757Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:47:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1717 runtime=io.containerd.runc.v2\n" Sep 9 00:47:26.238027 env[1212]: time="2025-09-09T00:47:26.237983903Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:26.239649 env[1212]: time="2025-09-09T00:47:26.239602933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:26.241101 env[1212]: time="2025-09-09T00:47:26.241072827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:26.242263 env[1212]: time="2025-09-09T00:47:26.242226701Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 
00:47:26.242587 env[1212]: time="2025-09-09T00:47:26.242555178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Sep 9 00:47:26.246728 env[1212]: time="2025-09-09T00:47:26.246698592Z" level=info msg="CreateContainer within sandbox \"0444b72f0bfe2a3354f7e940375ecbdfa1fc187f693c073fc4df61c96df3b459\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:47:26.256877 env[1212]: time="2025-09-09T00:47:26.256840714Z" level=info msg="CreateContainer within sandbox \"0444b72f0bfe2a3354f7e940375ecbdfa1fc187f693c073fc4df61c96df3b459\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ad23bf2f8dadd8a2a7fd4b86cdbb78c847ac529eb5733cdb487c9dbf5e21ae47\"" Sep 9 00:47:26.257603 env[1212]: time="2025-09-09T00:47:26.257572576Z" level=info msg="StartContainer for \"ad23bf2f8dadd8a2a7fd4b86cdbb78c847ac529eb5733cdb487c9dbf5e21ae47\"" Sep 9 00:47:26.271259 systemd[1]: Started cri-containerd-ad23bf2f8dadd8a2a7fd4b86cdbb78c847ac529eb5733cdb487c9dbf5e21ae47.scope. Sep 9 00:47:26.298037 env[1212]: time="2025-09-09T00:47:26.297986678Z" level=info msg="StartContainer for \"ad23bf2f8dadd8a2a7fd4b86cdbb78c847ac529eb5733cdb487c9dbf5e21ae47\" returns successfully" Sep 9 00:47:26.385727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865-rootfs.mount: Deactivated successfully. 
Sep 9 00:47:26.743708 kubelet[1414]: E0909 00:47:26.743647 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:26.937841 kubelet[1414]: E0909 00:47:26.937799 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:26.939848 kubelet[1414]: E0909 00:47:26.939825 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:26.942448 env[1212]: time="2025-09-09T00:47:26.942409307Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 00:47:26.946778 kubelet[1414]: I0909 00:47:26.946721 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wzg7h" podStartSLOduration=2.39166147 podStartE2EDuration="9.946708587s" podCreationTimestamp="2025-09-09 00:47:17 +0000 UTC" firstStartedPulling="2025-09-09 00:47:18.69042838 +0000 UTC m=+2.768544508" lastFinishedPulling="2025-09-09 00:47:26.245475497 +0000 UTC m=+10.323591625" observedRunningTime="2025-09-09 00:47:26.946662641 +0000 UTC m=+11.024778769" watchObservedRunningTime="2025-09-09 00:47:26.946708587 +0000 UTC m=+11.024824715" Sep 9 00:47:26.953911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount591781699.mount: Deactivated successfully. Sep 9 00:47:26.955620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3990445179.mount: Deactivated successfully. 
Sep 9 00:47:26.956842 env[1212]: time="2025-09-09T00:47:26.956805683Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\"" Sep 9 00:47:26.957459 env[1212]: time="2025-09-09T00:47:26.957417251Z" level=info msg="StartContainer for \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\"" Sep 9 00:47:26.971275 systemd[1]: Started cri-containerd-44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74.scope. Sep 9 00:47:26.995644 systemd[1]: cri-containerd-44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74.scope: Deactivated successfully. Sep 9 00:47:26.996780 env[1212]: time="2025-09-09T00:47:26.996683206Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb87f6de4_c2d1_4be0_b5e7_3db8185fe994.slice/cri-containerd-44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74.scope/memory.events\": no such file or directory" Sep 9 00:47:26.998471 env[1212]: time="2025-09-09T00:47:26.998432866Z" level=info msg="StartContainer for \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\" returns successfully" Sep 9 00:47:27.037268 env[1212]: time="2025-09-09T00:47:27.037223168Z" level=info msg="shim disconnected" id=44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74 Sep 9 00:47:27.037268 env[1212]: time="2025-09-09T00:47:27.037267183Z" level=warning msg="cleaning up after shim disconnected" id=44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74 namespace=k8s.io Sep 9 00:47:27.037268 env[1212]: time="2025-09-09T00:47:27.037276475Z" level=info msg="cleaning up dead shim" Sep 9 00:47:27.043697 env[1212]: time="2025-09-09T00:47:27.043661586Z" level=warning 
msg="cleanup warnings time=\"2025-09-09T00:47:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1941 runtime=io.containerd.runc.v2\n" Sep 9 00:47:27.385150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74-rootfs.mount: Deactivated successfully. Sep 9 00:47:27.744171 kubelet[1414]: E0909 00:47:27.744132 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:27.944127 kubelet[1414]: E0909 00:47:27.944100 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:27.944266 kubelet[1414]: E0909 00:47:27.944163 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:27.945903 env[1212]: time="2025-09-09T00:47:27.945862698Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 00:47:27.962585 env[1212]: time="2025-09-09T00:47:27.962515772Z" level=info msg="CreateContainer within sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\"" Sep 9 00:47:27.962894 env[1212]: time="2025-09-09T00:47:27.962868861Z" level=info msg="StartContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\"" Sep 9 00:47:27.980031 systemd[1]: Started cri-containerd-e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9.scope. 
Sep 9 00:47:28.010664 env[1212]: time="2025-09-09T00:47:28.010255109Z" level=info msg="StartContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" returns successfully" Sep 9 00:47:28.142034 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:47:28.167039 kubelet[1414]: I0909 00:47:28.166282 1414 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:47:28.189672 systemd[1]: Created slice kubepods-burstable-podc93bc4a9_71b5_4450_9f89_2f4c972680ce.slice. Sep 9 00:47:28.193765 systemd[1]: Created slice kubepods-burstable-podef5c99cd_ddb3_43ef_88c5_c8b8fd71f371.slice. Sep 9 00:47:28.234993 kubelet[1414]: I0909 00:47:28.234951 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371-config-volume\") pod \"coredns-668d6bf9bc-2rd8g\" (UID: \"ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371\") " pod="kube-system/coredns-668d6bf9bc-2rd8g" Sep 9 00:47:28.235135 kubelet[1414]: I0909 00:47:28.234996 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlg6t\" (UniqueName: \"kubernetes.io/projected/c93bc4a9-71b5-4450-9f89-2f4c972680ce-kube-api-access-hlg6t\") pod \"coredns-668d6bf9bc-57dkh\" (UID: \"c93bc4a9-71b5-4450-9f89-2f4c972680ce\") " pod="kube-system/coredns-668d6bf9bc-57dkh" Sep 9 00:47:28.235135 kubelet[1414]: I0909 00:47:28.235053 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c93bc4a9-71b5-4450-9f89-2f4c972680ce-config-volume\") pod \"coredns-668d6bf9bc-57dkh\" (UID: \"c93bc4a9-71b5-4450-9f89-2f4c972680ce\") " pod="kube-system/coredns-668d6bf9bc-57dkh" Sep 9 00:47:28.235135 kubelet[1414]: I0909 00:47:28.235073 1414 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jjj7\" (UniqueName: \"kubernetes.io/projected/ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371-kube-api-access-4jjj7\") pod \"coredns-668d6bf9bc-2rd8g\" (UID: \"ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371\") " pod="kube-system/coredns-668d6bf9bc-2rd8g" Sep 9 00:47:28.384026 kernel: Initializing XFRM netlink socket Sep 9 00:47:28.387024 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Sep 9 00:47:28.491862 kubelet[1414]: E0909 00:47:28.491823 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:28.492581 env[1212]: time="2025-09-09T00:47:28.492541904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57dkh,Uid:c93bc4a9-71b5-4450-9f89-2f4c972680ce,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:28.496341 kubelet[1414]: E0909 00:47:28.496317 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:28.496830 env[1212]: time="2025-09-09T00:47:28.496799157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2rd8g,Uid:ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371,Namespace:kube-system,Attempt:0,}" Sep 9 00:47:28.744729 kubelet[1414]: E0909 00:47:28.744686 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:28.948478 kubelet[1414]: E0909 00:47:28.948074 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:29.745512 kubelet[1414]: E0909 00:47:29.745450 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Sep 9 00:47:29.949241 kubelet[1414]: E0909 00:47:29.949218 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:29.987814 systemd-networkd[1041]: cilium_host: Link UP Sep 9 00:47:29.987923 systemd-networkd[1041]: cilium_net: Link UP Sep 9 00:47:29.989245 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Sep 9 00:47:29.989316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Sep 9 00:47:29.988717 systemd-networkd[1041]: cilium_net: Gained carrier Sep 9 00:47:29.989257 systemd-networkd[1041]: cilium_host: Gained carrier Sep 9 00:47:30.061206 systemd-networkd[1041]: cilium_vxlan: Link UP Sep 9 00:47:30.061214 systemd-networkd[1041]: cilium_vxlan: Gained carrier Sep 9 00:47:30.215157 systemd-networkd[1041]: cilium_net: Gained IPv6LL Sep 9 00:47:30.302037 kernel: NET: Registered PF_ALG protocol family Sep 9 00:47:30.423182 systemd-networkd[1041]: cilium_host: Gained IPv6LL Sep 9 00:47:30.746078 kubelet[1414]: E0909 00:47:30.745966 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:30.842420 systemd-networkd[1041]: lxc_health: Link UP Sep 9 00:47:30.849942 systemd-networkd[1041]: lxc_health: Gained carrier Sep 9 00:47:30.850095 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Sep 9 00:47:30.950370 kubelet[1414]: E0909 00:47:30.950329 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:31.036684 systemd-networkd[1041]: lxc3b2bd1710f75: Link UP Sep 9 00:47:31.046043 kernel: eth0: renamed from tmp2f14e Sep 9 00:47:31.054968 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Sep 9 00:47:31.055065 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc3b2bd1710f75: link becomes ready Sep 9 00:47:31.054554 systemd-networkd[1041]: lxc3b2bd1710f75: Gained carrier Sep 9 00:47:31.060666 systemd-networkd[1041]: lxca2733ccd2ee0: Link UP Sep 9 00:47:31.071049 kernel: eth0: renamed from tmpa26fe Sep 9 00:47:31.079487 systemd-networkd[1041]: lxca2733ccd2ee0: Gained carrier Sep 9 00:47:31.080019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca2733ccd2ee0: link becomes ready Sep 9 00:47:31.475766 kubelet[1414]: I0909 00:47:31.475699 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pm7z7" podStartSLOduration=8.783433789 podStartE2EDuration="14.475681557s" podCreationTimestamp="2025-09-09 00:47:17 +0000 UTC" firstStartedPulling="2025-09-09 00:47:18.684861127 +0000 UTC m=+2.762977255" lastFinishedPulling="2025-09-09 00:47:24.377108895 +0000 UTC m=+8.455225023" observedRunningTime="2025-09-09 00:47:28.963983243 +0000 UTC m=+13.042099371" watchObservedRunningTime="2025-09-09 00:47:31.475681557 +0000 UTC m=+15.553797685" Sep 9 00:47:31.746970 kubelet[1414]: E0909 00:47:31.746852 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:31.775151 systemd-networkd[1041]: cilium_vxlan: Gained IPv6LL Sep 9 00:47:31.952286 kubelet[1414]: E0909 00:47:31.952118 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:32.223241 systemd-networkd[1041]: lxc_health: Gained IPv6LL Sep 9 00:47:32.479127 systemd-networkd[1041]: lxca2733ccd2ee0: Gained IPv6LL Sep 9 00:47:32.748109 kubelet[1414]: E0909 00:47:32.747751 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:32.927135 systemd-networkd[1041]: lxc3b2bd1710f75: Gained IPv6LL Sep 9 00:47:32.953208 kubelet[1414]: E0909 00:47:32.953177 
1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:33.748272 kubelet[1414]: E0909 00:47:33.748207 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:34.490090 env[1212]: time="2025-09-09T00:47:34.489916597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:34.490090 env[1212]: time="2025-09-09T00:47:34.489951295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:34.490090 env[1212]: time="2025-09-09T00:47:34.489961420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:34.490549 env[1212]: time="2025-09-09T00:47:34.490471834Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a26fed3711fcacb4ae09bc287b490a3042d0a76b72c888d630c2ed21d4bdf340 pid=2505 runtime=io.containerd.runc.v2 Sep 9 00:47:34.490889 env[1212]: time="2025-09-09T00:47:34.490838898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:47:34.490889 env[1212]: time="2025-09-09T00:47:34.490870273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:47:34.490889 env[1212]: time="2025-09-09T00:47:34.490880398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:47:34.491098 env[1212]: time="2025-09-09T00:47:34.491064410Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f14e3866700276ec13abf77d19decaf5e307e794732c7a3c5c45dd418f71ea5 pid=2513 runtime=io.containerd.runc.v2 Sep 9 00:47:34.505359 systemd[1]: Started cri-containerd-a26fed3711fcacb4ae09bc287b490a3042d0a76b72c888d630c2ed21d4bdf340.scope. Sep 9 00:47:34.507315 systemd[1]: Started cri-containerd-2f14e3866700276ec13abf77d19decaf5e307e794732c7a3c5c45dd418f71ea5.scope. Sep 9 00:47:34.523883 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:47:34.527811 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:47:34.545647 env[1212]: time="2025-09-09T00:47:34.545610401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2rd8g,Uid:ef5c99cd-ddb3-43ef-88c5-c8b8fd71f371,Namespace:kube-system,Attempt:0,} returns sandbox id \"a26fed3711fcacb4ae09bc287b490a3042d0a76b72c888d630c2ed21d4bdf340\"" Sep 9 00:47:34.545859 env[1212]: time="2025-09-09T00:47:34.545677475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-57dkh,Uid:c93bc4a9-71b5-4450-9f89-2f4c972680ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f14e3866700276ec13abf77d19decaf5e307e794732c7a3c5c45dd418f71ea5\"" Sep 9 00:47:34.546520 kubelet[1414]: E0909 00:47:34.546499 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:34.546596 kubelet[1414]: E0909 00:47:34.546537 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Sep 9 00:47:34.547308 env[1212]: time="2025-09-09T00:47:34.547281756Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:47:34.749298 kubelet[1414]: E0909 00:47:34.749173 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:35.749616 kubelet[1414]: E0909 00:47:35.749563 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:35.957713 env[1212]: time="2025-09-09T00:47:35.957669789Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:35.959452 env[1212]: time="2025-09-09T00:47:35.959413631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:35.961213 env[1212]: time="2025-09-09T00:47:35.961189887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:35.962861 env[1212]: time="2025-09-09T00:47:35.962832885Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:35.964476 env[1212]: time="2025-09-09T00:47:35.964443908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:47:35.965286 env[1212]: time="2025-09-09T00:47:35.965257104Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:47:35.966377 env[1212]: 
time="2025-09-09T00:47:35.966325410Z" level=info msg="CreateContainer within sandbox \"2f14e3866700276ec13abf77d19decaf5e307e794732c7a3c5c45dd418f71ea5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:47:35.974534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3039897145.mount: Deactivated successfully. Sep 9 00:47:35.977616 env[1212]: time="2025-09-09T00:47:35.977584129Z" level=info msg="CreateContainer within sandbox \"2f14e3866700276ec13abf77d19decaf5e307e794732c7a3c5c45dd418f71ea5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"10a050143d23dd497ca9bedb7fb9040d299bb9b3f6c790ea4c2e5b24732c189f\"" Sep 9 00:47:35.978503 env[1212]: time="2025-09-09T00:47:35.978458751Z" level=info msg="StartContainer for \"10a050143d23dd497ca9bedb7fb9040d299bb9b3f6c790ea4c2e5b24732c189f\"" Sep 9 00:47:35.993730 systemd[1]: Started cri-containerd-10a050143d23dd497ca9bedb7fb9040d299bb9b3f6c790ea4c2e5b24732c189f.scope. Sep 9 00:47:36.027412 env[1212]: time="2025-09-09T00:47:36.027324061Z" level=info msg="StartContainer for \"10a050143d23dd497ca9bedb7fb9040d299bb9b3f6c790ea4c2e5b24732c189f\" returns successfully" Sep 9 00:47:36.043813 env[1212]: time="2025-09-09T00:47:36.043724851Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:36.045903 env[1212]: time="2025-09-09T00:47:36.045872832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:36.049374 env[1212]: time="2025-09-09T00:47:36.049336035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:36.051134 env[1212]: 
time="2025-09-09T00:47:36.051086985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:47:36.052763 env[1212]: time="2025-09-09T00:47:36.052718528Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:47:36.054750 env[1212]: time="2025-09-09T00:47:36.054703927Z" level=info msg="CreateContainer within sandbox \"a26fed3711fcacb4ae09bc287b490a3042d0a76b72c888d630c2ed21d4bdf340\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:47:36.065112 env[1212]: time="2025-09-09T00:47:36.065071050Z" level=info msg="CreateContainer within sandbox \"a26fed3711fcacb4ae09bc287b490a3042d0a76b72c888d630c2ed21d4bdf340\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31d60b1bdb60e03a15ac768ae6d89794e1914376dfcc1a4b8fea00d6a6ee552d\"" Sep 9 00:47:36.065478 env[1212]: time="2025-09-09T00:47:36.065440272Z" level=info msg="StartContainer for \"31d60b1bdb60e03a15ac768ae6d89794e1914376dfcc1a4b8fea00d6a6ee552d\"" Sep 9 00:47:36.082192 systemd[1]: Started cri-containerd-31d60b1bdb60e03a15ac768ae6d89794e1914376dfcc1a4b8fea00d6a6ee552d.scope. 
Sep 9 00:47:36.108796 env[1212]: time="2025-09-09T00:47:36.108753749Z" level=info msg="StartContainer for \"31d60b1bdb60e03a15ac768ae6d89794e1914376dfcc1a4b8fea00d6a6ee552d\" returns successfully" Sep 9 00:47:36.739929 kubelet[1414]: E0909 00:47:36.739881 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:36.750520 kubelet[1414]: E0909 00:47:36.750493 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:36.961722 kubelet[1414]: E0909 00:47:36.961695 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:36.963939 kubelet[1414]: E0909 00:47:36.963894 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:36.971855 kubelet[1414]: I0909 00:47:36.971800 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2rd8g" podStartSLOduration=40.465323099 podStartE2EDuration="41.971780335s" podCreationTimestamp="2025-09-09 00:46:55 +0000 UTC" firstStartedPulling="2025-09-09 00:47:34.546996053 +0000 UTC m=+18.625112181" lastFinishedPulling="2025-09-09 00:47:36.053453289 +0000 UTC m=+20.131569417" observedRunningTime="2025-09-09 00:47:36.971396109 +0000 UTC m=+21.049512237" watchObservedRunningTime="2025-09-09 00:47:36.971780335 +0000 UTC m=+21.049896463" Sep 9 00:47:36.980843 kubelet[1414]: I0909 00:47:36.980777 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-57dkh" podStartSLOduration=40.562738942 podStartE2EDuration="41.980762129s" podCreationTimestamp="2025-09-09 00:46:55 +0000 UTC" firstStartedPulling="2025-09-09 00:47:34.54698993 +0000 UTC 
m=+18.625106058" lastFinishedPulling="2025-09-09 00:47:35.965013117 +0000 UTC m=+20.043129245" observedRunningTime="2025-09-09 00:47:36.98026922 +0000 UTC m=+21.058385429" watchObservedRunningTime="2025-09-09 00:47:36.980762129 +0000 UTC m=+21.058878257" Sep 9 00:47:37.750834 kubelet[1414]: E0909 00:47:37.750801 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:37.965637 kubelet[1414]: E0909 00:47:37.965613 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:37.965801 kubelet[1414]: E0909 00:47:37.965645 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:38.751605 kubelet[1414]: E0909 00:47:38.751569 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:38.966900 kubelet[1414]: E0909 00:47:38.966862 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:38.967237 kubelet[1414]: E0909 00:47:38.967216 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:47:39.752871 kubelet[1414]: E0909 00:47:39.752835 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:40.754378 kubelet[1414]: E0909 00:47:40.754320 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:47:41.755359 kubelet[1414]: E0909 00:47:41.755324 1414 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:42.756213 kubelet[1414]: E0909 00:47:42.756168 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:43.756849 kubelet[1414]: E0909 00:47:43.756811 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:44.757518 kubelet[1414]: E0909 00:47:44.757477 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:45.758472 kubelet[1414]: E0909 00:47:45.758416 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:46.759350 kubelet[1414]: E0909 00:47:46.759304 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:47.760001 kubelet[1414]: E0909 00:47:47.759942 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:48.761090 kubelet[1414]: E0909 00:47:48.761050 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:49.762627 kubelet[1414]: E0909 00:47:49.762593 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:50.763061 kubelet[1414]: E0909 00:47:50.763024 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:51.764556 kubelet[1414]: E0909 00:47:51.764521 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:52.765942 kubelet[1414]: E0909 00:47:52.765893 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:53.766651 kubelet[1414]: E0909 00:47:53.766603 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:54.027837 update_engine[1203]: I0909 00:47:54.027525  1203 update_attempter.cc:509] Updating boot flags...
Sep 9 00:47:54.032912 systemd[1]: Created slice kubepods-besteffort-podb54b4eaa_f54e_45b4_975a_1f2116d2be16.slice.
Sep 9 00:47:54.079391 kubelet[1414]: I0909 00:47:54.079354 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rq9w\" (UniqueName: \"kubernetes.io/projected/b54b4eaa-f54e-45b4-975a-1f2116d2be16-kube-api-access-9rq9w\") pod \"nginx-deployment-7fcdb87857-2brrl\" (UID: \"b54b4eaa-f54e-45b4-975a-1f2116d2be16\") " pod="default/nginx-deployment-7fcdb87857-2brrl"
Sep 9 00:47:54.336055 env[1212]: time="2025-09-09T00:47:54.335748012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2brrl,Uid:b54b4eaa-f54e-45b4-975a-1f2116d2be16,Namespace:default,Attempt:0,}"
Sep 9 00:47:54.358272 systemd-networkd[1041]: lxc6ae6ca7c72e6: Link UP
Sep 9 00:47:54.365038 kernel: eth0: renamed from tmpf6987
Sep 9 00:47:54.372687 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 9 00:47:54.372764 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6ae6ca7c72e6: link becomes ready
Sep 9 00:47:54.372872 systemd-networkd[1041]: lxc6ae6ca7c72e6: Gained carrier
Sep 9 00:47:54.503799 env[1212]: time="2025-09-09T00:47:54.503742955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:47:54.503963 env[1212]: time="2025-09-09T00:47:54.503781279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:47:54.503963 env[1212]: time="2025-09-09T00:47:54.503791920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:47:54.503963 env[1212]: time="2025-09-09T00:47:54.503906090Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6987febd7aca8c6e2c0a9245cdd806be89f5034955e5c6d5e1ec472f1f64947 pid=2706 runtime=io.containerd.runc.v2
Sep 9 00:47:54.517891 systemd[1]: Started cri-containerd-f6987febd7aca8c6e2c0a9245cdd806be89f5034955e5c6d5e1ec472f1f64947.scope.
Sep 9 00:47:54.534967 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:47:54.551357 env[1212]: time="2025-09-09T00:47:54.551317353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-2brrl,Uid:b54b4eaa-f54e-45b4-975a-1f2116d2be16,Namespace:default,Attempt:0,} returns sandbox id \"f6987febd7aca8c6e2c0a9245cdd806be89f5034955e5c6d5e1ec472f1f64947\""
Sep 9 00:47:54.552494 env[1212]: time="2025-09-09T00:47:54.552466535Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 9 00:47:54.767295 kubelet[1414]: E0909 00:47:54.767241 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:55.767632 kubelet[1414]: E0909 00:47:55.767582 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:56.287125 systemd-networkd[1041]: lxc6ae6ca7c72e6: Gained IPv6LL
Sep 9 00:47:56.673954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916865502.mount: Deactivated successfully.
Sep 9 00:47:56.740895 kubelet[1414]: E0909 00:47:56.740855 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:56.768406 kubelet[1414]: E0909 00:47:56.768377 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:57.769285 kubelet[1414]: E0909 00:47:57.769240 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:57.857915 env[1212]: time="2025-09-09T00:47:57.857861657Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:47:57.861492 env[1212]: time="2025-09-09T00:47:57.861448969Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:47:57.863210 env[1212]: time="2025-09-09T00:47:57.863185981Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:47:57.865481 env[1212]: time="2025-09-09T00:47:57.865451394Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:47:57.866202 env[1212]: time="2025-09-09T00:47:57.866175929Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 9 00:47:57.868323 env[1212]: time="2025-09-09T00:47:57.868291730Z" level=info msg="CreateContainer within sandbox \"f6987febd7aca8c6e2c0a9245cdd806be89f5034955e5c6d5e1ec472f1f64947\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Sep 9 00:47:57.877513 env[1212]: time="2025-09-09T00:47:57.877473908Z" level=info msg="CreateContainer within sandbox \"f6987febd7aca8c6e2c0a9245cdd806be89f5034955e5c6d5e1ec472f1f64947\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d1018c7eab4b6a5e16f98e71d5c0b63ce689b664d4b342980f5463f833aa6619\""
Sep 9 00:47:57.877963 env[1212]: time="2025-09-09T00:47:57.877931302Z" level=info msg="StartContainer for \"d1018c7eab4b6a5e16f98e71d5c0b63ce689b664d4b342980f5463f833aa6619\""
Sep 9 00:47:57.894798 systemd[1]: Started cri-containerd-d1018c7eab4b6a5e16f98e71d5c0b63ce689b664d4b342980f5463f833aa6619.scope.
Sep 9 00:47:57.924829 env[1212]: time="2025-09-09T00:47:57.924789905Z" level=info msg="StartContainer for \"d1018c7eab4b6a5e16f98e71d5c0b63ce689b664d4b342980f5463f833aa6619\" returns successfully"
Sep 9 00:47:58.003801 kubelet[1414]: I0909 00:47:58.003533 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-2brrl" podStartSLOduration=0.688369374 podStartE2EDuration="4.003516521s" podCreationTimestamp="2025-09-09 00:47:54 +0000 UTC" firstStartedPulling="2025-09-09 00:47:54.552153827 +0000 UTC m=+38.630269955" lastFinishedPulling="2025-09-09 00:47:57.867300974 +0000 UTC m=+41.945417102" observedRunningTime="2025-09-09 00:47:58.00323058 +0000 UTC m=+42.081346708" watchObservedRunningTime="2025-09-09 00:47:58.003516521 +0000 UTC m=+42.081632609"
Sep 9 00:47:58.770045 kubelet[1414]: E0909 00:47:58.769987 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:47:58.875411 systemd[1]: run-containerd-runc-k8s.io-d1018c7eab4b6a5e16f98e71d5c0b63ce689b664d4b342980f5463f833aa6619-runc.aS8VHI.mount: Deactivated successfully.
Sep 9 00:47:59.770947 kubelet[1414]: E0909 00:47:59.770899 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:00.205817 systemd[1]: Created slice kubepods-besteffort-podea5ee6ee_921b_40e6_9090_cdedee11076f.slice.
Sep 9 00:48:00.213473 kubelet[1414]: I0909 00:48:00.213434 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dg87\" (UniqueName: \"kubernetes.io/projected/ea5ee6ee-921b-40e6-9090-cdedee11076f-kube-api-access-9dg87\") pod \"nfs-server-provisioner-0\" (UID: \"ea5ee6ee-921b-40e6-9090-cdedee11076f\") " pod="default/nfs-server-provisioner-0"
Sep 9 00:48:00.213473 kubelet[1414]: I0909 00:48:00.213474 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ea5ee6ee-921b-40e6-9090-cdedee11076f-data\") pod \"nfs-server-provisioner-0\" (UID: \"ea5ee6ee-921b-40e6-9090-cdedee11076f\") " pod="default/nfs-server-provisioner-0"
Sep 9 00:48:00.508982 env[1212]: time="2025-09-09T00:48:00.508877943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea5ee6ee-921b-40e6-9090-cdedee11076f,Namespace:default,Attempt:0,}"
Sep 9 00:48:00.537457 systemd-networkd[1041]: lxceefdfcd23529: Link UP
Sep 9 00:48:00.547052 kernel: eth0: renamed from tmp7f7f2
Sep 9 00:48:00.553106 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 9 00:48:00.553182 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceefdfcd23529: link becomes ready
Sep 9 00:48:00.553803 systemd-networkd[1041]: lxceefdfcd23529: Gained carrier
Sep 9 00:48:00.686157 env[1212]: time="2025-09-09T00:48:00.686093631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:48:00.686312 env[1212]: time="2025-09-09T00:48:00.686132674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:48:00.686312 env[1212]: time="2025-09-09T00:48:00.686142994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:48:00.686380 env[1212]: time="2025-09-09T00:48:00.686303845Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f7f227f04424bcd2d975c00355fdd5e15f6df8aa43e952af847496aaebecfcf pid=2838 runtime=io.containerd.runc.v2
Sep 9 00:48:00.699750 systemd[1]: Started cri-containerd-7f7f227f04424bcd2d975c00355fdd5e15f6df8aa43e952af847496aaebecfcf.scope.
Sep 9 00:48:00.714496 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:48:00.731461 env[1212]: time="2025-09-09T00:48:00.731422901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea5ee6ee-921b-40e6-9090-cdedee11076f,Namespace:default,Attempt:0,} returns sandbox id \"7f7f227f04424bcd2d975c00355fdd5e15f6df8aa43e952af847496aaebecfcf\""
Sep 9 00:48:00.732786 env[1212]: time="2025-09-09T00:48:00.732759789Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Sep 9 00:48:00.771595 kubelet[1414]: E0909 00:48:00.771493 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:01.771907 kubelet[1414]: E0909 00:48:01.771834 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:01.983146 systemd-networkd[1041]: lxceefdfcd23529: Gained IPv6LL
Sep 9 00:48:02.772997 kubelet[1414]: E0909 00:48:02.772951 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:02.832903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1437664944.mount: Deactivated successfully.
Sep 9 00:48:03.773848 kubelet[1414]: E0909 00:48:03.773803 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:04.592097 env[1212]: time="2025-09-09T00:48:04.592053761Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:04.593410 env[1212]: time="2025-09-09T00:48:04.593383355Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:04.595086 env[1212]: time="2025-09-09T00:48:04.595057967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:04.596559 env[1212]: time="2025-09-09T00:48:04.596533369Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:04.598044 env[1212]: time="2025-09-09T00:48:04.598013330Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Sep 9 00:48:04.600942 env[1212]: time="2025-09-09T00:48:04.600915171Z" level=info msg="CreateContainer within sandbox \"7f7f227f04424bcd2d975c00355fdd5e15f6df8aa43e952af847496aaebecfcf\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Sep 9 00:48:04.608835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3793690198.mount: Deactivated successfully.
Sep 9 00:48:04.612519 env[1212]: time="2025-09-09T00:48:04.612479409Z" level=info msg="CreateContainer within sandbox \"7f7f227f04424bcd2d975c00355fdd5e15f6df8aa43e952af847496aaebecfcf\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"0c5f77139aef54078aba0a41c0788f8a0341f7a4d01ff1ef97f660986b8acafd\""
Sep 9 00:48:04.613281 env[1212]: time="2025-09-09T00:48:04.613241811Z" level=info msg="StartContainer for \"0c5f77139aef54078aba0a41c0788f8a0341f7a4d01ff1ef97f660986b8acafd\""
Sep 9 00:48:04.631980 systemd[1]: Started cri-containerd-0c5f77139aef54078aba0a41c0788f8a0341f7a4d01ff1ef97f660986b8acafd.scope.
Sep 9 00:48:04.658383 env[1212]: time="2025-09-09T00:48:04.658344062Z" level=info msg="StartContainer for \"0c5f77139aef54078aba0a41c0788f8a0341f7a4d01ff1ef97f660986b8acafd\" returns successfully"
Sep 9 00:48:04.774295 kubelet[1414]: E0909 00:48:04.774230 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:05.775376 kubelet[1414]: E0909 00:48:05.775340 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:06.776842 kubelet[1414]: E0909 00:48:06.776788 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:07.777109 kubelet[1414]: E0909 00:48:07.777068 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:08.777429 kubelet[1414]: E0909 00:48:08.777393 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:09.778198 kubelet[1414]: E0909 00:48:09.778150 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:10.778888 kubelet[1414]: E0909 00:48:10.778844 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:11.779376 kubelet[1414]: E0909 00:48:11.779335 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:12.780052 kubelet[1414]: E0909 00:48:12.780017 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:13.780866 kubelet[1414]: E0909 00:48:13.780828 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:14.002918 kubelet[1414]: I0909 00:48:14.002836 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.136698286 podStartE2EDuration="14.002820719s" podCreationTimestamp="2025-09-09 00:48:00 +0000 UTC" firstStartedPulling="2025-09-09 00:48:00.732492891 +0000 UTC m=+44.810609019" lastFinishedPulling="2025-09-09 00:48:04.598615324 +0000 UTC m=+48.676731452" observedRunningTime="2025-09-09 00:48:05.020930931 +0000 UTC m=+49.099047059" watchObservedRunningTime="2025-09-09 00:48:14.002820719 +0000 UTC m=+58.080936807"
Sep 9 00:48:14.007655 systemd[1]: Created slice kubepods-besteffort-pod034db611_68b8_4684_8bf5_d0e0715b5e52.slice.
Sep 9 00:48:14.095106 kubelet[1414]: I0909 00:48:14.094724 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d15b48f-9b74-4bce-8ef6-49661bd3b5b2\" (UniqueName: \"kubernetes.io/nfs/034db611-68b8-4684-8bf5-d0e0715b5e52-pvc-6d15b48f-9b74-4bce-8ef6-49661bd3b5b2\") pod \"test-pod-1\" (UID: \"034db611-68b8-4684-8bf5-d0e0715b5e52\") " pod="default/test-pod-1"
Sep 9 00:48:14.095273 kubelet[1414]: I0909 00:48:14.095253 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhrp\" (UniqueName: \"kubernetes.io/projected/034db611-68b8-4684-8bf5-d0e0715b5e52-kube-api-access-8qhrp\") pod \"test-pod-1\" (UID: \"034db611-68b8-4684-8bf5-d0e0715b5e52\") " pod="default/test-pod-1"
Sep 9 00:48:14.216030 kernel: FS-Cache: Loaded
Sep 9 00:48:14.244118 kernel: RPC: Registered named UNIX socket transport module.
Sep 9 00:48:14.244200 kernel: RPC: Registered udp transport module.
Sep 9 00:48:14.244222 kernel: RPC: Registered tcp transport module.
Sep 9 00:48:14.245149 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Sep 9 00:48:14.289042 kernel: FS-Cache: Netfs 'nfs' registered for caching
Sep 9 00:48:14.420042 kernel: NFS: Registering the id_resolver key type
Sep 9 00:48:14.420175 kernel: Key type id_resolver registered
Sep 9 00:48:14.421054 kernel: Key type id_legacy registered
Sep 9 00:48:14.441791 nfsidmap[2956]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Sep 9 00:48:14.444752 nfsidmap[2959]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Sep 9 00:48:14.610332 env[1212]: time="2025-09-09T00:48:14.610281913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:034db611-68b8-4684-8bf5-d0e0715b5e52,Namespace:default,Attempt:0,}"
Sep 9 00:48:14.684154 systemd-networkd[1041]: lxc56d35485cfd3: Link UP
Sep 9 00:48:14.690041 kernel: eth0: renamed from tmp5c59b
Sep 9 00:48:14.698142 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 9 00:48:14.698214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc56d35485cfd3: link becomes ready
Sep 9 00:48:14.698184 systemd-networkd[1041]: lxc56d35485cfd3: Gained carrier
Sep 9 00:48:14.782217 kubelet[1414]: E0909 00:48:14.782155 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:14.892700 env[1212]: time="2025-09-09T00:48:14.892627140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 9 00:48:14.892851 env[1212]: time="2025-09-09T00:48:14.892704183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 9 00:48:14.892851 env[1212]: time="2025-09-09T00:48:14.892731864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 9 00:48:14.892993 env[1212]: time="2025-09-09T00:48:14.892961312Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c59b73bc814812ffef3f61aef7c9cd91f30cd2e4cfa8b33822dcf19ba64b768 pid=2995 runtime=io.containerd.runc.v2
Sep 9 00:48:14.903770 systemd[1]: Started cri-containerd-5c59b73bc814812ffef3f61aef7c9cd91f30cd2e4cfa8b33822dcf19ba64b768.scope.
Sep 9 00:48:14.919793 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 00:48:14.933281 env[1212]: time="2025-09-09T00:48:14.933233519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:034db611-68b8-4684-8bf5-d0e0715b5e52,Namespace:default,Attempt:0,} returns sandbox id \"5c59b73bc814812ffef3f61aef7c9cd91f30cd2e4cfa8b33822dcf19ba64b768\""
Sep 9 00:48:14.934938 env[1212]: time="2025-09-09T00:48:14.934431205Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Sep 9 00:48:15.167340 env[1212]: time="2025-09-09T00:48:15.167299636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:15.168700 env[1212]: time="2025-09-09T00:48:15.168669127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:15.170272 env[1212]: time="2025-09-09T00:48:15.170247585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:15.172124 env[1212]: time="2025-09-09T00:48:15.172095972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:883ca821a91fc20bcde818eeee4e1ed55ef63a020d6198ecd5a03af5a4eac530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Sep 9 00:48:15.172898 env[1212]: time="2025-09-09T00:48:15.172871841Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:9fddf21fd9c2634e7bf6e633e36b0fb227f6cd5fbe1b3334a16de3ab50f31e5e\""
Sep 9 00:48:15.175229 env[1212]: time="2025-09-09T00:48:15.175199606Z" level=info msg="CreateContainer within sandbox \"5c59b73bc814812ffef3f61aef7c9cd91f30cd2e4cfa8b33822dcf19ba64b768\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Sep 9 00:48:15.187279 env[1212]: time="2025-09-09T00:48:15.186897756Z" level=info msg="CreateContainer within sandbox \"5c59b73bc814812ffef3f61aef7c9cd91f30cd2e4cfa8b33822dcf19ba64b768\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a678e91647429231bb2d442ba8082374b8a62783ac4327e2182cbdc0b85ee8a1\""
Sep 9 00:48:15.187778 env[1212]: time="2025-09-09T00:48:15.187753227Z" level=info msg="StartContainer for \"a678e91647429231bb2d442ba8082374b8a62783ac4327e2182cbdc0b85ee8a1\""
Sep 9 00:48:15.201580 systemd[1]: Started cri-containerd-a678e91647429231bb2d442ba8082374b8a62783ac4327e2182cbdc0b85ee8a1.scope.
Sep 9 00:48:15.232504 env[1212]: time="2025-09-09T00:48:15.232352505Z" level=info msg="StartContainer for \"a678e91647429231bb2d442ba8082374b8a62783ac4327e2182cbdc0b85ee8a1\" returns successfully"
Sep 9 00:48:15.782981 kubelet[1414]: E0909 00:48:15.782934 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:16.037464 kubelet[1414]: I0909 00:48:16.037348 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.797017129 podStartE2EDuration="16.037334795s" podCreationTimestamp="2025-09-09 00:48:00 +0000 UTC" firstStartedPulling="2025-09-09 00:48:14.93378058 +0000 UTC m=+59.011896708" lastFinishedPulling="2025-09-09 00:48:15.174098246 +0000 UTC m=+59.252214374" observedRunningTime="2025-09-09 00:48:16.036627809 +0000 UTC m=+60.114743937" watchObservedRunningTime="2025-09-09 00:48:16.037334795 +0000 UTC m=+60.115450883"
Sep 9 00:48:16.383155 systemd-networkd[1041]: lxc56d35485cfd3: Gained IPv6LL
Sep 9 00:48:16.740357 kubelet[1414]: E0909 00:48:16.740317 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:16.783900 kubelet[1414]: E0909 00:48:16.783854 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:17.769148 systemd[1]: run-containerd-runc-k8s.io-e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9-runc.Y9ui5D.mount: Deactivated successfully.
Sep 9 00:48:17.784341 kubelet[1414]: E0909 00:48:17.784304 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:17.786647 env[1212]: time="2025-09-09T00:48:17.786592543Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:48:17.791915 env[1212]: time="2025-09-09T00:48:17.791882166Z" level=info msg="StopContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" with timeout 2 (s)"
Sep 9 00:48:17.792283 env[1212]: time="2025-09-09T00:48:17.792258499Z" level=info msg="Stop container \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" with signal terminated"
Sep 9 00:48:17.797587 systemd-networkd[1041]: lxc_health: Link DOWN
Sep 9 00:48:17.797592 systemd-networkd[1041]: lxc_health: Lost carrier
Sep 9 00:48:17.835336 systemd[1]: cri-containerd-e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9.scope: Deactivated successfully.
Sep 9 00:48:17.835663 systemd[1]: cri-containerd-e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9.scope: Consumed 6.309s CPU time.
Sep 9 00:48:17.850593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9-rootfs.mount: Deactivated successfully.
Sep 9 00:48:17.860059 env[1212]: time="2025-09-09T00:48:17.859988959Z" level=info msg="shim disconnected" id=e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9
Sep 9 00:48:17.860059 env[1212]: time="2025-09-09T00:48:17.860046481Z" level=warning msg="cleaning up after shim disconnected" id=e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9 namespace=k8s.io
Sep 9 00:48:17.860059 env[1212]: time="2025-09-09T00:48:17.860056082Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.866413 env[1212]: time="2025-09-09T00:48:17.866369660Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3127 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.868529 env[1212]: time="2025-09-09T00:48:17.868490693Z" level=info msg="StopContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" returns successfully"
Sep 9 00:48:17.869120 env[1212]: time="2025-09-09T00:48:17.869097554Z" level=info msg="StopPodSandbox for \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\""
Sep 9 00:48:17.869169 env[1212]: time="2025-09-09T00:48:17.869154396Z" level=info msg="Container to stop \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.869204 env[1212]: time="2025-09-09T00:48:17.869170877Z" level=info msg="Container to stop \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.869204 env[1212]: time="2025-09-09T00:48:17.869182157Z" level=info msg="Container to stop \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.869204 env[1212]: time="2025-09-09T00:48:17.869193477Z" level=info msg="Container to stop \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.869296 env[1212]: time="2025-09-09T00:48:17.869203598Z" level=info msg="Container to stop \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 00:48:17.870918 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0-shm.mount: Deactivated successfully.
Sep 9 00:48:17.876361 systemd[1]: cri-containerd-0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0.scope: Deactivated successfully.
Sep 9 00:48:17.893812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0-rootfs.mount: Deactivated successfully.
Sep 9 00:48:17.898752 env[1212]: time="2025-09-09T00:48:17.898705137Z" level=info msg="shim disconnected" id=0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0
Sep 9 00:48:17.898752 env[1212]: time="2025-09-09T00:48:17.898753139Z" level=warning msg="cleaning up after shim disconnected" id=0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0 namespace=k8s.io
Sep 9 00:48:17.898933 env[1212]: time="2025-09-09T00:48:17.898762219Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:17.905645 env[1212]: time="2025-09-09T00:48:17.905593775Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3157 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:17.905934 env[1212]: time="2025-09-09T00:48:17.905908266Z" level=info msg="TearDown network for sandbox \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" successfully"
Sep 9 00:48:17.905968 env[1212]: time="2025-09-09T00:48:17.905932627Z" level=info msg="StopPodSandbox for \"0003290857fa4f97be50a178e1a717078640d239aacf15658306b084d220c7a0\" returns successfully"
Sep 9 00:48:17.918752 kubelet[1414]: I0909 00:48:17.918712 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-net\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918752 kubelet[1414]: I0909 00:48:17.918758 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hubble-tls\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918776 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-xtables-lock\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918802 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-etc-cni-netd\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918820 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4wv2\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-kube-api-access-k4wv2\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918836 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-bpf-maps\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918855 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-clustermesh-secrets\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.918925 kubelet[1414]: I0909 00:48:17.918870 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-kernel\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918886 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-lib-modules\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918901 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cni-path\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918917 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hostproc\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918933 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-cgroup\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918950 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-config-path\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919166 kubelet[1414]: I0909 00:48:17.918966 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-run\") pod \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\" (UID: \"b87f6de4-c2d1-4be0-b5e7-3db8185fe994\") "
Sep 9 00:48:17.919300 kubelet[1414]: I0909 00:48:17.919059 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.919300 kubelet[1414]: I0909 00:48:17.919091 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.919742 kubelet[1414]: I0909 00:48:17.919338 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.919742 kubelet[1414]: I0909 00:48:17.919387 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.919742 kubelet[1414]: I0909 00:48:17.919389 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hostproc" (OuterVolumeSpecName: "hostproc") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 9 00:48:17.919742 kubelet[1414]: I0909 00:48:17.919406 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:17.919742 kubelet[1414]: I0909 00:48:17.919422 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:17.919967 kubelet[1414]: I0909 00:48:17.919442 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cni-path" (OuterVolumeSpecName: "cni-path") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:17.919967 kubelet[1414]: I0909 00:48:17.919457 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:17.919967 kubelet[1414]: I0909 00:48:17.919471 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:17.921713 kubelet[1414]: I0909 00:48:17.921685 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:48:17.922641 kubelet[1414]: I0909 00:48:17.922616 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:17.922720 kubelet[1414]: I0909 00:48:17.922689 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-kube-api-access-k4wv2" (OuterVolumeSpecName: "kube-api-access-k4wv2") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "kube-api-access-k4wv2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:17.922876 kubelet[1414]: I0909 00:48:17.922849 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b87f6de4-c2d1-4be0-b5e7-3db8185fe994" (UID: "b87f6de4-c2d1-4be0-b5e7-3db8185fe994"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:18.019811 kubelet[1414]: I0909 00:48:18.019707 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-xtables-lock\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.019963 kubelet[1414]: I0909 00:48:18.019949 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-etc-cni-netd\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020063 kubelet[1414]: I0909 00:48:18.020048 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4wv2\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-kube-api-access-k4wv2\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020130 kubelet[1414]: I0909 00:48:18.020120 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-bpf-maps\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020204 kubelet[1414]: I0909 00:48:18.020194 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-clustermesh-secrets\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020270 kubelet[1414]: I0909 00:48:18.020260 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-kernel\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020332 kubelet[1414]: I0909 00:48:18.020323 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-lib-modules\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020391 kubelet[1414]: 
I0909 00:48:18.020375 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cni-path\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020452 kubelet[1414]: I0909 00:48:18.020443 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hostproc\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020520 kubelet[1414]: I0909 00:48:18.020510 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-cgroup\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020587 kubelet[1414]: I0909 00:48:18.020577 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-config-path\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020650 kubelet[1414]: I0909 00:48:18.020634 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-cilium-run\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020712 kubelet[1414]: I0909 00:48:18.020702 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-host-proc-sys-net\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.020774 kubelet[1414]: I0909 00:48:18.020758 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b87f6de4-c2d1-4be0-b5e7-3db8185fe994-hubble-tls\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:18.034691 kubelet[1414]: I0909 00:48:18.034668 1414 scope.go:117] "RemoveContainer" 
containerID="e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9" Sep 9 00:48:18.035849 env[1212]: time="2025-09-09T00:48:18.035815015Z" level=info msg="RemoveContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\"" Sep 9 00:48:18.038671 env[1212]: time="2025-09-09T00:48:18.038642830Z" level=info msg="RemoveContainer for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" returns successfully" Sep 9 00:48:18.038868 kubelet[1414]: I0909 00:48:18.038846 1414 scope.go:117] "RemoveContainer" containerID="44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74" Sep 9 00:48:18.040027 systemd[1]: Removed slice kubepods-burstable-podb87f6de4_c2d1_4be0_b5e7_3db8185fe994.slice. Sep 9 00:48:18.040113 systemd[1]: kubepods-burstable-podb87f6de4_c2d1_4be0_b5e7_3db8185fe994.slice: Consumed 6.424s CPU time. Sep 9 00:48:18.041087 env[1212]: time="2025-09-09T00:48:18.041059272Z" level=info msg="RemoveContainer for \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\"" Sep 9 00:48:18.043373 env[1212]: time="2025-09-09T00:48:18.043340308Z" level=info msg="RemoveContainer for \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\" returns successfully" Sep 9 00:48:18.043488 kubelet[1414]: I0909 00:48:18.043465 1414 scope.go:117] "RemoveContainer" containerID="ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865" Sep 9 00:48:18.044572 env[1212]: time="2025-09-09T00:48:18.044545589Z" level=info msg="RemoveContainer for \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\"" Sep 9 00:48:18.046628 env[1212]: time="2025-09-09T00:48:18.046593217Z" level=info msg="RemoveContainer for \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\" returns successfully" Sep 9 00:48:18.046763 kubelet[1414]: I0909 00:48:18.046730 1414 scope.go:117] "RemoveContainer" containerID="8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805" Sep 9 00:48:18.047652 
env[1212]: time="2025-09-09T00:48:18.047631772Z" level=info msg="RemoveContainer for \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\"" Sep 9 00:48:18.052313 env[1212]: time="2025-09-09T00:48:18.052278168Z" level=info msg="RemoveContainer for \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\" returns successfully" Sep 9 00:48:18.052556 kubelet[1414]: I0909 00:48:18.052535 1414 scope.go:117] "RemoveContainer" containerID="89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2" Sep 9 00:48:18.053648 env[1212]: time="2025-09-09T00:48:18.053624173Z" level=info msg="RemoveContainer for \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\"" Sep 9 00:48:18.055701 env[1212]: time="2025-09-09T00:48:18.055668722Z" level=info msg="RemoveContainer for \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\" returns successfully" Sep 9 00:48:18.055847 kubelet[1414]: I0909 00:48:18.055807 1414 scope.go:117] "RemoveContainer" containerID="e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9" Sep 9 00:48:18.056151 env[1212]: time="2025-09-09T00:48:18.056001213Z" level=error msg="ContainerStatus for \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\": not found" Sep 9 00:48:18.056284 kubelet[1414]: E0909 00:48:18.056262 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\": not found" containerID="e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9" Sep 9 00:48:18.056371 kubelet[1414]: I0909 00:48:18.056296 1414 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9"} err="failed to get container status \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"e4438d9c08769ba962ab0ba284cb1fbe54be2e7478087f2c8bf2b0266b197bb9\": not found" Sep 9 00:48:18.056416 kubelet[1414]: I0909 00:48:18.056372 1414 scope.go:117] "RemoveContainer" containerID="44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74" Sep 9 00:48:18.056549 env[1212]: time="2025-09-09T00:48:18.056508310Z" level=error msg="ContainerStatus for \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\": not found" Sep 9 00:48:18.056647 kubelet[1414]: E0909 00:48:18.056629 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\": not found" containerID="44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74" Sep 9 00:48:18.056687 kubelet[1414]: I0909 00:48:18.056655 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74"} err="failed to get container status \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\": rpc error: code = NotFound desc = an error occurred when try to find container \"44224455d3ba6f02f2c3de805869b7f1e6cb3c7a82f691de24b67f29878dbd74\": not found" Sep 9 00:48:18.056687 kubelet[1414]: I0909 00:48:18.056672 1414 scope.go:117] "RemoveContainer" containerID="ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865" Sep 9 00:48:18.056880 env[1212]: 
time="2025-09-09T00:48:18.056835921Z" level=error msg="ContainerStatus for \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\": not found" Sep 9 00:48:18.057013 kubelet[1414]: E0909 00:48:18.056984 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\": not found" containerID="ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865" Sep 9 00:48:18.057051 kubelet[1414]: I0909 00:48:18.057025 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865"} err="failed to get container status \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba2f389bae131126ab7c2b6569f2ad18d9e4dc74e9e790fb6dbb4eb6f3211865\": not found" Sep 9 00:48:18.057051 kubelet[1414]: I0909 00:48:18.057043 1414 scope.go:117] "RemoveContainer" containerID="8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805" Sep 9 00:48:18.057212 env[1212]: time="2025-09-09T00:48:18.057173893Z" level=error msg="ContainerStatus for \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\": not found" Sep 9 00:48:18.057281 kubelet[1414]: E0909 00:48:18.057264 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\": not found" 
containerID="8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805" Sep 9 00:48:18.057315 kubelet[1414]: I0909 00:48:18.057286 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805"} err="failed to get container status \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b92dff4ab58fc6154ddd84769a9b2c787426c747036ee21a00d94080f503805\": not found" Sep 9 00:48:18.057344 kubelet[1414]: I0909 00:48:18.057300 1414 scope.go:117] "RemoveContainer" containerID="89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2" Sep 9 00:48:18.057494 env[1212]: time="2025-09-09T00:48:18.057453062Z" level=error msg="ContainerStatus for \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\": not found" Sep 9 00:48:18.057597 kubelet[1414]: E0909 00:48:18.057580 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\": not found" containerID="89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2" Sep 9 00:48:18.057635 kubelet[1414]: I0909 00:48:18.057602 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2"} err="failed to get container status \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"89a7f7bf372f18ba9b56d03b4c827a2fa0f1c11f92b54af6140eeedca562e6f2\": not found" Sep 9 00:48:18.764408 
systemd[1]: var-lib-kubelet-pods-b87f6de4\x2dc2d1\x2d4be0\x2db5e7\x2d3db8185fe994-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk4wv2.mount: Deactivated successfully. Sep 9 00:48:18.764506 systemd[1]: var-lib-kubelet-pods-b87f6de4\x2dc2d1\x2d4be0\x2db5e7\x2d3db8185fe994-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 00:48:18.764562 systemd[1]: var-lib-kubelet-pods-b87f6de4\x2dc2d1\x2d4be0\x2db5e7\x2d3db8185fe994-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:48:18.784630 kubelet[1414]: E0909 00:48:18.784598 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:18.924193 kubelet[1414]: I0909 00:48:18.924159 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b87f6de4-c2d1-4be0-b5e7-3db8185fe994" path="/var/lib/kubelet/pods/b87f6de4-c2d1-4be0-b5e7-3db8185fe994/volumes" Sep 9 00:48:19.785438 kubelet[1414]: E0909 00:48:19.785389 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:20.785668 kubelet[1414]: E0909 00:48:20.785624 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:20.870840 kubelet[1414]: I0909 00:48:20.870809 1414 memory_manager.go:355] "RemoveStaleState removing state" podUID="b87f6de4-c2d1-4be0-b5e7-3db8185fe994" containerName="cilium-agent" Sep 9 00:48:20.876094 systemd[1]: Created slice kubepods-besteffort-poddb08f6ce_e69f_4209_896e_67c4ebb6ea7f.slice. Sep 9 00:48:20.879538 systemd[1]: Created slice kubepods-burstable-podc76247f4_173f_473c_8cc5_fe72ba56a4c7.slice. 
Sep 9 00:48:20.937158 kubelet[1414]: I0909 00:48:20.937119 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-config-path\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937158 kubelet[1414]: I0909 00:48:20.937160 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hubble-tls\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937330 kubelet[1414]: I0909 00:48:20.937179 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db08f6ce-e69f-4209-896e-67c4ebb6ea7f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g985q\" (UID: \"db08f6ce-e69f-4209-896e-67c4ebb6ea7f\") " pod="kube-system/cilium-operator-6c4d7847fc-g985q" Sep 9 00:48:20.937330 kubelet[1414]: I0909 00:48:20.937198 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7vb4\" (UniqueName: \"kubernetes.io/projected/db08f6ce-e69f-4209-896e-67c4ebb6ea7f-kube-api-access-w7vb4\") pod \"cilium-operator-6c4d7847fc-g985q\" (UID: \"db08f6ce-e69f-4209-896e-67c4ebb6ea7f\") " pod="kube-system/cilium-operator-6c4d7847fc-g985q" Sep 9 00:48:20.937330 kubelet[1414]: I0909 00:48:20.937214 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhv5\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-kube-api-access-mxhv5\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 
00:48:20.937330 kubelet[1414]: I0909 00:48:20.937230 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-bpf-maps\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937330 kubelet[1414]: I0909 00:48:20.937246 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-etc-cni-netd\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937262 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-ipsec-secrets\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937276 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-run\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937290 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-cgroup\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937305 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cni-path\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937319 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-kernel\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937442 kubelet[1414]: I0909 00:48:20.937334 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hostproc\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937566 kubelet[1414]: I0909 00:48:20.937348 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-lib-modules\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937566 kubelet[1414]: I0909 00:48:20.937366 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-clustermesh-secrets\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937566 kubelet[1414]: I0909 00:48:20.937382 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-xtables-lock\") pod \"cilium-hc6nd\" 
(UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:20.937566 kubelet[1414]: I0909 00:48:20.937403 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-net\") pod \"cilium-hc6nd\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " pod="kube-system/cilium-hc6nd" Sep 9 00:48:21.046496 kubelet[1414]: E0909 00:48:21.045236 1414 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[hubble-tls kube-api-access-mxhv5], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-hc6nd" podUID="c76247f4-173f-473c-8cc5-fe72ba56a4c7" Sep 9 00:48:21.178465 kubelet[1414]: E0909 00:48:21.178415 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:21.179443 env[1212]: time="2025-09-09T00:48:21.179071211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g985q,Uid:db08f6ce-e69f-4209-896e-67c4ebb6ea7f,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:21.191080 env[1212]: time="2025-09-09T00:48:21.191026541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:21.191229 env[1212]: time="2025-09-09T00:48:21.191205427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:21.191304 env[1212]: time="2025-09-09T00:48:21.191284829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:21.191583 env[1212]: time="2025-09-09T00:48:21.191523836Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dedc91e65b54b9f3620ed09e0f346831075ff1f810164685e6648d2d610f34e5 pid=3186 runtime=io.containerd.runc.v2 Sep 9 00:48:21.200871 systemd[1]: Started cri-containerd-dedc91e65b54b9f3620ed09e0f346831075ff1f810164685e6648d2d610f34e5.scope. Sep 9 00:48:21.231890 env[1212]: time="2025-09-09T00:48:21.231833965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g985q,Uid:db08f6ce-e69f-4209-896e-67c4ebb6ea7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"dedc91e65b54b9f3620ed09e0f346831075ff1f810164685e6648d2d610f34e5\"" Sep 9 00:48:21.232945 kubelet[1414]: E0909 00:48:21.232493 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:21.233730 env[1212]: time="2025-09-09T00:48:21.233702223Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 00:48:21.786225 kubelet[1414]: E0909 00:48:21.786176 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:21.888212 kubelet[1414]: E0909 00:48:21.888161 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.143905 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: 
"c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.143963 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-xtables-lock\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.144034 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.144051 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-run\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.144081 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hostproc\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144152 kubelet[1414]: I0909 00:48:22.144096 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-lib-modules\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144369 kubelet[1414]: I0909 00:48:22.144116 1414 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-clustermesh-secrets\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144369 kubelet[1414]: I0909 00:48:22.144191 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hostproc" (OuterVolumeSpecName: "hostproc") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144421 kubelet[1414]: I0909 00:48:22.144365 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144445 kubelet[1414]: I0909 00:48:22.144430 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144471 kubelet[1414]: I0909 00:48:22.144460 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-cgroup\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144546 kubelet[1414]: I0909 00:48:22.144503 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-kernel\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144546 kubelet[1414]: I0909 00:48:22.144539 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhv5\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-kube-api-access-mxhv5\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144609 kubelet[1414]: I0909 00:48:22.144576 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144609 kubelet[1414]: I0909 00:48:22.144593 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-bpf-maps\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144657 kubelet[1414]: I0909 00:48:22.144612 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-config-path\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144657 kubelet[1414]: I0909 00:48:22.144645 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.144702 kubelet[1414]: I0909 00:48:22.144662 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-ipsec-secrets\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.144702 kubelet[1414]: I0909 00:48:22.144683 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cni-path\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.144699 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-net\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.144969 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hubble-tls\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.144989 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-etc-cni-netd\") pod \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\" (UID: \"c76247f4-173f-473c-8cc5-fe72ba56a4c7\") " Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.145068 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-run\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.145080 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-xtables-lock\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.145097 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-cgroup\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146721 kubelet[1414]: I0909 00:48:22.145106 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hostproc\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145115 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-lib-modules\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145122 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-bpf-maps\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145131 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-kernel\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145158 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod 
"c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145194 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cni-path" (OuterVolumeSpecName: "cni-path") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.146959 kubelet[1414]: I0909 00:48:22.145224 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 9 00:48:22.147117 kubelet[1414]: I0909 00:48:22.146439 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:48:22.148368 systemd[1]: var-lib-kubelet-pods-c76247f4\x2d173f\x2d473c\x2d8cc5\x2dfe72ba56a4c7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 00:48:22.149241 kubelet[1414]: I0909 00:48:22.149208 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:22.149302 kubelet[1414]: I0909 00:48:22.149291 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:22.149834 kubelet[1414]: I0909 00:48:22.149808 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:48:22.149968 kubelet[1414]: I0909 00:48:22.149941 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-kube-api-access-mxhv5" (OuterVolumeSpecName: "kube-api-access-mxhv5") pod "c76247f4-173f-473c-8cc5-fe72ba56a4c7" (UID: "c76247f4-173f-473c-8cc5-fe72ba56a4c7"). InnerVolumeSpecName "kube-api-access-mxhv5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:48:22.150320 systemd[1]: var-lib-kubelet-pods-c76247f4\x2d173f\x2d473c\x2d8cc5\x2dfe72ba56a4c7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmxhv5.mount: Deactivated successfully. 
Sep 9 00:48:22.150406 systemd[1]: var-lib-kubelet-pods-c76247f4\x2d173f\x2d473c\x2d8cc5\x2dfe72ba56a4c7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 00:48:22.150457 systemd[1]: var-lib-kubelet-pods-c76247f4\x2d173f\x2d473c\x2d8cc5\x2dfe72ba56a4c7-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Sep 9 00:48:22.245578 kubelet[1414]: I0909 00:48:22.245543 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-host-proc-sys-net\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.245756 kubelet[1414]: I0909 00:48:22.245741 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-ipsec-secrets\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.245823 kubelet[1414]: I0909 00:48:22.245811 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cni-path\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.245879 kubelet[1414]: I0909 00:48:22.245868 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-hubble-tls\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.245962 kubelet[1414]: I0909 00:48:22.245951 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c76247f4-173f-473c-8cc5-fe72ba56a4c7-etc-cni-netd\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.246052 kubelet[1414]: I0909 00:48:22.246040 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c76247f4-173f-473c-8cc5-fe72ba56a4c7-clustermesh-secrets\") on node \"10.0.0.139\" DevicePath 
\"\"" Sep 9 00:48:22.246123 kubelet[1414]: I0909 00:48:22.246108 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxhv5\" (UniqueName: \"kubernetes.io/projected/c76247f4-173f-473c-8cc5-fe72ba56a4c7-kube-api-access-mxhv5\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.246178 kubelet[1414]: I0909 00:48:22.246168 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c76247f4-173f-473c-8cc5-fe72ba56a4c7-cilium-config-path\") on node \"10.0.0.139\" DevicePath \"\"" Sep 9 00:48:22.786543 kubelet[1414]: E0909 00:48:22.786507 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:22.928850 systemd[1]: Removed slice kubepods-burstable-podc76247f4_173f_473c_8cc5_fe72ba56a4c7.slice. Sep 9 00:48:23.043216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380101255.mount: Deactivated successfully. Sep 9 00:48:23.080885 systemd[1]: Created slice kubepods-burstable-pode96de0ca_ace1_43af_853e_e344ca2bdd4d.slice. 
Sep 9 00:48:23.151283 kubelet[1414]: I0909 00:48:23.151241 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-bpf-maps\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151476 kubelet[1414]: I0909 00:48:23.151459 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e96de0ca-ace1-43af-853e-e344ca2bdd4d-clustermesh-secrets\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151567 kubelet[1414]: I0909 00:48:23.151554 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e96de0ca-ace1-43af-853e-e344ca2bdd4d-cilium-ipsec-secrets\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151643 kubelet[1414]: I0909 00:48:23.151630 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-host-proc-sys-net\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151714 kubelet[1414]: I0909 00:48:23.151702 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-host-proc-sys-kernel\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151790 kubelet[1414]: I0909 00:48:23.151778 1414 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-lib-modules\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.151855 kubelet[1414]: I0909 00:48:23.151843 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e96de0ca-ace1-43af-853e-e344ca2bdd4d-cilium-config-path\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152000 kubelet[1414]: I0909 00:48:23.151959 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-cilium-cgroup\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152075 kubelet[1414]: I0909 00:48:23.152023 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-etc-cni-netd\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152075 kubelet[1414]: I0909 00:48:23.152055 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-xtables-lock\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152075 kubelet[1414]: I0909 00:48:23.152072 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-hostproc\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152142 kubelet[1414]: I0909 00:48:23.152087 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-cilium-run\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152142 kubelet[1414]: I0909 00:48:23.152102 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e96de0ca-ace1-43af-853e-e344ca2bdd4d-cni-path\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152142 kubelet[1414]: I0909 00:48:23.152119 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e96de0ca-ace1-43af-853e-e344ca2bdd4d-hubble-tls\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.152142 kubelet[1414]: I0909 00:48:23.152135 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlgt5\" (UniqueName: \"kubernetes.io/projected/e96de0ca-ace1-43af-853e-e344ca2bdd4d-kube-api-access-wlgt5\") pod \"cilium-j9l7k\" (UID: \"e96de0ca-ace1-43af-853e-e344ca2bdd4d\") " pod="kube-system/cilium-j9l7k" Sep 9 00:48:23.383470 kubelet[1414]: E0909 00:48:23.383366 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:23.385999 env[1212]: time="2025-09-09T00:48:23.385955224Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-j9l7k,Uid:e96de0ca-ace1-43af-853e-e344ca2bdd4d,Namespace:kube-system,Attempt:0,}" Sep 9 00:48:23.398408 env[1212]: time="2025-09-09T00:48:23.398339669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:48:23.398408 env[1212]: time="2025-09-09T00:48:23.398384630Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:48:23.398408 env[1212]: time="2025-09-09T00:48:23.398403111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:48:23.398592 env[1212]: time="2025-09-09T00:48:23.398565756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269 pid=3235 runtime=io.containerd.runc.v2 Sep 9 00:48:23.409615 systemd[1]: Started cri-containerd-6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269.scope. 
Sep 9 00:48:23.434068 env[1212]: time="2025-09-09T00:48:23.433996761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j9l7k,Uid:e96de0ca-ace1-43af-853e-e344ca2bdd4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\"" Sep 9 00:48:23.434795 kubelet[1414]: E0909 00:48:23.434757 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:23.436971 env[1212]: time="2025-09-09T00:48:23.436899687Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 00:48:23.449636 env[1212]: time="2025-09-09T00:48:23.449557460Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c\"" Sep 9 00:48:23.450288 env[1212]: time="2025-09-09T00:48:23.450230040Z" level=info msg="StartContainer for \"5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c\"" Sep 9 00:48:23.476284 systemd[1]: Started cri-containerd-5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c.scope. Sep 9 00:48:23.523519 env[1212]: time="2025-09-09T00:48:23.523447761Z" level=info msg="StartContainer for \"5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c\" returns successfully" Sep 9 00:48:23.529662 systemd[1]: cri-containerd-5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c.scope: Deactivated successfully. 
Sep 9 00:48:23.586122 env[1212]: time="2025-09-09T00:48:23.586073169Z" level=info msg="shim disconnected" id=5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c Sep 9 00:48:23.586122 env[1212]: time="2025-09-09T00:48:23.586119250Z" level=warning msg="cleaning up after shim disconnected" id=5a3fa37ee47fd6afe9fceb1308ad6297d664fd8e2183d99a7536de769c81831c namespace=k8s.io Sep 9 00:48:23.586122 env[1212]: time="2025-09-09T00:48:23.586128090Z" level=info msg="cleaning up dead shim" Sep 9 00:48:23.595374 env[1212]: time="2025-09-09T00:48:23.595330762Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3318 runtime=io.containerd.runc.v2\n" Sep 9 00:48:23.601990 env[1212]: time="2025-09-09T00:48:23.601949237Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:48:23.603877 env[1212]: time="2025-09-09T00:48:23.603848893Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:48:23.605565 env[1212]: time="2025-09-09T00:48:23.605540183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Sep 9 00:48:23.606038 env[1212]: time="2025-09-09T00:48:23.605989236Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 9 00:48:23.608030 env[1212]: 
time="2025-09-09T00:48:23.607989335Z" level=info msg="CreateContainer within sandbox \"dedc91e65b54b9f3620ed09e0f346831075ff1f810164685e6648d2d610f34e5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 00:48:23.617116 env[1212]: time="2025-09-09T00:48:23.617075963Z" level=info msg="CreateContainer within sandbox \"dedc91e65b54b9f3620ed09e0f346831075ff1f810164685e6648d2d610f34e5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f7d71ec89d5e7c3265ee0362537ef13d2c2e2c5ab9f1ac648bcb85d95263f0e3\"" Sep 9 00:48:23.617593 env[1212]: time="2025-09-09T00:48:23.617524297Z" level=info msg="StartContainer for \"f7d71ec89d5e7c3265ee0362537ef13d2c2e2c5ab9f1ac648bcb85d95263f0e3\"" Sep 9 00:48:23.633507 systemd[1]: Started cri-containerd-f7d71ec89d5e7c3265ee0362537ef13d2c2e2c5ab9f1ac648bcb85d95263f0e3.scope. Sep 9 00:48:23.660441 env[1212]: time="2025-09-09T00:48:23.660398242Z" level=info msg="StartContainer for \"f7d71ec89d5e7c3265ee0362537ef13d2c2e2c5ab9f1ac648bcb85d95263f0e3\" returns successfully" Sep 9 00:48:23.787604 kubelet[1414]: E0909 00:48:23.787548 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:24.050140 kubelet[1414]: E0909 00:48:24.049771 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:24.051875 kubelet[1414]: E0909 00:48:24.051842 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:48:24.054858 env[1212]: time="2025-09-09T00:48:24.054815351Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 00:48:24.070609 
env[1212]: time="2025-09-09T00:48:24.070567245Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa\"" Sep 9 00:48:24.071126 env[1212]: time="2025-09-09T00:48:24.071100901Z" level=info msg="StartContainer for \"f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa\"" Sep 9 00:48:24.077313 kubelet[1414]: I0909 00:48:24.074829 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g985q" podStartSLOduration=1.701602807 podStartE2EDuration="4.074812848s" podCreationTimestamp="2025-09-09 00:48:20 +0000 UTC" firstStartedPulling="2025-09-09 00:48:21.233454215 +0000 UTC m=+65.311570343" lastFinishedPulling="2025-09-09 00:48:23.606664256 +0000 UTC m=+67.684780384" observedRunningTime="2025-09-09 00:48:24.060177106 +0000 UTC m=+68.138293234" watchObservedRunningTime="2025-09-09 00:48:24.074812848 +0000 UTC m=+68.152928976" Sep 9 00:48:24.087875 systemd[1]: Started cri-containerd-f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa.scope. Sep 9 00:48:24.117570 env[1212]: time="2025-09-09T00:48:24.117524359Z" level=info msg="StartContainer for \"f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa\" returns successfully" Sep 9 00:48:24.126446 systemd[1]: cri-containerd-f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa.scope: Deactivated successfully. 
Sep 9 00:48:24.147011 env[1212]: time="2025-09-09T00:48:24.146958448Z" level=info msg="shim disconnected" id=f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa Sep 9 00:48:24.147173 env[1212]: time="2025-09-09T00:48:24.147002490Z" level=warning msg="cleaning up after shim disconnected" id=f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa namespace=k8s.io Sep 9 00:48:24.147173 env[1212]: time="2025-09-09T00:48:24.147030970Z" level=info msg="cleaning up dead shim" Sep 9 00:48:24.153668 env[1212]: time="2025-09-09T00:48:24.153634401Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3419 runtime=io.containerd.runc.v2\n" Sep 9 00:48:24.788419 kubelet[1414]: E0909 00:48:24.788366 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Sep 9 00:48:24.924584 kubelet[1414]: I0909 00:48:24.924552 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c76247f4-173f-473c-8cc5-fe72ba56a4c7" path="/var/lib/kubelet/pods/c76247f4-173f-473c-8cc5-fe72ba56a4c7/volumes" Sep 9 00:48:25.043447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2b2c3ca32573d6feeea96d3e8b017fb8dc976d7a7b0b063c30b3f8f26df9ffa-rootfs.mount: Deactivated successfully. 
Sep 9 00:48:25.056473 kubelet[1414]: E0909 00:48:25.056443 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:25.056678 kubelet[1414]: E0909 00:48:25.056547 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:25.058673 env[1212]: time="2025-09-09T00:48:25.058245774Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 00:48:25.071106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957319216.mount: Deactivated successfully.
Sep 9 00:48:25.077085 env[1212]: time="2025-09-09T00:48:25.077046064Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956\""
Sep 9 00:48:25.077631 env[1212]: time="2025-09-09T00:48:25.077600280Z" level=info msg="StartContainer for \"2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956\""
Sep 9 00:48:25.091403 systemd[1]: Started cri-containerd-2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956.scope.
Sep 9 00:48:25.122675 env[1212]: time="2025-09-09T00:48:25.122625910Z" level=info msg="StartContainer for \"2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956\" returns successfully"
Sep 9 00:48:25.125992 systemd[1]: cri-containerd-2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956.scope: Deactivated successfully.
Sep 9 00:48:25.142727 env[1212]: time="2025-09-09T00:48:25.142684116Z" level=info msg="shim disconnected" id=2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956
Sep 9 00:48:25.142727 env[1212]: time="2025-09-09T00:48:25.142724797Z" level=warning msg="cleaning up after shim disconnected" id=2b9edd9999c64868ada27732f409e11b936fce5cb6b45862df167aab4db98956 namespace=k8s.io
Sep 9 00:48:25.142928 env[1212]: time="2025-09-09T00:48:25.142733197Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:25.148515 env[1212]: time="2025-09-09T00:48:25.148471119Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3475 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:25.788950 kubelet[1414]: E0909 00:48:25.788906 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:26.059881 kubelet[1414]: E0909 00:48:26.059760 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:26.061594 env[1212]: time="2025-09-09T00:48:26.061556886Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 00:48:26.073167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732098250.mount: Deactivated successfully.
Sep 9 00:48:26.074667 env[1212]: time="2025-09-09T00:48:26.074620447Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433\""
Sep 9 00:48:26.075346 env[1212]: time="2025-09-09T00:48:26.075316866Z" level=info msg="StartContainer for \"b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433\""
Sep 9 00:48:26.089986 systemd[1]: Started cri-containerd-b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433.scope.
Sep 9 00:48:26.113406 systemd[1]: cri-containerd-b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433.scope: Deactivated successfully.
Sep 9 00:48:26.114283 env[1212]: time="2025-09-09T00:48:26.114222061Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode96de0ca_ace1_43af_853e_e344ca2bdd4d.slice/cri-containerd-b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433.scope/memory.events\": no such file or directory"
Sep 9 00:48:26.116029 env[1212]: time="2025-09-09T00:48:26.115971629Z" level=info msg="StartContainer for \"b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433\" returns successfully"
Sep 9 00:48:26.133208 env[1212]: time="2025-09-09T00:48:26.133166664Z" level=info msg="shim disconnected" id=b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433
Sep 9 00:48:26.133405 env[1212]: time="2025-09-09T00:48:26.133386190Z" level=warning msg="cleaning up after shim disconnected" id=b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433 namespace=k8s.io
Sep 9 00:48:26.133469 env[1212]: time="2025-09-09T00:48:26.133456272Z" level=info msg="cleaning up dead shim"
Sep 9 00:48:26.139441 env[1212]: time="2025-09-09T00:48:26.139412517Z" level=warning msg="cleanup warnings time=\"2025-09-09T00:48:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3530 runtime=io.containerd.runc.v2\n"
Sep 9 00:48:26.789249 kubelet[1414]: E0909 00:48:26.789204 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:26.889468 kubelet[1414]: E0909 00:48:26.889414 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 00:48:27.043530 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9881cf3a1fe4d142ec683d99a794781542a25ecda1f895048dda757806eb433-rootfs.mount: Deactivated successfully.
Sep 9 00:48:27.063587 kubelet[1414]: E0909 00:48:27.063557 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:27.065639 env[1212]: time="2025-09-09T00:48:27.065600271Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 00:48:27.079636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904222332.mount: Deactivated successfully.
Sep 9 00:48:27.089670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128980122.mount: Deactivated successfully.
Sep 9 00:48:27.092103 env[1212]: time="2025-09-09T00:48:27.092059107Z" level=info msg="CreateContainer within sandbox \"6331de3eb36e158217784b7cc4b5eca150abe806516a902d9ad01b0e4d4a7269\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11\""
Sep 9 00:48:27.092800 env[1212]: time="2025-09-09T00:48:27.092747726Z" level=info msg="StartContainer for \"951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11\""
Sep 9 00:48:27.105379 systemd[1]: Started cri-containerd-951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11.scope.
Sep 9 00:48:27.136521 env[1212]: time="2025-09-09T00:48:27.136463350Z" level=info msg="StartContainer for \"951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11\" returns successfully"
Sep 9 00:48:27.410098 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Sep 9 00:48:27.790219 kubelet[1414]: E0909 00:48:27.790109 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:28.067958 kubelet[1414]: E0909 00:48:28.067860 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:28.083659 kubelet[1414]: I0909 00:48:28.083592 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j9l7k" podStartSLOduration=5.083576755 podStartE2EDuration="5.083576755s" podCreationTimestamp="2025-09-09 00:48:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:48:28.083447871 +0000 UTC m=+72.161563999" watchObservedRunningTime="2025-09-09 00:48:28.083576755 +0000 UTC m=+72.161692843"
Sep 9 00:48:28.725561 kubelet[1414]: I0909 00:48:28.725512 1414 setters.go:602] "Node became not ready" node="10.0.0.139" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T00:48:28Z","lastTransitionTime":"2025-09-09T00:48:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 00:48:28.790637 kubelet[1414]: E0909 00:48:28.790585 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:29.333079 systemd[1]: run-containerd-runc-k8s.io-951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11-runc.zkn6w9.mount: Deactivated successfully.
Sep 9 00:48:29.384562 kubelet[1414]: E0909 00:48:29.384526 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:29.791021 kubelet[1414]: E0909 00:48:29.790962 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:30.178215 systemd-networkd[1041]: lxc_health: Link UP
Sep 9 00:48:30.188039 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Sep 9 00:48:30.188509 systemd-networkd[1041]: lxc_health: Gained carrier
Sep 9 00:48:30.791833 kubelet[1414]: E0909 00:48:30.791783 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:31.384878 kubelet[1414]: E0909 00:48:31.384794 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:31.491942 systemd[1]: run-containerd-runc-k8s.io-951884e135d7309dfa97a1e7b61ab45ec19b40ef6f53fd93ff7ed8b07a519e11-runc.y8xMtV.mount: Deactivated successfully.
Sep 9 00:48:31.552167 systemd-networkd[1041]: lxc_health: Gained IPv6LL
Sep 9 00:48:31.792359 kubelet[1414]: E0909 00:48:31.792309 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:32.074544 kubelet[1414]: E0909 00:48:32.074437 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:32.793445 kubelet[1414]: E0909 00:48:32.793385 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:33.075736 kubelet[1414]: E0909 00:48:33.075635 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:48:33.794261 kubelet[1414]: E0909 00:48:33.794208 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:34.795029 kubelet[1414]: E0909 00:48:34.794982 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:35.796154 kubelet[1414]: E0909 00:48:35.796111 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:36.740275 kubelet[1414]: E0909 00:48:36.740224 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Sep 9 00:48:36.796687 kubelet[1414]: E0909 00:48:36.796645 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"