Jul 14 22:03:18.709556 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 14 22:03:18.709575 kernel: Linux version 5.15.187-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 14 20:49:56 -00 2025 Jul 14 22:03:18.709583 kernel: efi: EFI v2.70 by EDK II Jul 14 22:03:18.709588 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 14 22:03:18.709593 kernel: random: crng init done Jul 14 22:03:18.709599 kernel: ACPI: Early table checksum verification disabled Jul 14 22:03:18.709605 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 14 22:03:18.709611 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 14 22:03:18.709617 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709622 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709628 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709633 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709638 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709643 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709651 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709657 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709663 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 14 22:03:18.709669 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 14 22:03:18.709674 kernel: NUMA: Failed to initialise from firmware Jul 14 22:03:18.709680 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 22:03:18.709686 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Jul 14 22:03:18.709691 kernel: Zone ranges: Jul 14 22:03:18.709697 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 22:03:18.709703 kernel: DMA32 empty Jul 14 22:03:18.709709 kernel: Normal empty Jul 14 22:03:18.709722 kernel: Movable zone start for each node Jul 14 22:03:18.709727 kernel: Early memory node ranges Jul 14 22:03:18.709733 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 14 22:03:18.709739 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 14 22:03:18.709744 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 14 22:03:18.709750 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 14 22:03:18.709755 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 14 22:03:18.709761 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 14 22:03:18.709767 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 14 22:03:18.709772 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 14 22:03:18.709779 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 14 22:03:18.709785 kernel: psci: probing for conduit method from ACPI. Jul 14 22:03:18.709791 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 14 22:03:18.709796 kernel: psci: Using standard PSCI v0.2 function IDs Jul 14 22:03:18.709802 kernel: psci: Trusted OS migration not required Jul 14 22:03:18.709810 kernel: psci: SMC Calling Convention v1.1 Jul 14 22:03:18.709816 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 14 22:03:18.709823 kernel: ACPI: SRAT not present Jul 14 22:03:18.709830 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 14 22:03:18.709835 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 14 22:03:18.709842 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 14 22:03:18.709848 kernel: Detected PIPT I-cache on CPU0 Jul 14 22:03:18.709854 kernel: CPU features: detected: GIC system register CPU interface Jul 14 22:03:18.709860 kernel: CPU features: detected: Hardware dirty bit management Jul 14 22:03:18.709865 kernel: CPU features: detected: Spectre-v4 Jul 14 22:03:18.709872 kernel: CPU features: detected: Spectre-BHB Jul 14 22:03:18.709879 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 14 22:03:18.709885 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 14 22:03:18.709891 kernel: CPU features: detected: ARM erratum 1418040 Jul 14 22:03:18.709897 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 14 22:03:18.709903 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 14 22:03:18.709909 kernel: Policy zone: DMA Jul 14 22:03:18.709924 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236 Jul 14 22:03:18.709942 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 14 22:03:18.709948 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 14 22:03:18.709955 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 14 22:03:18.709961 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 14 22:03:18.709969 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Jul 14 22:03:18.709975 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 14 22:03:18.709981 kernel: trace event string verifier disabled Jul 14 22:03:18.709987 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 14 22:03:18.709993 kernel: rcu: RCU event tracing is enabled. Jul 14 22:03:18.709999 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 14 22:03:18.710006 kernel: Trampoline variant of Tasks RCU enabled. Jul 14 22:03:18.710012 kernel: Tracing variant of Tasks RCU enabled. Jul 14 22:03:18.710018 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 14 22:03:18.710024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 14 22:03:18.710030 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 14 22:03:18.710037 kernel: GICv3: 256 SPIs implemented Jul 14 22:03:18.710043 kernel: GICv3: 0 Extended SPIs implemented Jul 14 22:03:18.710049 kernel: GICv3: Distributor has no Range Selector support Jul 14 22:03:18.710055 kernel: Root IRQ handler: gic_handle_irq Jul 14 22:03:18.710061 kernel: GICv3: 16 PPIs implemented Jul 14 22:03:18.710067 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 14 22:03:18.710072 kernel: ACPI: SRAT not present Jul 14 22:03:18.710078 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 14 22:03:18.710084 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 14 22:03:18.710098 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 14 22:03:18.710104 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 14 22:03:18.710110 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 14 22:03:18.710118 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 22:03:18.710124 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 14 22:03:18.710130 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 14 22:03:18.710136 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 14 22:03:18.710142 kernel: arm-pv: using stolen time PV Jul 14 22:03:18.710148 kernel: Console: colour dummy device 80x25 Jul 14 22:03:18.710154 kernel: ACPI: Core revision 20210730 Jul 14 22:03:18.710161 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 14 22:03:18.710167 kernel: pid_max: default: 32768 minimum: 301 Jul 14 22:03:18.710173 kernel: LSM: Security Framework initializing Jul 14 22:03:18.710181 kernel: SELinux: Initializing. Jul 14 22:03:18.710187 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:03:18.710193 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 14 22:03:18.710199 kernel: rcu: Hierarchical SRCU implementation. Jul 14 22:03:18.710205 kernel: Platform MSI: ITS@0x8080000 domain created Jul 14 22:03:18.710212 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 14 22:03:18.710218 kernel: Remapping and enabling EFI services. Jul 14 22:03:18.710224 kernel: smp: Bringing up secondary CPUs ... 
Jul 14 22:03:18.710230 kernel: Detected PIPT I-cache on CPU1 Jul 14 22:03:18.710237 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 14 22:03:18.710244 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 14 22:03:18.710250 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 22:03:18.710256 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 14 22:03:18.710262 kernel: Detected PIPT I-cache on CPU2 Jul 14 22:03:18.710268 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 14 22:03:18.710275 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 14 22:03:18.710281 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 22:03:18.710287 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 14 22:03:18.710293 kernel: Detected PIPT I-cache on CPU3 Jul 14 22:03:18.710301 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 14 22:03:18.710307 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 14 22:03:18.710313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 14 22:03:18.710320 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 14 22:03:18.710330 kernel: smp: Brought up 1 node, 4 CPUs Jul 14 22:03:18.710338 kernel: SMP: Total of 4 processors activated. Jul 14 22:03:18.710344 kernel: CPU features: detected: 32-bit EL0 Support Jul 14 22:03:18.710351 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 14 22:03:18.710357 kernel: CPU features: detected: Common not Private translations Jul 14 22:03:18.710364 kernel: CPU features: detected: CRC32 instructions Jul 14 22:03:18.710370 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 14 22:03:18.710376 kernel: CPU features: detected: LSE atomic instructions Jul 14 22:03:18.710384 kernel: CPU features: detected: Privileged Access Never Jul 14 22:03:18.710391 kernel: CPU features: detected: RAS Extension Support Jul 14 22:03:18.710397 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 14 22:03:18.710404 kernel: CPU: All CPU(s) started at EL1 Jul 14 22:03:18.710410 kernel: alternatives: patching kernel code Jul 14 22:03:18.710417 kernel: devtmpfs: initialized Jul 14 22:03:18.710424 kernel: KASLR enabled Jul 14 22:03:18.710431 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 14 22:03:18.710437 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 14 22:03:18.710444 kernel: pinctrl core: initialized pinctrl subsystem Jul 14 22:03:18.710450 kernel: SMBIOS 3.0.0 present. 
Jul 14 22:03:18.710457 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 14 22:03:18.710463 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 14 22:03:18.710470 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 14 22:03:18.710478 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 14 22:03:18.710484 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 14 22:03:18.710491 kernel: audit: initializing netlink subsys (disabled) Jul 14 22:03:18.710498 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1 Jul 14 22:03:18.710504 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 14 22:03:18.710511 kernel: cpuidle: using governor menu Jul 14 22:03:18.710517 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 14 22:03:18.710524 kernel: ASID allocator initialised with 32768 entries Jul 14 22:03:18.710530 kernel: ACPI: bus type PCI registered Jul 14 22:03:18.710538 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 14 22:03:18.710545 kernel: Serial: AMBA PL011 UART driver Jul 14 22:03:18.710551 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 14 22:03:18.710558 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 14 22:03:18.710565 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 14 22:03:18.710572 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 14 22:03:18.710578 kernel: cryptd: max_cpu_qlen set to 1000 Jul 14 22:03:18.710585 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 14 22:03:18.710592 kernel: ACPI: Added _OSI(Module Device) Jul 14 22:03:18.710599 kernel: ACPI: Added _OSI(Processor Device) Jul 14 22:03:18.710606 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 14 22:03:18.710613 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 14 22:03:18.710619 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 14 22:03:18.710625 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 14 22:03:18.710632 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 14 22:03:18.710638 kernel: ACPI: Interpreter enabled Jul 14 22:03:18.710645 kernel: ACPI: Using GIC for interrupt routing Jul 14 22:03:18.710651 kernel: ACPI: MCFG table detected, 1 entries Jul 14 22:03:18.710659 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 14 22:03:18.710666 kernel: printk: console [ttyAMA0] enabled Jul 14 22:03:18.710672 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 14 22:03:18.710806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 14 22:03:18.710868 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 14 22:03:18.710950 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 14 22:03:18.711010 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 14 22:03:18.711070 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 14 22:03:18.711079 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 14 22:03:18.711086 kernel: PCI host bridge to bus 0000:00 Jul 14 22:03:18.711149 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 14 22:03:18.711202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 14 
22:03:18.711263 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 14 22:03:18.711328 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 14 22:03:18.711406 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 14 22:03:18.711475 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 14 22:03:18.711534 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 14 22:03:18.711592 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 14 22:03:18.711649 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 22:03:18.711707 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 14 22:03:18.711850 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 14 22:03:18.711928 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 14 22:03:18.711988 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 14 22:03:18.712046 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 14 22:03:18.712099 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 14 22:03:18.712108 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 14 22:03:18.712115 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 14 22:03:18.712122 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 14 22:03:18.712131 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 14 22:03:18.712138 kernel: iommu: Default domain type: Translated Jul 14 22:03:18.712144 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 14 22:03:18.712151 kernel: vgaarb: loaded Jul 14 22:03:18.712157 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 14 22:03:18.712164 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 14 22:03:18.712171 kernel: PTP clock support registered Jul 14 22:03:18.712177 kernel: Registered efivars operations Jul 14 22:03:18.712184 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 14 22:03:18.712190 kernel: VFS: Disk quotas dquot_6.6.0 Jul 14 22:03:18.712199 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 14 22:03:18.712205 kernel: pnp: PnP ACPI init Jul 14 22:03:18.712272 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 14 22:03:18.712281 kernel: pnp: PnP ACPI: found 1 devices Jul 14 22:03:18.712292 kernel: NET: Registered PF_INET protocol family Jul 14 22:03:18.712301 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 14 22:03:18.712307 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 14 22:03:18.712322 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 14 22:03:18.712332 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 14 22:03:18.712347 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 14 22:03:18.712363 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 14 22:03:18.712369 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:03:18.712376 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 14 22:03:18.712382 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 14 22:03:18.712389 kernel: PCI: CLS 0 bytes, default 64 Jul 14 22:03:18.712396 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 14 22:03:18.712402 kernel: kvm [1]: HYP mode not available Jul 14 22:03:18.712410 kernel: Initialise system trusted keyrings Jul 14 22:03:18.712417 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 14 22:03:18.712423 kernel: Key type asymmetric registered Jul 14 22:03:18.712430 kernel: Asymmetric key parser 'x509' registered Jul 14 22:03:18.712436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 14 22:03:18.712443 kernel: io scheduler mq-deadline registered Jul 14 22:03:18.712449 kernel: io scheduler kyber registered Jul 14 22:03:18.712456 kernel: io scheduler bfq registered Jul 14 22:03:18.712463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 14 22:03:18.712471 kernel: ACPI: button: Power Button [PWRB] Jul 14 22:03:18.712478 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 14 22:03:18.712536 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 14 22:03:18.712545 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 14 22:03:18.712552 kernel: thunder_xcv, ver 1.0 Jul 14 22:03:18.712558 kernel: thunder_bgx, ver 1.0 Jul 14 22:03:18.712565 kernel: nicpf, ver 1.0 Jul 14 22:03:18.712571 kernel: nicvf, ver 1.0 Jul 14 22:03:18.712641 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 14 22:03:18.712698 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T22:03:18 UTC (1752530598) Jul 14 22:03:18.712707 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 14 22:03:18.712721 kernel: NET: Registered PF_INET6 protocol family Jul 14 22:03:18.712728 kernel: Segment Routing with IPv6 Jul 14 22:03:18.712734 kernel: In-situ OAM (IOAM) with IPv6 Jul 14 22:03:18.712741 kernel: NET: Registered PF_PACKET protocol family Jul 14 22:03:18.712748 kernel: Key type 
dns_resolver registered Jul 14 22:03:18.712754 kernel: registered taskstats version 1 Jul 14 22:03:18.712763 kernel: Loading compiled-in X.509 certificates Jul 14 22:03:18.712770 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.187-flatcar: 118351bb2b1409a8fe1c98db16ecff1bb5342a27' Jul 14 22:03:18.712776 kernel: Key type .fscrypt registered Jul 14 22:03:18.712782 kernel: Key type fscrypt-provisioning registered Jul 14 22:03:18.712789 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 14 22:03:18.712796 kernel: ima: Allocated hash algorithm: sha1 Jul 14 22:03:18.712802 kernel: ima: No architecture policies found Jul 14 22:03:18.712809 kernel: clk: Disabling unused clocks Jul 14 22:03:18.712816 kernel: Freeing unused kernel memory: 36416K Jul 14 22:03:18.712823 kernel: Run /init as init process Jul 14 22:03:18.712830 kernel: with arguments: Jul 14 22:03:18.712836 kernel: /init Jul 14 22:03:18.712843 kernel: with environment: Jul 14 22:03:18.712849 kernel: HOME=/ Jul 14 22:03:18.712856 kernel: TERM=linux Jul 14 22:03:18.712863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 14 22:03:18.712871 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:03:18.712881 systemd[1]: Detected virtualization kvm. Jul 14 22:03:18.712889 systemd[1]: Detected architecture arm64. Jul 14 22:03:18.712895 systemd[1]: Running in initrd. Jul 14 22:03:18.712902 systemd[1]: No hostname configured, using default hostname. Jul 14 22:03:18.712909 systemd[1]: Hostname set to . Jul 14 22:03:18.712923 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:03:18.712954 systemd[1]: Queued start job for default target initrd.target. Jul 14 22:03:18.712961 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:03:18.712970 systemd[1]: Reached target cryptsetup.target. Jul 14 22:03:18.712977 systemd[1]: Reached target paths.target. Jul 14 22:03:18.712984 systemd[1]: Reached target slices.target. Jul 14 22:03:18.712991 systemd[1]: Reached target swap.target. Jul 14 22:03:18.712998 systemd[1]: Reached target timers.target. Jul 14 22:03:18.713005 systemd[1]: Listening on iscsid.socket. Jul 14 22:03:18.713013 systemd[1]: Listening on iscsiuio.socket. Jul 14 22:03:18.713021 systemd[1]: Listening on systemd-journald-audit.socket. Jul 14 22:03:18.713028 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 14 22:03:18.713035 systemd[1]: Listening on systemd-journald.socket. Jul 14 22:03:18.713042 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:03:18.713049 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:03:18.713056 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:03:18.713063 systemd[1]: Reached target sockets.target. Jul 14 22:03:18.713070 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:03:18.713077 systemd[1]: Finished network-cleanup.service. Jul 14 22:03:18.713085 systemd[1]: Starting systemd-fsck-usr.service... Jul 14 22:03:18.713092 systemd[1]: Starting systemd-journald.service... Jul 14 22:03:18.713100 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:03:18.713107 systemd[1]: Starting systemd-resolved.service... Jul 14 22:03:18.713114 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 14 22:03:18.713121 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:03:18.713128 systemd[1]: Finished systemd-fsck-usr.service. Jul 14 22:03:18.713134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 14 22:03:18.713141 systemd[1]: Finished systemd-vconsole-setup.service. Jul 14 22:03:18.713151 kernel: audit: type=1130 audit(1752530598.709:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.713158 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 14 22:03:18.713169 systemd-journald[290]: Journal started Jul 14 22:03:18.713213 systemd-journald[290]: Runtime Journal (/run/log/journal/acb64acf5c1d42318f2214b7c0b9d802) is 6.0M, max 48.7M, 42.6M free. Jul 14 22:03:18.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.704481 systemd-modules-load[291]: Inserted module 'overlay' Jul 14 22:03:18.716279 kernel: audit: type=1130 audit(1752530598.713:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.716297 systemd[1]: Started systemd-journald.service. Jul 14 22:03:18.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.717844 systemd[1]: Starting dracut-cmdline-ask.service... Jul 14 22:03:18.724504 kernel: audit: type=1130 audit(1752530598.716:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.730933 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 14 22:03:18.732565 systemd-resolved[292]: Positive Trust Anchors: Jul 14 22:03:18.732579 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:03:18.732607 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:03:18.736759 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 14 22:03:18.739106 kernel: Bridge firewalling registered Jul 14 22:03:18.738595 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 14 22:03:18.739836 systemd[1]: Started systemd-resolved.service. 
Jul 14 22:03:18.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.740516 systemd[1]: Reached target nss-lookup.target. Jul 14 22:03:18.743774 kernel: audit: type=1130 audit(1752530598.740:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.743495 systemd[1]: Finished dracut-cmdline-ask.service. Jul 14 22:03:18.747984 kernel: audit: type=1130 audit(1752530598.744:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.745203 systemd[1]: Starting dracut-cmdline.service... Jul 14 22:03:18.753002 kernel: SCSI subsystem initialized Jul 14 22:03:18.754951 dracut-cmdline[309]: dracut-dracut-053 Jul 14 22:03:18.757036 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=0fbac260ee8dcd4db6590eed44229ca41387b27ea0fa758fd2be410620d68236 Jul 14 22:03:18.761935 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 14 22:03:18.761973 kernel: device-mapper: uevent: version 1.0.3 Jul 14 22:03:18.763126 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 14 22:03:18.765415 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 14 22:03:18.766308 systemd[1]: Finished systemd-modules-load.service. Jul 14 22:03:18.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.767696 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:03:18.770341 kernel: audit: type=1130 audit(1752530598.766:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.775825 systemd[1]: Finished systemd-sysctl.service. Jul 14 22:03:18.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.778937 kernel: audit: type=1130 audit(1752530598.776:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.819946 kernel: Loading iSCSI transport class v2.0-870. Jul 14 22:03:18.831940 kernel: iscsi: registered transport (tcp) Jul 14 22:03:18.847098 kernel: iscsi: registered transport (qla4xxx) Jul 14 22:03:18.847144 kernel: QLogic iSCSI HBA Driver Jul 14 22:03:18.880889 systemd[1]: Finished dracut-cmdline.service. 
Jul 14 22:03:18.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.882302 systemd[1]: Starting dracut-pre-udev.service... Jul 14 22:03:18.884524 kernel: audit: type=1130 audit(1752530598.880:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:18.925974 kernel: raid6: neonx8 gen() 13727 MB/s Jul 14 22:03:18.942935 kernel: raid6: neonx8 xor() 10729 MB/s Jul 14 22:03:18.959933 kernel: raid6: neonx4 gen() 13555 MB/s Jul 14 22:03:18.976933 kernel: raid6: neonx4 xor() 11200 MB/s Jul 14 22:03:18.993931 kernel: raid6: neonx2 gen() 13054 MB/s Jul 14 22:03:19.010934 kernel: raid6: neonx2 xor() 10389 MB/s Jul 14 22:03:19.027933 kernel: raid6: neonx1 gen() 10558 MB/s Jul 14 22:03:19.044932 kernel: raid6: neonx1 xor() 8778 MB/s Jul 14 22:03:19.061940 kernel: raid6: int64x8 gen() 6276 MB/s Jul 14 22:03:19.078931 kernel: raid6: int64x8 xor() 3542 MB/s Jul 14 22:03:19.095933 kernel: raid6: int64x4 gen() 7201 MB/s Jul 14 22:03:19.112932 kernel: raid6: int64x4 xor() 3852 MB/s Jul 14 22:03:19.129932 kernel: raid6: int64x2 gen() 6150 MB/s Jul 14 22:03:19.146936 kernel: raid6: int64x2 xor() 3317 MB/s Jul 14 22:03:19.163940 kernel: raid6: int64x1 gen() 5047 MB/s Jul 14 22:03:19.181108 kernel: raid6: int64x1 xor() 2647 MB/s Jul 14 22:03:19.181130 kernel: raid6: using algorithm neonx8 gen() 13727 MB/s Jul 14 22:03:19.181148 kernel: raid6: .... xor() 10729 MB/s, rmw enabled Jul 14 22:03:19.181164 kernel: raid6: using neon recovery algorithm Jul 14 22:03:19.192261 kernel: xor: measuring software checksum speed Jul 14 22:03:19.192288 kernel: 8regs : 17188 MB/sec Jul 14 22:03:19.192305 kernel: 32regs : 20691 MB/sec Jul 14 22:03:19.193161 kernel: arm64_neon : 27635 MB/sec Jul 14 22:03:19.193174 kernel: xor: using function: arm64_neon (27635 MB/sec) Jul 14 22:03:19.246941 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 14 22:03:19.256298 systemd[1]: Finished dracut-pre-udev.service. Jul 14 22:03:19.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:19.258000 audit: BPF prog-id=7 op=LOAD Jul 14 22:03:19.258000 audit: BPF prog-id=8 op=LOAD Jul 14 22:03:19.259430 systemd[1]: Starting systemd-udevd.service... Jul 14 22:03:19.260431 kernel: audit: type=1130 audit(1752530599.255:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:19.274951 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 14 22:03:19.278245 systemd[1]: Started systemd-udevd.service. Jul 14 22:03:19.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:19.280568 systemd[1]: Starting dracut-pre-trigger.service... Jul 14 22:03:19.292141 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Jul 14 22:03:19.320254 systemd[1]: Finished dracut-pre-trigger.service. 
Jul 14 22:03:19.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:19.321945 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 22:03:19.355866 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:03:19.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:19.392940 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 14 22:03:19.396229 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 14 22:03:19.396243 kernel: GPT:9289727 != 19775487 Jul 14 22:03:19.396251 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 14 22:03:19.396260 kernel: GPT:9289727 != 19775487 Jul 14 22:03:19.396268 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 14 22:03:19.396276 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:03:19.405940 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (554) Jul 14 22:03:19.409176 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 14 22:03:19.415500 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 14 22:03:19.416293 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 14 22:03:19.420182 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 14 22:03:19.423365 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 22:03:19.424854 systemd[1]: Starting disk-uuid.service... Jul 14 22:03:19.430508 disk-uuid[561]: Primary Header is updated. Jul 14 22:03:19.430508 disk-uuid[561]: Secondary Entries is updated. Jul 14 22:03:19.430508 disk-uuid[561]: Secondary Header is updated. Jul 14 22:03:19.433937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:03:20.445395 disk-uuid[562]: The operation has completed successfully. Jul 14 22:03:20.446460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 14 22:03:20.469822 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 14 22:03:20.470745 systemd[1]: Finished disk-uuid.service. Jul 14 22:03:20.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.475159 systemd[1]: Starting verity-setup.service... Jul 14 22:03:20.491951 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 14 22:03:20.518860 systemd[1]: Found device dev-mapper-usr.device. Jul 14 22:03:20.520849 systemd[1]: Mounting sysusr-usr.mount... Jul 14 22:03:20.522573 systemd[1]: Finished verity-setup.service. Jul 14 22:03:20.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.571940 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 14 22:03:20.572049 systemd[1]: Mounted sysusr-usr.mount. 
Jul 14 22:03:20.572692 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 14 22:03:20.573433 systemd[1]: Starting ignition-setup.service... Jul 14 22:03:20.575049 systemd[1]: Starting parse-ip-for-networkd.service... Jul 14 22:03:20.581936 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 22:03:20.581974 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:03:20.581983 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:03:20.591184 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 14 22:03:20.597172 systemd[1]: Finished ignition-setup.service. Jul 14 22:03:20.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.598550 systemd[1]: Starting ignition-fetch-offline.service... Jul 14 22:03:20.656697 systemd[1]: Finished parse-ip-for-networkd.service. Jul 14 22:03:20.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.657000 audit: BPF prog-id=9 op=LOAD Jul 14 22:03:20.658797 systemd[1]: Starting systemd-networkd.service... Jul 14 22:03:20.670900 ignition[652]: Ignition 2.14.0 Jul 14 22:03:20.670910 ignition[652]: Stage: fetch-offline Jul 14 22:03:20.670961 ignition[652]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:20.670970 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:20.671097 ignition[652]: parsed url from cmdline: "" Jul 14 22:03:20.671100 ignition[652]: no config URL provided Jul 14 22:03:20.671105 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Jul 14 22:03:20.671111 ignition[652]: no config at "/usr/lib/ignition/user.ign" Jul 14 22:03:20.671129 ignition[652]: op(1): [started] loading QEMU firmware config module Jul 14 22:03:20.671134 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 14 22:03:20.674973 ignition[652]: op(1): [finished] loading QEMU firmware config module Jul 14 22:03:20.682109 ignition[652]: parsing config with SHA512: 75353b2191f0a51216a8c18af8e5fc8a3a400a0216e8a38cdd71878df73c58afc0e2cdcd0b193e497aa711d8d1ac9fcb1ee22770ac0ab52ec8ec8dcc26ee58d1 Jul 14 22:03:20.686773 systemd-networkd[739]: lo: Link UP Jul 14 22:03:20.686784 systemd-networkd[739]: lo: Gained carrier Jul 14 22:03:20.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.687766 ignition[652]: fetch-offline: fetch-offline passed Jul 14 22:03:20.687182 systemd-networkd[739]: Enumeration completed Jul 14 22:03:20.687819 ignition[652]: Ignition finished successfully Jul 14 22:03:20.687286 systemd[1]: Started systemd-networkd.service. Jul 14 22:03:20.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.687355 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 14 22:03:20.687463 unknown[652]: fetched base config from "system" Jul 14 22:03:20.687470 unknown[652]: fetched user config from "qemu" Jul 14 22:03:20.688430 systemd-networkd[739]: eth0: Link UP Jul 14 22:03:20.688433 systemd-networkd[739]: eth0: Gained carrier Jul 14 22:03:20.689469 systemd[1]: Reached target network.target. Jul 14 22:03:20.691193 systemd[1]: Starting iscsiuio.service... Jul 14 22:03:20.692109 systemd[1]: Finished ignition-fetch-offline.service. Jul 14 22:03:20.693126 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 14 22:03:20.693847 systemd[1]: Starting ignition-kargs.service... Jul 14 22:03:20.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.699850 systemd[1]: Started iscsiuio.service. Jul 14 22:03:20.701853 systemd[1]: Starting iscsid.service... Jul 14 22:03:20.702999 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:03:20.703081 ignition[744]: Ignition 2.14.0 Jul 14 22:03:20.703087 ignition[744]: Stage: kargs Jul 14 22:03:20.703179 ignition[744]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:20.703187 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:20.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.707988 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:03:20.707988 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 14 22:03:20.707988 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 14 22:03:20.707988 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 14 22:03:20.707988 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 14 22:03:20.707988 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 14 22:03:20.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.705880 systemd[1]: Finished ignition-kargs.service. Jul 14 22:03:20.703877 ignition[744]: kargs: kargs passed Jul 14 22:03:20.708226 systemd[1]: Starting ignition-disks.service... Jul 14 22:03:20.703932 ignition[744]: Ignition finished successfully Jul 14 22:03:20.709543 systemd[1]: Started iscsid.service. Jul 14 22:03:20.719612 ignition[753]: Ignition 2.14.0 Jul 14 22:03:20.710885 systemd[1]: Starting dracut-initqueue.service... 
Jul 14 22:03:20.719619 ignition[753]: Stage: disks Jul 14 22:03:20.719729 ignition[753]: no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:20.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.721581 systemd[1]: Finished ignition-disks.service. Jul 14 22:03:20.719740 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:20.722584 systemd[1]: Reached target initrd-root-device.target. Jul 14 22:03:20.720466 ignition[753]: disks: disks passed Jul 14 22:03:20.723509 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:03:20.720530 ignition[753]: Ignition finished successfully Jul 14 22:03:20.724462 systemd[1]: Reached target local-fs.target. Jul 14 22:03:20.725576 systemd[1]: Reached target sysinit.target. Jul 14 22:03:20.726546 systemd[1]: Reached target basic.target. Jul 14 22:03:20.729776 systemd[1]: Finished dracut-initqueue.service. Jul 14 22:03:20.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.730552 systemd[1]: Reached target remote-fs-pre.target. Jul 14 22:03:20.731547 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:03:20.732631 systemd[1]: Reached target remote-fs.target. Jul 14 22:03:20.734389 systemd[1]: Starting dracut-pre-mount.service... Jul 14 22:03:20.741893 systemd[1]: Finished dracut-pre-mount.service. Jul 14 22:03:20.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.743231 systemd[1]: Starting systemd-fsck-root.service... Jul 14 22:03:20.752339 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.99 Jul 14 22:03:20.752354 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Jul 14 22:03:20.755279 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 14 22:03:20.759773 systemd[1]: Finished systemd-fsck-root.service. Jul 14 22:03:20.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.761757 systemd[1]: Mounting sysroot.mount... Jul 14 22:03:20.767934 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 14 22:03:20.768288 systemd[1]: Mounted sysroot.mount. Jul 14 22:03:20.768867 systemd[1]: Reached target initrd-root-fs.target. Jul 14 22:03:20.771837 systemd[1]: Mounting sysroot-usr.mount... Jul 14 22:03:20.772568 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 14 22:03:20.772608 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 14 22:03:20.772629 systemd[1]: Reached target ignition-diskful.target. Jul 14 22:03:20.774578 systemd[1]: Mounted sysroot-usr.mount. Jul 14 22:03:20.775954 systemd[1]: Starting initrd-setup-root.service... 
Jul 14 22:03:20.780066 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:03:20.784386 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:03:20.787601 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:03:20.791448 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:03:20.817594 systemd[1]: Finished initrd-setup-root.service. Jul 14 22:03:20.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.819051 systemd[1]: Starting ignition-mount.service... Jul 14 22:03:20.820214 systemd[1]: Starting sysroot-boot.service... Jul 14 22:03:20.824676 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Jul 14 22:03:20.833886 ignition[827]: INFO : Ignition 2.14.0 Jul 14 22:03:20.833886 ignition[827]: INFO : Stage: mount Jul 14 22:03:20.835211 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:20.835211 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:20.835211 ignition[827]: INFO : mount: mount passed Jul 14 22:03:20.835211 ignition[827]: INFO : Ignition finished successfully Jul 14 22:03:20.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:20.835989 systemd[1]: Finished ignition-mount.service. Jul 14 22:03:20.837559 systemd[1]: Finished sysroot-boot.service. Jul 14 22:03:21.528766 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 14 22:03:21.533940 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (836) Jul 14 22:03:21.535316 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 14 22:03:21.535338 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:03:21.535348 kernel: BTRFS info (device vda6): has skinny extents Jul 14 22:03:21.538488 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 14 22:03:21.539816 systemd[1]: Starting ignition-files.service... 
Jul 14 22:03:21.553531 ignition[856]: INFO : Ignition 2.14.0 Jul 14 22:03:21.553531 ignition[856]: INFO : Stage: files Jul 14 22:03:21.554693 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:21.554693 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:21.554693 ignition[856]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:03:21.563608 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:03:21.563608 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:03:21.566195 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:03:21.567216 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:03:21.567216 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:03:21.566995 unknown[856]: wrote ssh authorized keys file for user: core Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 14 22:03:21.570220 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 14 22:03:21.920162 systemd-networkd[739]: eth0: Gained IPv6LL Jul 14 22:03:52.051907 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 14 22:03:52.380295 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 14 22:03:52.380295 ignition[856]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 14 22:03:52.383088 ignition[856]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:03:52.383088 ignition[856]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:03:52.383088 ignition[856]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 14 22:03:52.383088 ignition[856]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 
14 22:03:52.383088 ignition[856]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:03:52.416100 ignition[856]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:03:52.417182 ignition[856]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:03:52.417182 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:03:52.417182 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:03:52.417182 ignition[856]: INFO : files: files passed Jul 14 22:03:52.417182 ignition[856]: INFO : Ignition finished successfully Jul 14 22:03:52.424808 kernel: kauditd_printk_skb: 23 callbacks suppressed Jul 14 22:03:52.424829 kernel: audit: type=1130 audit(1752530632.418:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.417710 systemd[1]: Finished ignition-files.service. Jul 14 22:03:52.419681 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 14 22:03:52.423830 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 14 22:03:52.428225 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 14 22:03:52.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.424523 systemd[1]: Starting ignition-quench.service... Jul 14 22:03:52.435824 kernel: audit: type=1130 audit(1752530632.427:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.435844 kernel: audit: type=1130 audit(1752530632.431:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.435855 kernel: audit: type=1131 audit(1752530632.431:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.436013 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:03:52.427438 systemd[1]: Finished initrd-setup-root-after-ignition.service. 
Jul 14 22:03:52.429024 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 14 22:03:52.429093 systemd[1]: Finished ignition-quench.service. Jul 14 22:03:52.432128 systemd[1]: Reached target ignition-complete.target. Jul 14 22:03:52.437057 systemd[1]: Starting initrd-parse-etc.service... Jul 14 22:03:52.449315 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:03:52.449410 systemd[1]: Finished initrd-parse-etc.service. Jul 14 22:03:52.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.450640 systemd[1]: Reached target initrd-fs.target. Jul 14 22:03:52.455242 kernel: audit: type=1130 audit(1752530632.449:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.455264 kernel: audit: type=1131 audit(1752530632.449:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.454748 systemd[1]: Reached target initrd.target. Jul 14 22:03:52.455786 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 14 22:03:52.456535 systemd[1]: Starting dracut-pre-pivot.service... Jul 14 22:03:52.466544 systemd[1]: Finished dracut-pre-pivot.service. Jul 14 22:03:52.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.467905 systemd[1]: Starting initrd-cleanup.service... Jul 14 22:03:52.470730 kernel: audit: type=1130 audit(1752530632.466:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.475860 systemd[1]: Stopped target nss-lookup.target. Jul 14 22:03:52.476579 systemd[1]: Stopped target remote-cryptsetup.target. Jul 14 22:03:52.477603 systemd[1]: Stopped target timers.target. Jul 14 22:03:52.478539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:03:52.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.478654 systemd[1]: Stopped dracut-pre-pivot.service. Jul 14 22:03:52.482764 kernel: audit: type=1131 audit(1752530632.478:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.479605 systemd[1]: Stopped target initrd.target. Jul 14 22:03:52.482394 systemd[1]: Stopped target basic.target. Jul 14 22:03:52.483314 systemd[1]: Stopped target ignition-complete.target. Jul 14 22:03:52.484310 systemd[1]: Stopped target ignition-diskful.target. Jul 14 22:03:52.485309 systemd[1]: Stopped target initrd-root-device.target. Jul 14 22:03:52.486381 systemd[1]: Stopped target remote-fs.target. 
Jul 14 22:03:52.487447 systemd[1]: Stopped target remote-fs-pre.target. Jul 14 22:03:52.488619 systemd[1]: Stopped target sysinit.target. Jul 14 22:03:52.489531 systemd[1]: Stopped target local-fs.target. Jul 14 22:03:52.490534 systemd[1]: Stopped target local-fs-pre.target. Jul 14 22:03:52.491595 systemd[1]: Stopped target swap.target. Jul 14 22:03:52.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.492472 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:03:52.496757 kernel: audit: type=1131 audit(1752530632.493:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.492576 systemd[1]: Stopped dracut-pre-mount.service. Jul 14 22:03:52.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.493585 systemd[1]: Stopped target cryptsetup.target. Jul 14 22:03:52.500489 kernel: audit: type=1131 audit(1752530632.497:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.496189 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:03:52.496283 systemd[1]: Stopped dracut-initqueue.service. Jul 14 22:03:52.497412 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:03:52.497506 systemd[1]: Stopped ignition-fetch-offline.service. Jul 14 22:03:52.500119 systemd[1]: Stopped target paths.target. Jul 14 22:03:52.501010 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:03:52.502956 systemd[1]: Stopped systemd-ask-password-console.path. Jul 14 22:03:52.503674 systemd[1]: Stopped target slices.target. Jul 14 22:03:52.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.504636 systemd[1]: Stopped target sockets.target. Jul 14 22:03:52.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.505552 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:03:52.505627 systemd[1]: Closed iscsid.socket. Jul 14 22:03:52.506792 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:03:52.506891 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 14 22:03:52.507954 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:03:52.508044 systemd[1]: Stopped ignition-files.service. Jul 14 22:03:52.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:03:52.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.518089 ignition[896]: INFO : Ignition 2.14.0 Jul 14 22:03:52.518089 ignition[896]: INFO : Stage: umount Jul 14 22:03:52.518089 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:03:52.518089 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:03:52.518089 ignition[896]: INFO : umount: umount passed Jul 14 22:03:52.518089 ignition[896]: INFO : Ignition finished successfully Jul 14 22:03:52.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.509681 systemd[1]: Stopping ignition-mount.service... Jul 14 22:03:52.510507 systemd[1]: Stopping iscsiuio.service... Jul 14 22:03:52.511835 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:03:52.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.511966 systemd[1]: Stopped kmod-static-nodes.service. Jul 14 22:03:52.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.514443 systemd[1]: Stopping sysroot-boot.service... Jul 14 22:03:52.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.515423 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:03:52.515526 systemd[1]: Stopped systemd-udev-trigger.service. Jul 14 22:03:52.516632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:03:52.516726 systemd[1]: Stopped dracut-pre-trigger.service. Jul 14 22:03:52.518803 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 14 22:03:52.518891 systemd[1]: Stopped iscsiuio.service. Jul 14 22:03:52.519814 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:03:52.519877 systemd[1]: Closed iscsiuio.socket. Jul 14 22:03:52.520799 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:03:52.520878 systemd[1]: Finished initrd-cleanup.service. 
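The initrd teardown in this part of the log is recorded twice: once as systemd's own "Stopped ..."/"Closed ..." messages and once as kernel audit SERVICE_START/SERVICE_STOP records for PID 1 (the audit type 1130/1131 lines). When post-processing a captured console log like this one, a small parser over just the unit= and res= fields visible in those records is enough to reconstruct a per-unit timeline. The regex below is a sketch matched against the format as it appears here and makes no assumptions beyond the fields shown.

    import re
    import sys

    # Matches audit service records as printed in this console log, e.g.
    #   audit[1]: SERVICE_STOP pid=1 ... msg='unit=ignition-mount comm="systemd" ... res=success'
    AUDIT_RE = re.compile(
        r"audit\[\d+\]:\s+(?P<event>SERVICE_START|SERVICE_STOP)\s+.*?"
        r"unit=(?P<unit>[^ ]+).*?res=(?P<res>\w+)"
    )

    def service_events(lines):
        """Yield (event, unit, result) for every audit service record found on each line."""
        for line in lines:
            for m in AUDIT_RE.finditer(line):
                yield m.group("event"), m.group("unit"), m.group("res")

    if __name__ == "__main__":
        for event, unit, res in service_events(sys.stdin):
            print(f"{event:13s} {unit:40s} {res}")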
Jul 14 22:03:52.522605 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:03:52.522691 systemd[1]: Stopped ignition-mount.service. Jul 14 22:03:52.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.524946 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:03:52.525874 systemd[1]: Stopped target network.target. Jul 14 22:03:52.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.526959 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:03:52.527011 systemd[1]: Stopped ignition-disks.service. Jul 14 22:03:52.530360 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:03:52.530396 systemd[1]: Stopped ignition-kargs.service. Jul 14 22:03:52.531541 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:03:52.550000 audit: BPF prog-id=6 op=UNLOAD Jul 14 22:03:52.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.531575 systemd[1]: Stopped ignition-setup.service. Jul 14 22:03:52.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.532608 systemd[1]: Stopping systemd-networkd.service... Jul 14 22:03:52.533711 systemd[1]: Stopping systemd-resolved.service... Jul 14 22:03:52.542231 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:03:52.542330 systemd[1]: Stopped systemd-resolved.service. Jul 14 22:03:52.543350 systemd-networkd[739]: eth0: DHCPv6 lease lost Jul 14 22:03:52.557000 audit: BPF prog-id=9 op=UNLOAD Jul 14 22:03:52.545493 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:03:52.545583 systemd[1]: Stopped systemd-networkd.service. Jul 14 22:03:52.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.546991 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:03:52.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.547023 systemd[1]: Closed systemd-networkd.socket. Jul 14 22:03:52.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.548507 systemd[1]: Stopping network-cleanup.service... Jul 14 22:03:52.549524 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:03:52.549576 systemd[1]: Stopped parse-ip-for-networkd.service. 
Jul 14 22:03:52.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.551085 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:03:52.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.551126 systemd[1]: Stopped systemd-sysctl.service. Jul 14 22:03:52.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.552796 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:03:52.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.552837 systemd[1]: Stopped systemd-modules-load.service. Jul 14 22:03:52.553713 systemd[1]: Stopping systemd-udevd.service... Jul 14 22:03:52.556409 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 14 22:03:52.559116 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:03:52.559203 systemd[1]: Stopped sysroot-boot.service. Jul 14 22:03:52.560317 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:03:52.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.560393 systemd[1]: Stopped network-cleanup.service. Jul 14 22:03:52.562353 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:03:52.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:52.562453 systemd[1]: Stopped systemd-udevd.service. Jul 14 22:03:52.563517 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:03:52.563547 systemd[1]: Closed systemd-udevd-control.socket. Jul 14 22:03:52.564372 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:03:52.564400 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 14 22:03:52.565369 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:03:52.565405 systemd[1]: Stopped dracut-pre-udev.service. Jul 14 22:03:52.566494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:03:52.566528 systemd[1]: Stopped dracut-cmdline.service. Jul 14 22:03:52.567469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:03:52.567501 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 14 22:03:52.568590 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:03:52.568634 systemd[1]: Stopped initrd-setup-root.service. Jul 14 22:03:52.570259 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
Jul 14 22:03:52.571340 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:03:52.571390 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 14 22:03:52.575472 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:03:52.575557 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 14 22:03:52.576712 systemd[1]: Reached target initrd-switch-root.target. Jul 14 22:03:52.578461 systemd[1]: Starting initrd-switch-root.service... Jul 14 22:03:52.585294 systemd[1]: Switching root. Jul 14 22:03:52.603251 iscsid[752]: iscsid shutting down. Jul 14 22:03:52.603931 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 14 22:03:52.603960 systemd-journald[290]: Journal stopped Jul 14 22:03:54.618126 kernel: SELinux: Class mctp_socket not defined in policy. Jul 14 22:03:54.618182 kernel: SELinux: Class anon_inode not defined in policy. Jul 14 22:03:54.618193 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 14 22:03:54.618203 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:03:54.618221 kernel: SELinux: policy capability open_perms=1 Jul 14 22:03:54.618234 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:03:54.618244 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:03:54.618254 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:03:54.618263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:03:54.618273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:03:54.618282 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:03:54.618293 systemd[1]: Successfully loaded SELinux policy in 30.129ms. Jul 14 22:03:54.618311 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.448ms. Jul 14 22:03:54.618323 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 14 22:03:54.618335 systemd[1]: Detected virtualization kvm. Jul 14 22:03:54.618346 systemd[1]: Detected architecture arm64. Jul 14 22:03:54.618358 systemd[1]: Detected first boot. Jul 14 22:03:54.618369 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:03:54.618379 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 14 22:03:54.618389 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:03:54.618400 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:03:54.618412 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:03:54.618424 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:03:54.618435 systemd[1]: iscsid.service: Deactivated successfully. Jul 14 22:03:54.618446 systemd[1]: Stopped iscsid.service. Jul 14 22:03:54.618457 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:03:54.618468 systemd[1]: Stopped initrd-switch-root.service. 
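The complaints above about locksmithd.service (CPUShares= and MemoryLimit=) and docker.socket (a ListenStream= path under /var/run/) are deprecation warnings, not failures: the directives still take effect, but the supported spellings are CPUWeight=, MemoryMax=, and /run/docker.sock. A hypothetical helper like the sketch below, which only scans unit files for the directive names quoted in these warnings, could locate remaining occurrences; the search path and the directive table are assumptions for illustration, not part of this log.

    from pathlib import Path

    # Directives systemd flags as deprecated in the warnings above, with their replacements.
    DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

    def scan_units(root="/usr/lib/systemd/system"):
        """Report unit-file lines that still use a deprecated resource-control directive."""
        for unit in sorted(Path(root).glob("*.service")):
            for lineno, line in enumerate(unit.read_text(errors="replace").splitlines(), 1):
                for old, new in DEPRECATED.items():
                    if line.lstrip().startswith(old):
                        print(f"{unit}:{lineno}: {old.rstrip('=')} is deprecated; use {new.rstrip('=')}")

    if __name__ == "__main__":
        scan_units()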
Jul 14 22:03:54.618479 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:03:54.618490 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 14 22:03:54.618501 systemd[1]: Created slice system-addon\x2drun.slice. Jul 14 22:03:54.618511 systemd[1]: Created slice system-getty.slice. Jul 14 22:03:54.618522 systemd[1]: Created slice system-modprobe.slice. Jul 14 22:03:54.618533 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 14 22:03:54.618544 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 14 22:03:54.618554 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 14 22:03:54.618568 systemd[1]: Created slice user.slice. Jul 14 22:03:54.618578 systemd[1]: Started systemd-ask-password-console.path. Jul 14 22:03:54.618590 systemd[1]: Started systemd-ask-password-wall.path. Jul 14 22:03:54.618600 systemd[1]: Set up automount boot.automount. Jul 14 22:03:54.618619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 14 22:03:54.618632 systemd[1]: Stopped target initrd-switch-root.target. Jul 14 22:03:54.618644 systemd[1]: Stopped target initrd-fs.target. Jul 14 22:03:54.618654 systemd[1]: Stopped target initrd-root-fs.target. Jul 14 22:03:54.618665 systemd[1]: Reached target integritysetup.target. Jul 14 22:03:54.618675 systemd[1]: Reached target remote-cryptsetup.target. Jul 14 22:03:54.618686 systemd[1]: Reached target remote-fs.target. Jul 14 22:03:54.618697 systemd[1]: Reached target slices.target. Jul 14 22:03:54.618708 systemd[1]: Reached target swap.target. Jul 14 22:03:54.618722 systemd[1]: Reached target torcx.target. Jul 14 22:03:54.618733 systemd[1]: Reached target veritysetup.target. Jul 14 22:03:54.618744 systemd[1]: Listening on systemd-coredump.socket. Jul 14 22:03:54.618755 systemd[1]: Listening on systemd-initctl.socket. Jul 14 22:03:54.618766 systemd[1]: Listening on systemd-networkd.socket. Jul 14 22:03:54.618776 systemd[1]: Listening on systemd-udevd-control.socket. Jul 14 22:03:54.618786 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 14 22:03:54.618796 systemd[1]: Listening on systemd-userdbd.socket. Jul 14 22:03:54.618806 systemd[1]: Mounting dev-hugepages.mount... Jul 14 22:03:54.618816 systemd[1]: Mounting dev-mqueue.mount... Jul 14 22:03:54.618828 systemd[1]: Mounting media.mount... Jul 14 22:03:54.618838 systemd[1]: Mounting sys-kernel-debug.mount... Jul 14 22:03:54.618849 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 14 22:03:54.618859 systemd[1]: Mounting tmp.mount... Jul 14 22:03:54.618869 systemd[1]: Starting flatcar-tmpfiles.service... Jul 14 22:03:54.618880 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:03:54.618894 systemd[1]: Starting kmod-static-nodes.service... Jul 14 22:03:54.618904 systemd[1]: Starting modprobe@configfs.service... Jul 14 22:03:54.618915 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:03:54.618935 systemd[1]: Starting modprobe@drm.service... Jul 14 22:03:54.618946 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:03:54.618958 systemd[1]: Starting modprobe@fuse.service... Jul 14 22:03:54.618968 systemd[1]: Starting modprobe@loop.service... Jul 14 22:03:54.618979 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:03:54.618989 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:03:54.618999 systemd[1]: Stopped systemd-fsck-root.service. 
Jul 14 22:03:54.619009 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:03:54.619020 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:03:54.619030 systemd[1]: Stopped systemd-journald.service. Jul 14 22:03:54.619040 kernel: fuse: init (API version 7.34) Jul 14 22:03:54.619053 kernel: loop: module loaded Jul 14 22:03:54.619065 systemd[1]: Starting systemd-journald.service... Jul 14 22:03:54.619075 systemd[1]: Starting systemd-modules-load.service... Jul 14 22:03:54.619087 systemd[1]: Starting systemd-network-generator.service... Jul 14 22:03:54.619098 systemd[1]: Starting systemd-remount-fs.service... Jul 14 22:03:54.619108 systemd[1]: Starting systemd-udev-trigger.service... Jul 14 22:03:54.619119 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:03:54.619129 systemd[1]: Stopped verity-setup.service. Jul 14 22:03:54.619140 systemd[1]: Mounted dev-hugepages.mount. Jul 14 22:03:54.619150 systemd[1]: Mounted dev-mqueue.mount. Jul 14 22:03:54.619160 systemd[1]: Mounted media.mount. Jul 14 22:03:54.619171 systemd[1]: Mounted sys-kernel-debug.mount. Jul 14 22:03:54.619182 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 14 22:03:54.619192 systemd[1]: Mounted tmp.mount. Jul 14 22:03:54.619202 systemd[1]: Finished kmod-static-nodes.service. Jul 14 22:03:54.619215 systemd-journald[991]: Journal started Jul 14 22:03:54.619255 systemd-journald[991]: Runtime Journal (/run/log/journal/acb64acf5c1d42318f2214b7c0b9d802) is 6.0M, max 48.7M, 42.6M free. Jul 14 22:03:52.674000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:03:52.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 22:03:52.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 14 22:03:52.743000 audit: BPF prog-id=10 op=LOAD Jul 14 22:03:52.743000 audit: BPF prog-id=10 op=UNLOAD Jul 14 22:03:52.743000 audit: BPF prog-id=11 op=LOAD Jul 14 22:03:52.743000 audit: BPF prog-id=11 op=UNLOAD Jul 14 22:03:52.793000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 14 22:03:52.793000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b4 a1=40000c8de0 a2=40000cf040 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:03:52.793000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 22:03:52.794000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 14 22:03:52.794000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 
a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:03:52.794000 audit: CWD cwd="/" Jul 14 22:03:52.794000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:03:52.794000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 14 22:03:52.794000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 14 22:03:54.483000 audit: BPF prog-id=12 op=LOAD Jul 14 22:03:54.483000 audit: BPF prog-id=3 op=UNLOAD Jul 14 22:03:54.483000 audit: BPF prog-id=13 op=LOAD Jul 14 22:03:54.483000 audit: BPF prog-id=14 op=LOAD Jul 14 22:03:54.483000 audit: BPF prog-id=4 op=UNLOAD Jul 14 22:03:54.483000 audit: BPF prog-id=5 op=UNLOAD Jul 14 22:03:54.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.494000 audit: BPF prog-id=12 op=UNLOAD Jul 14 22:03:54.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:03:54.580000 audit: BPF prog-id=15 op=LOAD Jul 14 22:03:54.583000 audit: BPF prog-id=16 op=LOAD Jul 14 22:03:54.583000 audit: BPF prog-id=17 op=LOAD Jul 14 22:03:54.583000 audit: BPF prog-id=13 op=UNLOAD Jul 14 22:03:54.583000 audit: BPF prog-id=14 op=UNLOAD Jul 14 22:03:54.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.617000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 14 22:03:54.617000 audit[991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffefd759c0 a2=4000 a3=1 items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:03:54.617000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 14 22:03:54.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.482451 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:03:52.791770 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:03:54.482463 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 14 22:03:52.792061 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 22:03:54.485479 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 14 22:03:52.792082 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 22:03:52.792114 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 14 22:03:52.792124 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 14 22:03:52.792157 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 14 22:03:52.792169 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 14 22:03:52.792370 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 14 22:03:52.792404 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 14 22:03:52.792416 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 14 22:03:52.793647 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 14 22:03:54.621276 systemd[1]: Started systemd-journald.service. Jul 14 22:03:52.793689 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 14 22:03:52.793710 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.101: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.101 Jul 14 22:03:54.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:03:52.793725 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 14 22:03:52.793744 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.101: no such file or directory" path=/var/lib/torcx/store/3510.3.101 Jul 14 22:03:52.793758 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:52Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 14 22:03:54.229074 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 22:03:54.229342 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 22:03:54.229445 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 22:03:54.229619 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 14 22:03:54.229677 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 14 22:03:54.229737 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-07-14T22:03:54Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 14 22:03:54.621904 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:03:54.622081 systemd[1]: Finished modprobe@configfs.service. Jul 14 22:03:54.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.623069 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:03:54.623217 systemd[1]: Finished modprobe@dm_mod.service. 
Jul 14 22:03:54.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.624163 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:03:54.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.624806 systemd[1]: Finished modprobe@drm.service. Jul 14 22:03:54.625788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:03:54.625959 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:03:54.626826 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:03:54.627006 systemd[1]: Finished modprobe@fuse.service. Jul 14 22:03:54.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.627842 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:03:54.628678 systemd[1]: Finished modprobe@loop.service. Jul 14 22:03:54.629696 systemd[1]: Finished systemd-modules-load.service. Jul 14 22:03:54.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.628000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.630612 systemd[1]: Finished systemd-network-generator.service. 
Jul 14 22:03:54.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.631578 systemd[1]: Finished systemd-remount-fs.service. Jul 14 22:03:54.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.632635 systemd[1]: Reached target network-pre.target. Jul 14 22:03:54.634419 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 14 22:03:54.636098 systemd[1]: Mounting sys-kernel-config.mount... Jul 14 22:03:54.636669 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:03:54.638444 systemd[1]: Starting systemd-hwdb-update.service... Jul 14 22:03:54.640482 systemd[1]: Starting systemd-journal-flush.service... Jul 14 22:03:54.641355 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:03:54.642491 systemd[1]: Starting systemd-random-seed.service... Jul 14 22:03:54.643310 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:03:54.644495 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:03:54.647902 systemd-journald[991]: Time spent on flushing to /var/log/journal/acb64acf5c1d42318f2214b7c0b9d802 is 20.538ms for 968 entries. Jul 14 22:03:54.647902 systemd-journald[991]: System Journal (/var/log/journal/acb64acf5c1d42318f2214b7c0b9d802) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:03:54.680518 systemd-journald[991]: Received client request to flush runtime journal. Jul 14 22:03:54.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.650102 systemd[1]: Finished flatcar-tmpfiles.service. Jul 14 22:03:54.651140 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 14 22:03:54.652215 systemd[1]: Mounted sys-kernel-config.mount. Jul 14 22:03:54.681222 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 22:03:54.655711 systemd[1]: Starting systemd-sysusers.service... Jul 14 22:03:54.656868 systemd[1]: Finished systemd-udev-trigger.service. Jul 14 22:03:54.659414 systemd[1]: Finished systemd-random-seed.service. Jul 14 22:03:54.660307 systemd[1]: Reached target first-boot-complete.target. Jul 14 22:03:54.662340 systemd[1]: Starting systemd-udev-settle.service... Jul 14 22:03:54.681268 systemd[1]: Finished systemd-sysctl.service. 
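The journald status lines above give concrete sizing for this VM: flushing the 968 runtime entries to /var/log/journal/acb64acf5c1d42318f2214b7c0b9d802 took 20.538 ms, roughly 21 microseconds per entry, and the system journal opens at 8.0 MiB against its 195.6 MiB cap. The arithmetic, for reference:

    # Figures reported by systemd-journald above.
    flush_ms = 20.538      # time spent flushing the runtime journal
    entries = 968          # entries flushed in that time
    print(f"{flush_ms / entries * 1000:.1f} us per entry")               # ~21.2 us

    used_mib, cap_mib = 8.0, 195.6
    print(f"{used_mib / cap_mib:.1%} of the system journal cap in use")  # ~4.1%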
Jul 14 22:03:54.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.682283 systemd[1]: Finished systemd-journal-flush.service. Jul 14 22:03:54.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:54.685542 systemd[1]: Finished systemd-sysusers.service. Jul 14 22:03:54.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.032298 systemd[1]: Finished systemd-hwdb-update.service. Jul 14 22:03:55.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.032000 audit: BPF prog-id=18 op=LOAD Jul 14 22:03:55.032000 audit: BPF prog-id=19 op=LOAD Jul 14 22:03:55.032000 audit: BPF prog-id=7 op=UNLOAD Jul 14 22:03:55.032000 audit: BPF prog-id=8 op=UNLOAD Jul 14 22:03:55.034501 systemd[1]: Starting systemd-udevd.service... Jul 14 22:03:55.052218 systemd-udevd[1033]: Using default interface naming scheme 'v252'. Jul 14 22:03:55.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.067000 audit: BPF prog-id=20 op=LOAD Jul 14 22:03:55.063626 systemd[1]: Started systemd-udevd.service. Jul 14 22:03:55.068807 systemd[1]: Starting systemd-networkd.service... Jul 14 22:03:55.088000 audit: BPF prog-id=21 op=LOAD Jul 14 22:03:55.089000 audit: BPF prog-id=22 op=LOAD Jul 14 22:03:55.092000 audit: BPF prog-id=23 op=LOAD Jul 14 22:03:55.094541 systemd[1]: Starting systemd-userdbd.service... Jul 14 22:03:55.097359 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 14 22:03:55.128079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 14 22:03:55.147866 systemd[1]: Started systemd-userdbd.service. Jul 14 22:03:55.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.185318 systemd[1]: Finished systemd-udev-settle.service. Jul 14 22:03:55.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.187590 systemd[1]: Starting lvm2-activation-early.service... Jul 14 22:03:55.196697 systemd-networkd[1049]: lo: Link UP Jul 14 22:03:55.196704 systemd-networkd[1049]: lo: Gained carrier Jul 14 22:03:55.197113 systemd-networkd[1049]: Enumeration completed Jul 14 22:03:55.197233 systemd-networkd[1049]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:03:55.197278 systemd[1]: Started systemd-networkd.service. 
Jul 14 22:03:55.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.198854 systemd-networkd[1049]: eth0: Link UP Jul 14 22:03:55.198863 systemd-networkd[1049]: eth0: Gained carrier Jul 14 22:03:55.202835 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:03:55.221071 systemd-networkd[1049]: eth0: DHCPv4 address 10.0.0.99/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:03:55.230852 systemd[1]: Finished lvm2-activation-early.service. Jul 14 22:03:55.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.231792 systemd[1]: Reached target cryptsetup.target. Jul 14 22:03:55.233733 systemd[1]: Starting lvm2-activation.service... Jul 14 22:03:55.237462 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:03:55.268874 systemd[1]: Finished lvm2-activation.service. Jul 14 22:03:55.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.269753 systemd[1]: Reached target local-fs-pre.target. Jul 14 22:03:55.270433 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:03:55.270464 systemd[1]: Reached target local-fs.target. Jul 14 22:03:55.271072 systemd[1]: Reached target machines.target. Jul 14 22:03:55.272938 systemd[1]: Starting ldconfig.service... Jul 14 22:03:55.273996 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.274057 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.275196 systemd[1]: Starting systemd-boot-update.service... Jul 14 22:03:55.277733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 14 22:03:55.279772 systemd[1]: Starting systemd-machine-id-commit.service... Jul 14 22:03:55.282353 systemd[1]: Starting systemd-sysext.service... Jul 14 22:03:55.283425 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) Jul 14 22:03:55.284560 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 14 22:03:55.291329 systemd[1]: Unmounting usr-share-oem.mount... Jul 14 22:03:55.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.292589 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 14 22:03:55.298761 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 14 22:03:55.299009 systemd[1]: Unmounted usr-share-oem.mount. Jul 14 22:03:55.311948 kernel: loop0: detected capacity change from 0 to 211168 Jul 14 22:03:55.374158 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
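systemd-networkd brings eth0 up with a DHCPv4 lease of 10.0.0.99/16 from 10.0.0.1, which also serves as the gateway. The derived values for that lease follow directly from the prefix; the stdlib ipaddress module reproduces them, purely as a worked example over the figures in the log:

    import ipaddress

    # Lease reported above: 10.0.0.99/16 acquired from 10.0.0.1 (also the gateway).
    iface = ipaddress.ip_interface("10.0.0.99/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                    # 10.0.0.0/16
    print(iface.network.broadcast_address)  # 10.0.255.255
    print(gateway in iface.network)         # True: the gateway is on-link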
Jul 14 22:03:55.375119 systemd[1]: Finished systemd-machine-id-commit.service. Jul 14 22:03:55.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.378806 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Jul 14 22:03:55.378806 systemd-fsck[1080]: /dev/vda1: 236 files, 117310/258078 clusters Jul 14 22:03:55.382114 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 14 22:03:55.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.387622 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:03:55.384875 systemd[1]: Mounting boot.mount... Jul 14 22:03:55.392652 systemd[1]: Mounted boot.mount. Jul 14 22:03:55.403933 kernel: loop1: detected capacity change from 0 to 211168 Jul 14 22:03:55.404093 systemd[1]: Finished systemd-boot-update.service. Jul 14 22:03:55.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.411430 (sd-sysext)[1084]: Using extensions 'kubernetes'. Jul 14 22:03:55.411853 (sd-sysext)[1084]: Merged extensions into '/usr'. Jul 14 22:03:55.431862 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.433376 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:03:55.435317 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:03:55.437112 systemd[1]: Starting modprobe@loop.service... Jul 14 22:03:55.437983 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.438122 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.439018 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:03:55.439167 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:03:55.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.440592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:03:55.440749 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:03:55.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 14 22:03:55.442120 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:03:55.442315 systemd[1]: Finished modprobe@loop.service. Jul 14 22:03:55.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.443590 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:03:55.443705 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.491982 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:03:55.495838 systemd[1]: Finished ldconfig.service. Jul 14 22:03:55.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.612910 systemd[1]: Mounting usr-share-oem.mount... Jul 14 22:03:55.618065 systemd[1]: Mounted usr-share-oem.mount. Jul 14 22:03:55.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.619794 systemd[1]: Finished systemd-sysext.service. Jul 14 22:03:55.621913 systemd[1]: Starting ensure-sysext.service... Jul 14 22:03:55.623616 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 14 22:03:55.628476 systemd[1]: Reloading. Jul 14 22:03:55.636413 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 14 22:03:55.638383 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:03:55.641526 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:03:55.672160 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-07-14T22:03:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:03:55.672208 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-07-14T22:03:55Z" level=info msg="torcx already run" Jul 14 22:03:55.731479 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:03:55.731502 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:03:55.748811 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 14 22:03:55.791000 audit: BPF prog-id=24 op=LOAD Jul 14 22:03:55.791000 audit: BPF prog-id=15 op=UNLOAD Jul 14 22:03:55.791000 audit: BPF prog-id=25 op=LOAD Jul 14 22:03:55.791000 audit: BPF prog-id=26 op=LOAD Jul 14 22:03:55.791000 audit: BPF prog-id=16 op=UNLOAD Jul 14 22:03:55.791000 audit: BPF prog-id=17 op=UNLOAD Jul 14 22:03:55.792000 audit: BPF prog-id=27 op=LOAD Jul 14 22:03:55.792000 audit: BPF prog-id=28 op=LOAD Jul 14 22:03:55.792000 audit: BPF prog-id=18 op=UNLOAD Jul 14 22:03:55.792000 audit: BPF prog-id=19 op=UNLOAD Jul 14 22:03:55.794000 audit: BPF prog-id=29 op=LOAD Jul 14 22:03:55.794000 audit: BPF prog-id=20 op=UNLOAD Jul 14 22:03:55.794000 audit: BPF prog-id=30 op=LOAD Jul 14 22:03:55.794000 audit: BPF prog-id=21 op=UNLOAD Jul 14 22:03:55.795000 audit: BPF prog-id=31 op=LOAD Jul 14 22:03:55.795000 audit: BPF prog-id=32 op=LOAD Jul 14 22:03:55.795000 audit: BPF prog-id=22 op=UNLOAD Jul 14 22:03:55.795000 audit: BPF prog-id=23 op=UNLOAD Jul 14 22:03:55.797703 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 14 22:03:55.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.802081 systemd[1]: Starting audit-rules.service... Jul 14 22:03:55.803890 systemd[1]: Starting clean-ca-certificates.service... Jul 14 22:03:55.808000 audit: BPF prog-id=33 op=LOAD Jul 14 22:03:55.806341 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 14 22:03:55.811300 systemd[1]: Starting systemd-resolved.service... Jul 14 22:03:55.811000 audit: BPF prog-id=34 op=LOAD Jul 14 22:03:55.813805 systemd[1]: Starting systemd-timesyncd.service... Jul 14 22:03:55.815840 systemd[1]: Starting systemd-update-utmp.service... Jul 14 22:03:55.817223 systemd[1]: Finished clean-ca-certificates.service. Jul 14 22:03:55.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.820000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.819936 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:03:55.823513 systemd[1]: Finished systemd-update-utmp.service. Jul 14 22:03:55.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.825999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.827373 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:03:55.829425 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:03:55.831360 systemd[1]: Starting modprobe@loop.service... Jul 14 22:03:55.831986 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 14 22:03:55.832123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.832226 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:03:55.833155 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 14 22:03:55.834645 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:03:55.834793 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:03:55.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.835863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:03:55.836026 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:03:55.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.837059 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:03:55.837174 systemd[1]: Finished modprobe@loop.service. Jul 14 22:03:55.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.838445 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:03:55.838550 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.840583 systemd[1]: Starting systemd-update-done.service... Jul 14 22:03:55.843287 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.844868 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:03:55.846843 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:03:55.849293 systemd[1]: Starting modprobe@loop.service... Jul 14 22:03:55.850002 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 14 22:03:55.850190 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.850350 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:03:55.851313 systemd[1]: Finished systemd-update-done.service. Jul 14 22:03:55.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.852383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:03:55.852511 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:03:55.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.853725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:03:55.853847 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:03:55.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 14 22:03:55.854998 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:03:55.855120 systemd[1]: Finished modprobe@loop.service. Jul 14 22:03:55.856174 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:03:55.856283 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 14 22:03:55.857000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 14 22:03:55.857000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffed5f1ad0 a2=420 a3=0 items=0 ppid=1150 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 14 22:03:55.857000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 14 22:03:55.858452 augenrules[1177]: No rules Jul 14 22:03:55.858827 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.860315 systemd[1]: Starting modprobe@dm_mod.service... Jul 14 22:03:55.862337 systemd[1]: Starting modprobe@drm.service... Jul 14 22:03:55.864090 systemd[1]: Starting modprobe@efi_pstore.service... Jul 14 22:03:55.866265 systemd[1]: Starting modprobe@loop.service... Jul 14 22:03:55.867206 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.867464 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.869459 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 14 22:03:55.870425 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:03:55.872198 systemd[1]: Finished audit-rules.service. Jul 14 22:03:55.873337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:03:55.873465 systemd[1]: Finished modprobe@dm_mod.service. Jul 14 22:03:55.874614 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:03:55.874744 systemd[1]: Finished modprobe@drm.service. Jul 14 22:03:55.875840 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:03:55.875994 systemd[1]: Finished modprobe@efi_pstore.service. Jul 14 22:03:55.877358 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:03:55.877490 systemd[1]: Finished modprobe@loop.service. Jul 14 22:03:55.878701 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:03:55.878791 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.880115 systemd[1]: Finished ensure-sysext.service. Jul 14 22:03:55.881109 systemd[1]: Started systemd-timesyncd.service. Jul 14 22:03:55.881646 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:03:55.881703 systemd-timesyncd[1160]: Initial clock synchronization to Mon 2025-07-14 22:03:55.560480 UTC. Jul 14 22:03:55.882628 systemd[1]: Reached target time-set.target. Jul 14 22:03:55.883424 systemd-resolved[1157]: Positive Trust Anchors: Jul 14 22:03:55.883437 systemd-resolved[1157]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:03:55.883465 systemd-resolved[1157]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 14 22:03:55.897753 systemd-resolved[1157]: Defaulting to hostname 'linux'. Jul 14 22:03:55.899262 systemd[1]: Started systemd-resolved.service. Jul 14 22:03:55.900055 systemd[1]: Reached target network.target. Jul 14 22:03:55.900649 systemd[1]: Reached target nss-lookup.target. Jul 14 22:03:55.901369 systemd[1]: Reached target sysinit.target. Jul 14 22:03:55.902080 systemd[1]: Started motdgen.path. Jul 14 22:03:55.902629 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 14 22:03:55.903644 systemd[1]: Started logrotate.timer. Jul 14 22:03:55.904317 systemd[1]: Started mdadm.timer. Jul 14 22:03:55.904823 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 14 22:03:55.905558 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:03:55.905606 systemd[1]: Reached target paths.target. Jul 14 22:03:55.906221 systemd[1]: Reached target timers.target. Jul 14 22:03:55.907244 systemd[1]: Listening on dbus.socket. Jul 14 22:03:55.909052 systemd[1]: Starting docker.socket... Jul 14 22:03:55.912701 systemd[1]: Listening on sshd.socket. Jul 14 22:03:55.913565 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.914130 systemd[1]: Listening on docker.socket. Jul 14 22:03:55.914803 systemd[1]: Reached target sockets.target. Jul 14 22:03:55.915450 systemd[1]: Reached target basic.target. Jul 14 22:03:55.916098 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.916132 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 14 22:03:55.917457 systemd[1]: Starting containerd.service... Jul 14 22:03:55.919289 systemd[1]: Starting dbus.service... Jul 14 22:03:55.921146 systemd[1]: Starting enable-oem-cloudinit.service... Jul 14 22:03:55.923412 systemd[1]: Starting extend-filesystems.service... Jul 14 22:03:55.924179 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 14 22:03:55.925970 systemd[1]: Starting motdgen.service... Jul 14 22:03:55.928910 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 14 22:03:55.931087 systemd[1]: Starting sshd-keygen.service... Jul 14 22:03:55.934311 systemd[1]: Starting systemd-logind.service... Jul 14 22:03:55.938631 jq[1192]: false Jul 14 22:03:55.935105 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 14 22:03:55.935243 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 14 22:03:55.935967 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:03:55.942258 jq[1206]: true Jul 14 22:03:55.936791 systemd[1]: Starting update-engine.service... Jul 14 22:03:55.938829 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 14 22:03:55.941778 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:03:55.942015 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 14 22:03:55.942557 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:03:55.942746 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 14 22:03:55.949329 jq[1209]: true Jul 14 22:03:55.965712 extend-filesystems[1193]: Found loop1 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda1 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda2 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda3 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found usr Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda4 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda6 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda7 Jul 14 22:03:55.965712 extend-filesystems[1193]: Found vda9 Jul 14 22:03:55.965712 extend-filesystems[1193]: Checking size of /dev/vda9 Jul 14 22:03:55.978438 dbus-daemon[1191]: [system] SELinux support is enabled Jul 14 22:03:55.978667 systemd[1]: Started dbus.service. Jul 14 22:03:55.982746 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:03:55.982799 systemd[1]: Reached target system-config.target. Jul 14 22:03:55.983592 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:03:55.985720 extend-filesystems[1193]: Resized partition /dev/vda9 Jul 14 22:03:55.983624 systemd[1]: Reached target user-config.target. Jul 14 22:03:55.986417 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:03:55.986594 systemd[1]: Finished motdgen.service. Jul 14 22:03:56.002435 extend-filesystems[1237]: resize2fs 1.46.5 (30-Dec-2021) Jul 14 22:03:56.040172 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 22:03:56.043571 systemd-logind[1201]: New seat seat0. Jul 14 22:03:56.043996 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:03:56.047172 systemd[1]: Started systemd-logind.service. Jul 14 22:03:56.061769 update_engine[1203]: I0714 22:03:56.061465 1203 main.cc:92] Flatcar Update Engine starting Jul 14 22:03:56.064995 systemd[1]: Started update-engine.service. Jul 14 22:03:56.067706 env[1210]: time="2025-07-14T22:03:56.067590448Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 14 22:03:56.068196 systemd[1]: Started locksmithd.service. Jul 14 22:03:56.069500 bash[1236]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:03:56.071006 update_engine[1203]: I0714 22:03:56.070135 1203 update_check_scheduler.cc:74] Next update check in 3m56s Jul 14 22:03:56.073005 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:03:56.073428 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Jul 14 22:03:56.085978 extend-filesystems[1237]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:03:56.085978 extend-filesystems[1237]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:03:56.085978 extend-filesystems[1237]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:03:56.089723 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Jul 14 22:03:56.087622 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:03:56.087803 systemd[1]: Finished extend-filesystems.service. Jul 14 22:03:56.094890 env[1210]: time="2025-07-14T22:03:56.094716234Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:03:56.095038 env[1210]: time="2025-07-14T22:03:56.094893733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.096492 env[1210]: time="2025-07-14T22:03:56.096448296Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.187-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:03:56.096561 env[1210]: time="2025-07-14T22:03:56.096498363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097027 env[1210]: time="2025-07-14T22:03:56.096948694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097027 env[1210]: time="2025-07-14T22:03:56.096975571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097027 env[1210]: time="2025-07-14T22:03:56.096991735Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 14 22:03:56.097027 env[1210]: time="2025-07-14T22:03:56.097002101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097140 env[1210]: time="2025-07-14T22:03:56.097092252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097415 env[1210]: time="2025-07-14T22:03:56.097310142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097469 env[1210]: time="2025-07-14T22:03:56.097447173Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:03:56.097505 env[1210]: time="2025-07-14T22:03:56.097467522Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 22:03:56.097531 env[1210]: time="2025-07-14T22:03:56.097522542Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 14 22:03:56.097553 env[1210]: time="2025-07-14T22:03:56.097535941Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104398113Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104440616Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104456089Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104491143Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104505234Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104521245Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.104560 env[1210]: time="2025-07-14T22:03:56.104539405Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105018 env[1210]: time="2025-07-14T22:03:56.104983824Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105057 env[1210]: time="2025-07-14T22:03:56.105031165Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105109 env[1210]: time="2025-07-14T22:03:56.105094209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105146 env[1210]: time="2025-07-14T22:03:56.105115096Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105146 env[1210]: time="2025-07-14T22:03:56.105129225Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:03:56.105308 env[1210]: time="2025-07-14T22:03:56.105277812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:03:56.105374 env[1210]: time="2025-07-14T22:03:56.105356522Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:03:56.105673 env[1210]: time="2025-07-14T22:03:56.105624593Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:03:56.105673 env[1210]: time="2025-07-14T22:03:56.105655885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105673 env[1210]: time="2025-07-14T22:03:56.105671013Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:03:56.105800 env[1210]: time="2025-07-14T22:03:56.105783663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 14 22:03:56.105829 env[1210]: time="2025-07-14T22:03:56.105800403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105829 env[1210]: time="2025-07-14T22:03:56.105812305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105829 env[1210]: time="2025-07-14T22:03:56.105823862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105887 env[1210]: time="2025-07-14T22:03:56.105835764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105887 env[1210]: time="2025-07-14T22:03:56.105847628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105887 env[1210]: time="2025-07-14T22:03:56.105858955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105887 env[1210]: time="2025-07-14T22:03:56.105869936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.105981 env[1210]: time="2025-07-14T22:03:56.105888557Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:03:56.106050 env[1210]: time="2025-07-14T22:03:56.106031846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.106081 env[1210]: time="2025-07-14T22:03:56.106053616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.106112 env[1210]: time="2025-07-14T22:03:56.106098384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:03:56.106137 env[1210]: time="2025-07-14T22:03:56.106116583Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:03:56.106163 env[1210]: time="2025-07-14T22:03:56.106131519Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 14 22:03:56.106163 env[1210]: time="2025-07-14T22:03:56.106143076Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:03:56.106163 env[1210]: time="2025-07-14T22:03:56.106159700Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 14 22:03:56.106237 env[1210]: time="2025-07-14T22:03:56.106194870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:03:56.106477 env[1210]: time="2025-07-14T22:03:56.106424010Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.106482370Z" level=info msg="Connect containerd service" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.106512970Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107382687Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107739643Z" level=info msg="Start subscribing containerd event" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107788673Z" level=info msg="Start recovering state" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107868380Z" level=info msg="Start event monitor" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107886311Z" level=info msg="Start snapshots syncer" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107887770Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107944440Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107896063Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.107972545Z" level=info msg="Start streaming server" Jul 14 22:03:56.110083 env[1210]: time="2025-07-14T22:03:56.108960902Z" level=info msg="containerd successfully booted in 0.042321s" Jul 14 22:03:56.108078 systemd[1]: Started containerd.service. Jul 14 22:03:56.131950 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:03:57.248028 systemd-networkd[1049]: eth0: Gained IPv6LL Jul 14 22:03:57.249691 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 14 22:03:57.250729 systemd[1]: Reached target network-online.target. Jul 14 22:03:57.252934 systemd[1]: Starting kubelet.service... Jul 14 22:03:57.693043 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:03:57.711929 systemd[1]: Finished sshd-keygen.service. Jul 14 22:03:57.714200 systemd[1]: Starting issuegen.service... Jul 14 22:03:57.719250 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:03:57.719423 systemd[1]: Finished issuegen.service. Jul 14 22:03:57.721767 systemd[1]: Starting systemd-user-sessions.service... Jul 14 22:03:57.728277 systemd[1]: Finished systemd-user-sessions.service. Jul 14 22:03:57.730724 systemd[1]: Started getty@tty1.service. Jul 14 22:03:57.732873 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 14 22:03:57.733994 systemd[1]: Reached target getty.target. Jul 14 22:03:57.825520 systemd[1]: Started kubelet.service. Jul 14 22:03:57.826667 systemd[1]: Reached target multi-user.target. Jul 14 22:03:57.828734 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 14 22:03:57.835732 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 14 22:03:57.835906 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 14 22:03:57.836855 systemd[1]: Startup finished in 578ms (kernel) + 34.076s (initrd) + 5.197s (userspace) = 39.851s. Jul 14 22:03:58.252067 kubelet[1268]: E0714 22:03:58.252015 1268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:03:58.254143 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:03:58.254269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:04:01.501816 systemd[1]: Created slice system-sshd.slice. Jul 14 22:04:01.502852 systemd[1]: Started sshd@0-10.0.0.99:22-10.0.0.1:35528.service. Jul 14 22:04:01.553592 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 35528 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:04:01.555934 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:01.565755 systemd[1]: Created slice user-500.slice. Jul 14 22:04:01.566827 systemd[1]: Starting user-runtime-dir@500.service... Jul 14 22:04:01.568781 systemd-logind[1201]: New session 1 of user core. Jul 14 22:04:01.574814 systemd[1]: Finished user-runtime-dir@500.service. Jul 14 22:04:01.576128 systemd[1]: Starting user@500.service... 
Jul 14 22:04:01.578972 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:01.640601 systemd[1281]: Queued start job for default target default.target. Jul 14 22:04:01.641105 systemd[1281]: Reached target paths.target. Jul 14 22:04:01.641136 systemd[1281]: Reached target sockets.target. Jul 14 22:04:01.641147 systemd[1281]: Reached target timers.target. Jul 14 22:04:01.641156 systemd[1281]: Reached target basic.target. Jul 14 22:04:01.641196 systemd[1281]: Reached target default.target. Jul 14 22:04:01.641218 systemd[1281]: Startup finished in 56ms. Jul 14 22:04:01.641282 systemd[1]: Started user@500.service. Jul 14 22:04:01.642209 systemd[1]: Started session-1.scope. Jul 14 22:04:01.692099 systemd[1]: Started sshd@1-10.0.0.99:22-10.0.0.1:35536.service. Jul 14 22:04:01.731459 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 35536 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:04:01.732669 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:01.736125 systemd-logind[1201]: New session 2 of user core. Jul 14 22:04:01.737299 systemd[1]: Started session-2.scope. Jul 14 22:04:01.789976 sshd[1290]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:01.793537 systemd[1]: Started sshd@2-10.0.0.99:22-10.0.0.1:35546.service. Jul 14 22:04:01.794038 systemd[1]: sshd@1-10.0.0.99:22-10.0.0.1:35536.service: Deactivated successfully. Jul 14 22:04:01.794663 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:04:01.795169 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:04:01.796010 systemd-logind[1201]: Removed session 2. Jul 14 22:04:01.833366 sshd[1295]: Accepted publickey for core from 10.0.0.1 port 35546 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:04:01.834604 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:01.837939 systemd-logind[1201]: New session 3 of user core. Jul 14 22:04:01.838762 systemd[1]: Started session-3.scope. Jul 14 22:04:01.887416 sshd[1295]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:01.890060 systemd[1]: sshd@2-10.0.0.99:22-10.0.0.1:35546.service: Deactivated successfully. Jul 14 22:04:01.890624 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:04:01.891206 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:04:01.892249 systemd[1]: Started sshd@3-10.0.0.99:22-10.0.0.1:35554.service. Jul 14 22:04:01.892894 systemd-logind[1201]: Removed session 3. Jul 14 22:04:01.934041 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 35554 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:04:01.935634 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:01.938809 systemd-logind[1201]: New session 4 of user core. Jul 14 22:04:01.939647 systemd[1]: Started session-4.scope. Jul 14 22:04:01.991871 sshd[1303]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:01.996008 systemd[1]: sshd@3-10.0.0.99:22-10.0.0.1:35554.service: Deactivated successfully. Jul 14 22:04:01.996592 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:04:01.997178 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:04:01.998293 systemd[1]: Started sshd@4-10.0.0.99:22-10.0.0.1:35564.service. Jul 14 22:04:01.999425 systemd-logind[1201]: Removed session 4. 
Jul 14 22:04:02.038269 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 35564 ssh2: RSA SHA256:BOxEaGpHMktIkRdcKvKv9Es2//92qEL6t3QfRP9zfwU Jul 14 22:04:02.039553 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:04:02.042839 systemd-logind[1201]: New session 5 of user core. Jul 14 22:04:02.044404 systemd[1]: Started session-5.scope. Jul 14 22:04:02.105285 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:04:02.105517 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 14 22:04:02.117223 systemd[1]: Starting coreos-metadata.service... Jul 14 22:04:02.123778 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:04:02.123980 systemd[1]: Finished coreos-metadata.service. Jul 14 22:04:02.605847 systemd[1]: Stopped kubelet.service. Jul 14 22:04:02.607750 systemd[1]: Starting kubelet.service... Jul 14 22:04:02.629625 systemd[1]: Reloading. Jul 14 22:04:02.681106 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2025-07-14T22:04:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.101 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.101 /var/lib/torcx/store]" Jul 14 22:04:02.681136 /usr/lib/systemd/system-generators/torcx-generator[1376]: time="2025-07-14T22:04:02Z" level=info msg="torcx already run" Jul 14 22:04:02.852149 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 14 22:04:02.852169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 14 22:04:02.867115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:04:02.930575 systemd[1]: Started kubelet.service. Jul 14 22:04:02.931984 systemd[1]: Stopping kubelet.service... Jul 14 22:04:02.932219 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:04:02.932389 systemd[1]: Stopped kubelet.service. Jul 14 22:04:02.933895 systemd[1]: Starting kubelet.service... Jul 14 22:04:03.026240 systemd[1]: Started kubelet.service. Jul 14 22:04:03.066251 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:04:03.066251 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:04:03.066251 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:04:03.066595 kubelet[1419]: I0714 22:04:03.066295 1419 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:04:04.277968 kubelet[1419]: I0714 22:04:04.277920 1419 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 22:04:04.277968 kubelet[1419]: I0714 22:04:04.277953 1419 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:04:04.278284 kubelet[1419]: I0714 22:04:04.278154 1419 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 22:04:04.323038 kubelet[1419]: I0714 22:04:04.322990 1419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:04:04.334845 kubelet[1419]: E0714 22:04:04.334789 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:04:04.335012 kubelet[1419]: I0714 22:04:04.334996 1419 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:04:04.337537 kubelet[1419]: I0714 22:04:04.337515 1419 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:04:04.338818 kubelet[1419]: I0714 22:04:04.338773 1419 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:04:04.339098 kubelet[1419]: I0714 22:04:04.338911 1419 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:04:04.339290 kubelet[1419]: I0714 22:04:04.339273 1419 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:04:04.339364 kubelet[1419]: I0714 22:04:04.339354 1419 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 22:04:04.341273 
kubelet[1419]: I0714 22:04:04.341251 1419 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:04:04.346816 kubelet[1419]: I0714 22:04:04.346793 1419 kubelet.go:480] "Attempting to sync node with API server" Jul 14 22:04:04.347076 kubelet[1419]: I0714 22:04:04.346993 1419 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:04:04.347076 kubelet[1419]: I0714 22:04:04.347038 1419 kubelet.go:386] "Adding apiserver pod source" Jul 14 22:04:04.347076 kubelet[1419]: I0714 22:04:04.347053 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:04:04.347157 kubelet[1419]: E0714 22:04:04.347093 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:04.347343 kubelet[1419]: E0714 22:04:04.347322 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:04.348099 kubelet[1419]: I0714 22:04:04.348075 1419 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 14 22:04:04.348959 kubelet[1419]: I0714 22:04:04.348939 1419 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 22:04:04.349184 kubelet[1419]: W0714 22:04:04.349171 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:04:04.351868 kubelet[1419]: I0714 22:04:04.351850 1419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:04:04.352007 kubelet[1419]: I0714 22:04:04.351993 1419 server.go:1289] "Started kubelet" Jul 14 22:04:04.352367 kubelet[1419]: I0714 22:04:04.352332 1419 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:04:04.353549 kubelet[1419]: I0714 22:04:04.353486 1419 server.go:317] "Adding debug handlers to kubelet server" Jul 14 22:04:04.355252 kubelet[1419]: I0714 22:04:04.355196 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:04:04.355646 kubelet[1419]: I0714 22:04:04.355619 1419 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:04:04.359974 kubelet[1419]: E0714 22:04:04.359933 1419 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:04:04.360167 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Jul 14 22:04:04.360420 kubelet[1419]: I0714 22:04:04.360400 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:04:04.360812 kubelet[1419]: I0714 22:04:04.360789 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:04:04.361130 kubelet[1419]: I0714 22:04:04.361117 1419 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:04:04.361367 kubelet[1419]: E0714 22:04:04.361351 1419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found" Jul 14 22:04:04.361812 kubelet[1419]: I0714 22:04:04.361795 1419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:04:04.362339 kubelet[1419]: I0714 22:04:04.362314 1419 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:04:04.363940 kubelet[1419]: I0714 22:04:04.363899 1419 factory.go:223] Registration of the systemd container factory successfully Jul 14 22:04:04.364148 kubelet[1419]: I0714 22:04:04.364126 1419 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:04:04.365938 kubelet[1419]: I0714 22:04:04.365897 1419 factory.go:223] Registration of the containerd container factory successfully Jul 14 22:04:04.371892 kubelet[1419]: E0714 22:04:04.371858 1419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.99\" not found" node="10.0.0.99" Jul 14 22:04:04.374323 kubelet[1419]: I0714 22:04:04.374303 1419 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:04:04.374388 kubelet[1419]: I0714 22:04:04.374332 1419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:04:04.374388 kubelet[1419]: I0714 22:04:04.374353 1419 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:04:04.454142 kubelet[1419]: I0714 22:04:04.454100 1419 policy_none.go:49] "None policy: Start" Jul 14 22:04:04.454142 kubelet[1419]: I0714 22:04:04.454140 1419 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:04:04.454142 kubelet[1419]: I0714 22:04:04.454154 1419 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:04:04.461757 kubelet[1419]: E0714 22:04:04.461563 1419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.99\" not found" Jul 14 22:04:04.461655 systemd[1]: Created slice kubepods.slice. Jul 14 22:04:04.465705 systemd[1]: Created slice kubepods-burstable.slice. Jul 14 22:04:04.468091 systemd[1]: Created slice kubepods-besteffort.slice. Jul 14 22:04:04.478723 kubelet[1419]: E0714 22:04:04.478683 1419 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 22:04:04.478924 kubelet[1419]: I0714 22:04:04.478844 1419 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:04:04.478924 kubelet[1419]: I0714 22:04:04.478859 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:04:04.479231 kubelet[1419]: I0714 22:04:04.479202 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:04:04.480271 kubelet[1419]: E0714 22:04:04.479981 1419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 22:04:04.480271 kubelet[1419]: E0714 22:04:04.480023 1419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.99\" not found" Jul 14 22:04:04.480700 kubelet[1419]: I0714 22:04:04.480676 1419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 14 22:04:04.543337 kubelet[1419]: I0714 22:04:04.543239 1419 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 22:04:04.543337 kubelet[1419]: I0714 22:04:04.543272 1419 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 22:04:04.543337 kubelet[1419]: I0714 22:04:04.543291 1419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 14 22:04:04.543337 kubelet[1419]: I0714 22:04:04.543298 1419 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 22:04:04.543544 kubelet[1419]: E0714 22:04:04.543343 1419 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 14 22:04:04.580863 kubelet[1419]: I0714 22:04:04.580832 1419 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.99" Jul 14 22:04:04.585085 kubelet[1419]: I0714 22:04:04.585054 1419 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.99" Jul 14 22:04:04.691616 kubelet[1419]: I0714 22:04:04.691578 1419 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 14 22:04:04.691957 env[1210]: time="2025-07-14T22:04:04.691896481Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 22:04:04.692223 kubelet[1419]: I0714 22:04:04.692102 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 14 22:04:05.046893 sudo[1313]: pam_unix(sudo:session): session closed for user root Jul 14 22:04:05.048555 sshd[1310]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:05.050873 systemd[1]: sshd@4-10.0.0.99:22-10.0.0.1:35564.service: Deactivated successfully. Jul 14 22:04:05.051539 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:04:05.052056 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:04:05.052776 systemd-logind[1201]: Removed session 5. 
Jul 14 22:04:05.280592 kubelet[1419]: I0714 22:04:05.280548 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 14 22:04:05.280970 kubelet[1419]: I0714 22:04:05.280791 1419 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 14 22:04:05.280970 kubelet[1419]: I0714 22:04:05.280837 1419 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 14 22:04:05.281254 kubelet[1419]: I0714 22:04:05.281233 1419 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 14 22:04:05.347800 kubelet[1419]: I0714 22:04:05.347754 1419 apiserver.go:52] "Watching apiserver" Jul 14 22:04:05.348111 kubelet[1419]: E0714 22:04:05.347812 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:05.361443 systemd[1]: Created slice kubepods-burstable-pod32ee73e2_935d_48cc_834d_1e4198754b9e.slice. Jul 14 22:04:05.364425 kubelet[1419]: I0714 22:04:05.364394 1419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:04:05.367392 kubelet[1419]: I0714 22:04:05.367360 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-hostproc\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367392 kubelet[1419]: I0714 22:04:05.367395 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-cgroup\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367421 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-etc-cni-netd\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367443 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-kernel\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367461 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-hubble-tls\") pod \"cilium-lnkcb\" (UID: 
\"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367476 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e28d293-a3a7-4397-a9fa-c2f61b9655f4-kube-proxy\") pod \"kube-proxy-cvlpp\" (UID: \"3e28d293-a3a7-4397-a9fa-c2f61b9655f4\") " pod="kube-system/kube-proxy-cvlpp" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367494 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e28d293-a3a7-4397-a9fa-c2f61b9655f4-xtables-lock\") pod \"kube-proxy-cvlpp\" (UID: \"3e28d293-a3a7-4397-a9fa-c2f61b9655f4\") " pod="kube-system/kube-proxy-cvlpp" Jul 14 22:04:05.367509 kubelet[1419]: I0714 22:04:05.367511 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cni-path\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367634 kubelet[1419]: I0714 22:04:05.367524 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-lib-modules\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367634 kubelet[1419]: I0714 22:04:05.367537 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-config-path\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367634 kubelet[1419]: I0714 22:04:05.367553 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-xtables-lock\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367634 kubelet[1419]: I0714 22:04:05.367567 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32ee73e2-935d-48cc-834d-1e4198754b9e-clustermesh-secrets\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367634 kubelet[1419]: I0714 22:04:05.367587 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsg7\" (UniqueName: \"kubernetes.io/projected/3e28d293-a3a7-4397-a9fa-c2f61b9655f4-kube-api-access-9tsg7\") pod \"kube-proxy-cvlpp\" (UID: \"3e28d293-a3a7-4397-a9fa-c2f61b9655f4\") " pod="kube-system/kube-proxy-cvlpp" Jul 14 22:04:05.367733 kubelet[1419]: I0714 22:04:05.367606 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-net\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367733 kubelet[1419]: I0714 22:04:05.367621 1419 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z2f\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-kube-api-access-g4z2f\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367733 kubelet[1419]: I0714 22:04:05.367635 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e28d293-a3a7-4397-a9fa-c2f61b9655f4-lib-modules\") pod \"kube-proxy-cvlpp\" (UID: \"3e28d293-a3a7-4397-a9fa-c2f61b9655f4\") " pod="kube-system/kube-proxy-cvlpp" Jul 14 22:04:05.367791 kubelet[1419]: I0714 22:04:05.367725 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-run\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.367791 kubelet[1419]: I0714 22:04:05.367760 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-bpf-maps\") pod \"cilium-lnkcb\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " pod="kube-system/cilium-lnkcb" Jul 14 22:04:05.375111 systemd[1]: Created slice kubepods-besteffort-pod3e28d293_a3a7_4397_a9fa_c2f61b9655f4.slice. Jul 14 22:04:05.469500 kubelet[1419]: I0714 22:04:05.469465 1419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 14 22:04:05.674540 kubelet[1419]: E0714 22:04:05.673865 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:05.675087 env[1210]: time="2025-07-14T22:04:05.675036830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnkcb,Uid:32ee73e2-935d-48cc-834d-1e4198754b9e,Namespace:kube-system,Attempt:0,}" Jul 14 22:04:05.684631 kubelet[1419]: E0714 22:04:05.684490 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:05.685505 env[1210]: time="2025-07-14T22:04:05.685472494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvlpp,Uid:3e28d293-a3a7-4397-a9fa-c2f61b9655f4,Namespace:kube-system,Attempt:0,}" Jul 14 22:04:06.349216 kubelet[1419]: E0714 22:04:06.349163 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:07.350139 kubelet[1419]: E0714 22:04:07.350098 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:08.350550 kubelet[1419]: E0714 22:04:08.350467 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:09.351006 kubelet[1419]: E0714 22:04:09.350949 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:10.352023 kubelet[1419]: E0714 22:04:10.351963 1419 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:11.352768 kubelet[1419]: E0714 22:04:11.352731 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:12.353681 kubelet[1419]: E0714 22:04:12.353621 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:13.354428 kubelet[1419]: E0714 22:04:13.354387 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:14.354530 kubelet[1419]: E0714 22:04:14.354491 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:15.355617 kubelet[1419]: E0714 22:04:15.355573 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:16.356289 kubelet[1419]: E0714 22:04:16.356209 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:16.652569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1698817408.mount: Deactivated successfully. Jul 14 22:04:16.657595 env[1210]: time="2025-07-14T22:04:16.657534532Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.659819 env[1210]: time="2025-07-14T22:04:16.659782231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.660497 env[1210]: time="2025-07-14T22:04:16.660469557Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.662554 env[1210]: time="2025-07-14T22:04:16.662515738Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.665283 env[1210]: time="2025-07-14T22:04:16.665248886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.667934 env[1210]: time="2025-07-14T22:04:16.666839615Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.667934 env[1210]: time="2025-07-14T22:04:16.667847248Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.669894 env[1210]: time="2025-07-14T22:04:16.669837585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:16.699536 env[1210]: time="2025-07-14T22:04:16.699349667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:04:16.699536 env[1210]: time="2025-07-14T22:04:16.699423381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:04:16.699536 env[1210]: time="2025-07-14T22:04:16.699434829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:04:16.700080 env[1210]: time="2025-07-14T22:04:16.699969460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2ecc90fce3f48a9889441cc4586257b569aaaafc7c488b7be72e48aaf20d8d pid=1487 runtime=io.containerd.runc.v2 Jul 14 22:04:16.700080 env[1210]: time="2025-07-14T22:04:16.699902287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:04:16.700080 env[1210]: time="2025-07-14T22:04:16.699944610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:04:16.700080 env[1210]: time="2025-07-14T22:04:16.699954981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:04:16.700511 env[1210]: time="2025-07-14T22:04:16.700147245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d pid=1488 runtime=io.containerd.runc.v2 Jul 14 22:04:16.724219 systemd[1]: Started cri-containerd-739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d.scope. Jul 14 22:04:16.726629 systemd[1]: Started cri-containerd-3f2ecc90fce3f48a9889441cc4586257b569aaaafc7c488b7be72e48aaf20d8d.scope. 
Jul 14 22:04:16.774630 env[1210]: time="2025-07-14T22:04:16.774583443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lnkcb,Uid:32ee73e2-935d-48cc-834d-1e4198754b9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\"" Jul 14 22:04:16.776050 kubelet[1419]: E0714 22:04:16.775518 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:16.777352 env[1210]: time="2025-07-14T22:04:16.777320300Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 14 22:04:16.780283 env[1210]: time="2025-07-14T22:04:16.780248783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cvlpp,Uid:3e28d293-a3a7-4397-a9fa-c2f61b9655f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f2ecc90fce3f48a9889441cc4586257b569aaaafc7c488b7be72e48aaf20d8d\"" Jul 14 22:04:16.781057 kubelet[1419]: E0714 22:04:16.780870 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:17.357141 kubelet[1419]: E0714 22:04:17.357101 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:18.357687 kubelet[1419]: E0714 22:04:18.357636 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:19.358262 kubelet[1419]: E0714 22:04:19.358216 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:20.358868 kubelet[1419]: E0714 22:04:20.358834 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:21.012468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2180030599.mount: Deactivated successfully. 
Jul 14 22:04:21.359467 kubelet[1419]: E0714 22:04:21.359436 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:22.360416 kubelet[1419]: E0714 22:04:22.360370 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:23.095239 env[1210]: time="2025-07-14T22:04:23.095201282Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:23.096980 env[1210]: time="2025-07-14T22:04:23.096954328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:23.098940 env[1210]: time="2025-07-14T22:04:23.098899284Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:23.099588 env[1210]: time="2025-07-14T22:04:23.099561321Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 14 22:04:23.101274 env[1210]: time="2025-07-14T22:04:23.101237851Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\"" Jul 14 22:04:23.103365 env[1210]: time="2025-07-14T22:04:23.103315062Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:04:23.115278 env[1210]: time="2025-07-14T22:04:23.115245436Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\"" Jul 14 22:04:23.115980 env[1210]: time="2025-07-14T22:04:23.115957418Z" level=info msg="StartContainer for \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\"" Jul 14 22:04:23.131017 systemd[1]: Started cri-containerd-e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5.scope. Jul 14 22:04:23.134001 systemd[1]: run-containerd-runc-k8s.io-e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5-runc.DbA74O.mount: Deactivated successfully. Jul 14 22:04:23.167595 env[1210]: time="2025-07-14T22:04:23.167553720Z" level=info msg="StartContainer for \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\" returns successfully" Jul 14 22:04:23.200936 systemd[1]: cri-containerd-e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5.scope: Deactivated successfully. 
Jul 14 22:04:23.297699 env[1210]: time="2025-07-14T22:04:23.297654742Z" level=info msg="shim disconnected" id=e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5 Jul 14 22:04:23.298189 env[1210]: time="2025-07-14T22:04:23.298145726Z" level=warning msg="cleaning up after shim disconnected" id=e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5 namespace=k8s.io Jul 14 22:04:23.298189 env[1210]: time="2025-07-14T22:04:23.298171058Z" level=info msg="cleaning up dead shim" Jul 14 22:04:23.305226 env[1210]: time="2025-07-14T22:04:23.305179366Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:04:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1602 runtime=io.containerd.runc.v2\n" Jul 14 22:04:23.361081 kubelet[1419]: E0714 22:04:23.360657 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:23.575483 kubelet[1419]: E0714 22:04:23.575453 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:23.578555 env[1210]: time="2025-07-14T22:04:23.578519344Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:04:23.589144 env[1210]: time="2025-07-14T22:04:23.589100391Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\"" Jul 14 22:04:23.589828 env[1210]: time="2025-07-14T22:04:23.589797510Z" level=info msg="StartContainer for \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\"" Jul 14 22:04:23.606656 systemd[1]: Started cri-containerd-ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1.scope. Jul 14 22:04:23.638043 env[1210]: time="2025-07-14T22:04:23.637668000Z" level=info msg="StartContainer for \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\" returns successfully" Jul 14 22:04:23.653925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:04:23.654111 systemd[1]: Stopped systemd-sysctl.service. Jul 14 22:04:23.654268 systemd[1]: Stopping systemd-sysctl.service... Jul 14 22:04:23.655564 systemd[1]: Starting systemd-sysctl.service... Jul 14 22:04:23.657182 systemd[1]: cri-containerd-ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1.scope: Deactivated successfully. Jul 14 22:04:23.662156 systemd[1]: Finished systemd-sysctl.service. 
Jul 14 22:04:23.676774 env[1210]: time="2025-07-14T22:04:23.676735302Z" level=info msg="shim disconnected" id=ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1 Jul 14 22:04:23.676955 env[1210]: time="2025-07-14T22:04:23.676935963Z" level=warning msg="cleaning up after shim disconnected" id=ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1 namespace=k8s.io Jul 14 22:04:23.677024 env[1210]: time="2025-07-14T22:04:23.677010441Z" level=info msg="cleaning up dead shim" Jul 14 22:04:23.683335 env[1210]: time="2025-07-14T22:04:23.683304968Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:04:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1668 runtime=io.containerd.runc.v2\n" Jul 14 22:04:24.111182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5-rootfs.mount: Deactivated successfully. Jul 14 22:04:24.275844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1156134305.mount: Deactivated successfully. Jul 14 22:04:24.347466 kubelet[1419]: E0714 22:04:24.347433 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:24.360989 kubelet[1419]: E0714 22:04:24.360947 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:24.578464 kubelet[1419]: E0714 22:04:24.578435 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:24.582051 env[1210]: time="2025-07-14T22:04:24.581757977Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:04:24.592057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608435461.mount: Deactivated successfully. Jul 14 22:04:24.598613 env[1210]: time="2025-07-14T22:04:24.598562164Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\"" Jul 14 22:04:24.599068 env[1210]: time="2025-07-14T22:04:24.599012254Z" level=info msg="StartContainer for \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\"" Jul 14 22:04:24.613324 systemd[1]: Started cri-containerd-cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974.scope. Jul 14 22:04:24.656467 env[1210]: time="2025-07-14T22:04:24.654617655Z" level=info msg="StartContainer for \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\" returns successfully" Jul 14 22:04:24.664737 systemd[1]: cri-containerd-cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974.scope: Deactivated successfully. 
Jul 14 22:04:24.792548 env[1210]: time="2025-07-14T22:04:24.792499458Z" level=info msg="shim disconnected" id=cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974 Jul 14 22:04:24.792548 env[1210]: time="2025-07-14T22:04:24.792547132Z" level=warning msg="cleaning up after shim disconnected" id=cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974 namespace=k8s.io Jul 14 22:04:24.792790 env[1210]: time="2025-07-14T22:04:24.792556484Z" level=info msg="cleaning up dead shim" Jul 14 22:04:24.816166 env[1210]: time="2025-07-14T22:04:24.816124449Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:04:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1724 runtime=io.containerd.runc.v2\n" Jul 14 22:04:24.821848 env[1210]: time="2025-07-14T22:04:24.821819769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:24.823309 env[1210]: time="2025-07-14T22:04:24.823274699Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8d27a60846a202a0f67dcac3333c24f9ff809f574a261b273945e9e05706518d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:24.825346 env[1210]: time="2025-07-14T22:04:24.825311273Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:24.827095 env[1210]: time="2025-07-14T22:04:24.827060722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:05f8984642d05b1b1a6c37605a4a566e46e7290f9291d17885f096c36861095b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:24.827565 env[1210]: time="2025-07-14T22:04:24.827530633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.0\" returns image reference \"sha256:8d27a60846a202a0f67dcac3333c24f9ff809f574a261b273945e9e05706518d\"" Jul 14 22:04:24.831011 env[1210]: time="2025-07-14T22:04:24.830940656Z" level=info msg="CreateContainer within sandbox \"3f2ecc90fce3f48a9889441cc4586257b569aaaafc7c488b7be72e48aaf20d8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:04:24.841323 env[1210]: time="2025-07-14T22:04:24.841276902Z" level=info msg="CreateContainer within sandbox \"3f2ecc90fce3f48a9889441cc4586257b569aaaafc7c488b7be72e48aaf20d8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"94a2d0e2e24209ccf3038a4ab1834c721c5e1c45c02e8f69b7eb883b41a1bebd\"" Jul 14 22:04:24.841846 env[1210]: time="2025-07-14T22:04:24.841825537Z" level=info msg="StartContainer for \"94a2d0e2e24209ccf3038a4ab1834c721c5e1c45c02e8f69b7eb883b41a1bebd\"" Jul 14 22:04:24.855376 systemd[1]: Started cri-containerd-94a2d0e2e24209ccf3038a4ab1834c721c5e1c45c02e8f69b7eb883b41a1bebd.scope. Jul 14 22:04:24.910323 env[1210]: time="2025-07-14T22:04:24.910276427Z" level=info msg="StartContainer for \"94a2d0e2e24209ccf3038a4ab1834c721c5e1c45c02e8f69b7eb883b41a1bebd\" returns successfully" Jul 14 22:04:25.110888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1157595723.mount: Deactivated successfully. 
Jul 14 22:04:25.361818 kubelet[1419]: E0714 22:04:25.361719 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:25.580815 kubelet[1419]: E0714 22:04:25.580784 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:25.583088 kubelet[1419]: E0714 22:04:25.583070 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:25.586402 env[1210]: time="2025-07-14T22:04:25.586353865Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:04:25.599248 env[1210]: time="2025-07-14T22:04:25.599206403Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\"" Jul 14 22:04:25.599867 env[1210]: time="2025-07-14T22:04:25.599799188Z" level=info msg="StartContainer for \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\"" Jul 14 22:04:25.608978 kubelet[1419]: I0714 22:04:25.608923 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cvlpp" podStartSLOduration=13.561728311 podStartE2EDuration="21.608893507s" podCreationTimestamp="2025-07-14 22:04:04 +0000 UTC" firstStartedPulling="2025-07-14 22:04:16.781317128 +0000 UTC m=+13.751026641" lastFinishedPulling="2025-07-14 22:04:24.828482364 +0000 UTC m=+21.798191837" observedRunningTime="2025-07-14 22:04:25.59276059 +0000 UTC m=+22.562470063" watchObservedRunningTime="2025-07-14 22:04:25.608893507 +0000 UTC m=+22.578603020" Jul 14 22:04:25.617439 systemd[1]: Started cri-containerd-4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc.scope. Jul 14 22:04:25.654815 systemd[1]: cri-containerd-4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc.scope: Deactivated successfully. Jul 14 22:04:25.657070 env[1210]: time="2025-07-14T22:04:25.657033193Z" level=info msg="StartContainer for \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\" returns successfully" Jul 14 22:04:25.688721 env[1210]: time="2025-07-14T22:04:25.688426435Z" level=info msg="shim disconnected" id=4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc Jul 14 22:04:25.688721 env[1210]: time="2025-07-14T22:04:25.688472756Z" level=warning msg="cleaning up after shim disconnected" id=4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc namespace=k8s.io Jul 14 22:04:25.688721 env[1210]: time="2025-07-14T22:04:25.688482548Z" level=info msg="cleaning up dead shim" Jul 14 22:04:25.694531 env[1210]: time="2025-07-14T22:04:25.694493085Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:04:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1957 runtime=io.containerd.runc.v2\n" Jul 14 22:04:26.110351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc-rootfs.mount: Deactivated successfully. 
Jul 14 22:04:26.362184 kubelet[1419]: E0714 22:04:26.362064 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:26.586972 kubelet[1419]: E0714 22:04:26.586912 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:26.587501 kubelet[1419]: E0714 22:04:26.587479 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:26.592040 env[1210]: time="2025-07-14T22:04:26.591950644Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:04:26.608734 env[1210]: time="2025-07-14T22:04:26.608679371Z" level=info msg="CreateContainer within sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\"" Jul 14 22:04:26.609252 env[1210]: time="2025-07-14T22:04:26.609216658Z" level=info msg="StartContainer for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\"" Jul 14 22:04:26.623260 systemd[1]: Started cri-containerd-931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495.scope. Jul 14 22:04:26.660441 env[1210]: time="2025-07-14T22:04:26.659974023Z" level=info msg="StartContainer for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" returns successfully" Jul 14 22:04:26.755390 kubelet[1419]: I0714 22:04:26.755090 1419 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 22:04:26.937957 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 14 22:04:27.180944 kernel: Initializing XFRM netlink socket Jul 14 22:04:27.183945 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 14 22:04:27.362497 kubelet[1419]: E0714 22:04:27.362432 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:27.591590 kubelet[1419]: E0714 22:04:27.591544 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:27.606606 kubelet[1419]: I0714 22:04:27.606543 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lnkcb" podStartSLOduration=17.282824045 podStartE2EDuration="23.606528306s" podCreationTimestamp="2025-07-14 22:04:04 +0000 UTC" firstStartedPulling="2025-07-14 22:04:16.776845742 +0000 UTC m=+13.746555254" lastFinishedPulling="2025-07-14 22:04:23.100550042 +0000 UTC m=+20.070259515" observedRunningTime="2025-07-14 22:04:27.60639747 +0000 UTC m=+24.576106983" watchObservedRunningTime="2025-07-14 22:04:27.606528306 +0000 UTC m=+24.576237779" Jul 14 22:04:28.363066 kubelet[1419]: E0714 22:04:28.363014 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:28.596326 kubelet[1419]: E0714 22:04:28.596285 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:28.797944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 14 22:04:28.798674 systemd-networkd[1049]: cilium_host: Link UP Jul 14 22:04:28.798785 systemd-networkd[1049]: cilium_net: Link UP Jul 14 22:04:28.798788 systemd-networkd[1049]: cilium_net: Gained carrier Jul 14 22:04:28.798985 systemd-networkd[1049]: cilium_host: Gained carrier Jul 14 22:04:28.881805 systemd-networkd[1049]: cilium_vxlan: Link UP Jul 14 22:04:28.881817 systemd-networkd[1049]: cilium_vxlan: Gained carrier Jul 14 22:04:29.193957 kernel: NET: Registered PF_ALG protocol family Jul 14 22:04:29.296162 systemd-networkd[1049]: cilium_host: Gained IPv6LL Jul 14 22:04:29.363773 kubelet[1419]: E0714 22:04:29.363704 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:29.599883 kubelet[1419]: E0714 22:04:29.599424 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:29.760096 systemd-networkd[1049]: cilium_net: Gained IPv6LL Jul 14 22:04:29.825953 systemd-networkd[1049]: lxc_health: Link UP Jul 14 22:04:29.838896 systemd-networkd[1049]: lxc_health: Gained carrier Jul 14 22:04:29.839066 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 22:04:30.272072 systemd-networkd[1049]: cilium_vxlan: Gained IPv6LL Jul 14 22:04:30.364692 kubelet[1419]: E0714 22:04:30.364652 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:30.600838 kubelet[1419]: E0714 22:04:30.600791 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:31.254135 systemd[1]: Created slice kubepods-besteffort-podb41cb98b_ab29_4e99_9759_9ffe582fdd2f.slice. 
Jul 14 22:04:31.323758 kubelet[1419]: I0714 22:04:31.323699 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvjpx\" (UniqueName: \"kubernetes.io/projected/b41cb98b-ab29-4e99-9759-9ffe582fdd2f-kube-api-access-mvjpx\") pod \"nginx-deployment-7fcdb87857-nj7fw\" (UID: \"b41cb98b-ab29-4e99-9759-9ffe582fdd2f\") " pod="default/nginx-deployment-7fcdb87857-nj7fw" Jul 14 22:04:31.365253 kubelet[1419]: E0714 22:04:31.365200 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:31.552102 systemd-networkd[1049]: lxc_health: Gained IPv6LL Jul 14 22:04:31.557761 env[1210]: time="2025-07-14T22:04:31.557709909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nj7fw,Uid:b41cb98b-ab29-4e99-9759-9ffe582fdd2f,Namespace:default,Attempt:0,}" Jul 14 22:04:31.602239 kubelet[1419]: E0714 22:04:31.602182 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:31.618245 systemd-networkd[1049]: lxca17f03885223: Link UP Jul 14 22:04:31.627961 kernel: eth0: renamed from tmp48a9c Jul 14 22:04:31.636204 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:04:31.636323 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca17f03885223: link becomes ready Jul 14 22:04:31.636316 systemd-networkd[1049]: lxca17f03885223: Gained carrier Jul 14 22:04:32.366067 kubelet[1419]: E0714 22:04:32.366002 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:32.604969 kubelet[1419]: E0714 22:04:32.604910 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:04:33.280048 systemd-networkd[1049]: lxca17f03885223: Gained IPv6LL Jul 14 22:04:33.367957 kubelet[1419]: E0714 22:04:33.366662 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:34.367647 kubelet[1419]: E0714 22:04:34.367607 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:34.511179 env[1210]: time="2025-07-14T22:04:34.511101896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:04:34.511477 env[1210]: time="2025-07-14T22:04:34.511188510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:04:34.511477 env[1210]: time="2025-07-14T22:04:34.511214514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:04:34.511727 env[1210]: time="2025-07-14T22:04:34.511675303Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415 pid=2503 runtime=io.containerd.runc.v2 Jul 14 22:04:34.524346 systemd[1]: run-containerd-runc-k8s.io-48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415-runc.CQl5CK.mount: Deactivated successfully. 
Jul 14 22:04:34.527821 systemd[1]: Started cri-containerd-48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415.scope. Jul 14 22:04:34.578130 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:04:34.596605 env[1210]: time="2025-07-14T22:04:34.596556723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-nj7fw,Uid:b41cb98b-ab29-4e99-9759-9ffe582fdd2f,Namespace:default,Attempt:0,} returns sandbox id \"48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415\"" Jul 14 22:04:34.597858 env[1210]: time="2025-07-14T22:04:34.597825915Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 14 22:04:35.367927 kubelet[1419]: E0714 22:04:35.367880 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:36.368880 kubelet[1419]: E0714 22:04:36.368805 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:36.960975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702790596.mount: Deactivated successfully. Jul 14 22:04:37.369629 kubelet[1419]: E0714 22:04:37.369577 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:38.179984 env[1210]: time="2025-07-14T22:04:38.179928315Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:38.180982 env[1210]: time="2025-07-14T22:04:38.180956960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:38.182677 env[1210]: time="2025-07-14T22:04:38.182642366Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:38.184817 env[1210]: time="2025-07-14T22:04:38.184776267Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:04:38.185545 env[1210]: time="2025-07-14T22:04:38.185508596Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 14 22:04:38.189627 env[1210]: time="2025-07-14T22:04:38.189587734Z" level=info msg="CreateContainer within sandbox \"48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 14 22:04:38.199230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3058757291.mount: Deactivated successfully. Jul 14 22:04:38.203550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1609393493.mount: Deactivated successfully. 
Jul 14 22:04:38.207774 env[1210]: time="2025-07-14T22:04:38.207719148Z" level=info msg="CreateContainer within sandbox \"48a9c38690fcbc30468bd2eb84e83e8406c2d5039f878b4e7e127c5c04d90415\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"44be835302cb18feb3b2dd4da28af1e87724d867275e3bacfb09a3673c22bde2\"" Jul 14 22:04:38.208236 env[1210]: time="2025-07-14T22:04:38.208205887Z" level=info msg="StartContainer for \"44be835302cb18feb3b2dd4da28af1e87724d867275e3bacfb09a3673c22bde2\"" Jul 14 22:04:38.222491 systemd[1]: Started cri-containerd-44be835302cb18feb3b2dd4da28af1e87724d867275e3bacfb09a3673c22bde2.scope. Jul 14 22:04:38.261177 env[1210]: time="2025-07-14T22:04:38.261114548Z" level=info msg="StartContainer for \"44be835302cb18feb3b2dd4da28af1e87724d867275e3bacfb09a3673c22bde2\" returns successfully" Jul 14 22:04:38.370355 kubelet[1419]: E0714 22:04:38.370304 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:38.625090 kubelet[1419]: I0714 22:04:38.624966 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-nj7fw" podStartSLOduration=4.035887116 podStartE2EDuration="7.624950254s" podCreationTimestamp="2025-07-14 22:04:31 +0000 UTC" firstStartedPulling="2025-07-14 22:04:34.597536151 +0000 UTC m=+31.567245624" lastFinishedPulling="2025-07-14 22:04:38.186599289 +0000 UTC m=+35.156308762" observedRunningTime="2025-07-14 22:04:38.624555526 +0000 UTC m=+35.594265039" watchObservedRunningTime="2025-07-14 22:04:38.624950254 +0000 UTC m=+35.594659767" Jul 14 22:04:39.371104 kubelet[1419]: E0714 22:04:39.371052 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:40.371556 kubelet[1419]: E0714 22:04:40.371500 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:40.988694 update_engine[1203]: I0714 22:04:40.988641 1203 update_attempter.cc:509] Updating boot flags... 
Jul 14 22:04:41.372110 kubelet[1419]: E0714 22:04:41.372056 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:42.372701 kubelet[1419]: E0714 22:04:42.372650 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:43.373144 kubelet[1419]: E0714 22:04:43.373082 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:44.348072 kubelet[1419]: E0714 22:04:44.348027 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:44.373385 kubelet[1419]: E0714 22:04:44.373359 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:45.373829 kubelet[1419]: E0714 22:04:45.373780 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:46.374347 kubelet[1419]: E0714 22:04:46.374306 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:47.374492 kubelet[1419]: E0714 22:04:47.374438 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:48.374610 kubelet[1419]: E0714 22:04:48.374558 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:49.376874 kubelet[1419]: E0714 22:04:49.376830 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:50.377835 kubelet[1419]: E0714 22:04:50.377794 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:51.378928 kubelet[1419]: E0714 22:04:51.378884 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:52.379997 kubelet[1419]: E0714 22:04:52.379955 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:53.381119 kubelet[1419]: E0714 22:04:53.381062 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:54.381786 kubelet[1419]: E0714 22:04:54.381742 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:55.382055 kubelet[1419]: E0714 22:04:55.382007 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:56.382600 kubelet[1419]: E0714 22:04:56.382532 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:57.383308 kubelet[1419]: E0714 22:04:57.383255 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:58.383865 kubelet[1419]: E0714 22:04:58.383807 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:04:59.384499 kubelet[1419]: E0714 22:04:59.384454 1419 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:00.385166 kubelet[1419]: E0714 22:05:00.385122 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:01.385466 kubelet[1419]: E0714 22:05:01.385426 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:02.386826 kubelet[1419]: E0714 22:05:02.386765 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:03.387777 kubelet[1419]: E0714 22:05:03.387709 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:04.347846 kubelet[1419]: E0714 22:05:04.347767 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:04.388586 kubelet[1419]: E0714 22:05:04.388537 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:05.389374 kubelet[1419]: E0714 22:05:05.389335 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:06.390766 kubelet[1419]: E0714 22:05:06.390709 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:07.391666 kubelet[1419]: E0714 22:05:07.391606 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:08.392093 kubelet[1419]: E0714 22:05:08.392049 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:09.393079 kubelet[1419]: E0714 22:05:09.393028 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:10.394107 kubelet[1419]: E0714 22:05:10.394057 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:11.394874 kubelet[1419]: E0714 22:05:11.394831 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:12.395454 kubelet[1419]: E0714 22:05:12.395413 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:12.808512 systemd[1]: Created slice kubepods-besteffort-pod0f6a9856_8288_4fe5_af4a_75f4fedc94b1.slice. 
Jul 14 22:05:12.860091 kubelet[1419]: I0714 22:05:12.860039 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0f6a9856-8288-4fe5-af4a-75f4fedc94b1-data\") pod \"nfs-server-provisioner-0\" (UID: \"0f6a9856-8288-4fe5-af4a-75f4fedc94b1\") " pod="default/nfs-server-provisioner-0" Jul 14 22:05:12.860091 kubelet[1419]: I0714 22:05:12.860095 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lrt5\" (UniqueName: \"kubernetes.io/projected/0f6a9856-8288-4fe5-af4a-75f4fedc94b1-kube-api-access-2lrt5\") pod \"nfs-server-provisioner-0\" (UID: \"0f6a9856-8288-4fe5-af4a-75f4fedc94b1\") " pod="default/nfs-server-provisioner-0" Jul 14 22:05:13.112024 env[1210]: time="2025-07-14T22:05:13.111625158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f6a9856-8288-4fe5-af4a-75f4fedc94b1,Namespace:default,Attempt:0,}" Jul 14 22:05:13.133704 systemd-networkd[1049]: lxc4cb0680640ef: Link UP Jul 14 22:05:13.143982 kernel: eth0: renamed from tmp716d9 Jul 14 22:05:13.152110 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:05:13.152198 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4cb0680640ef: link becomes ready Jul 14 22:05:13.151888 systemd-networkd[1049]: lxc4cb0680640ef: Gained carrier Jul 14 22:05:13.333535 env[1210]: time="2025-07-14T22:05:13.333453819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:05:13.333535 env[1210]: time="2025-07-14T22:05:13.333503660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:05:13.333535 env[1210]: time="2025-07-14T22:05:13.333514901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:05:13.334009 env[1210]: time="2025-07-14T22:05:13.333962195Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f pid=2655 runtime=io.containerd.runc.v2 Jul 14 22:05:13.348954 systemd[1]: Started cri-containerd-716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f.scope. Jul 14 22:05:13.372561 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:05:13.392093 env[1210]: time="2025-07-14T22:05:13.392043736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0f6a9856-8288-4fe5-af4a-75f4fedc94b1,Namespace:default,Attempt:0,} returns sandbox id \"716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f\"" Jul 14 22:05:13.394111 env[1210]: time="2025-07-14T22:05:13.394080963Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 14 22:05:13.396326 kubelet[1419]: E0714 22:05:13.396296 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:13.974621 systemd[1]: run-containerd-runc-k8s.io-716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f-runc.FplPew.mount: Deactivated successfully. 
Jul 14 22:05:14.397093 kubelet[1419]: E0714 22:05:14.397053 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:14.496082 systemd-networkd[1049]: lxc4cb0680640ef: Gained IPv6LL Jul 14 22:05:15.398009 kubelet[1419]: E0714 22:05:15.397953 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:15.479599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2154903518.mount: Deactivated successfully. Jul 14 22:05:16.398735 kubelet[1419]: E0714 22:05:16.398674 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:17.191398 env[1210]: time="2025-07-14T22:05:17.191349646Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:17.192478 env[1210]: time="2025-07-14T22:05:17.192451520Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:17.194193 env[1210]: time="2025-07-14T22:05:17.194160412Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:17.195733 env[1210]: time="2025-07-14T22:05:17.195704179Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:17.196446 env[1210]: time="2025-07-14T22:05:17.196413840Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 14 22:05:17.200269 env[1210]: time="2025-07-14T22:05:17.200229716Z" level=info msg="CreateContainer within sandbox \"716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 14 22:05:17.210165 env[1210]: time="2025-07-14T22:05:17.210129937Z" level=info msg="CreateContainer within sandbox \"716d951f30fb0c6bf39e28eaae70a6e7e9e7c935f01965c6dbd6fb89c6bd880f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"65a3a32960d9ecf1f1fdd080d0c48838d53be1ff52d779df8bfe95f965784ebe\"" Jul 14 22:05:17.210709 env[1210]: time="2025-07-14T22:05:17.210677313Z" level=info msg="StartContainer for \"65a3a32960d9ecf1f1fdd080d0c48838d53be1ff52d779df8bfe95f965784ebe\"" Jul 14 22:05:17.227968 systemd[1]: Started cri-containerd-65a3a32960d9ecf1f1fdd080d0c48838d53be1ff52d779df8bfe95f965784ebe.scope. 
Jul 14 22:05:17.287534 env[1210]: time="2025-07-14T22:05:17.287491725Z" level=info msg="StartContainer for \"65a3a32960d9ecf1f1fdd080d0c48838d53be1ff52d779df8bfe95f965784ebe\" returns successfully" Jul 14 22:05:17.399503 kubelet[1419]: E0714 22:05:17.399443 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:17.705136 kubelet[1419]: I0714 22:05:17.705070 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.901077113 podStartE2EDuration="5.705056041s" podCreationTimestamp="2025-07-14 22:05:12 +0000 UTC" firstStartedPulling="2025-07-14 22:05:13.393425662 +0000 UTC m=+70.363135175" lastFinishedPulling="2025-07-14 22:05:17.19740459 +0000 UTC m=+74.167114103" observedRunningTime="2025-07-14 22:05:17.703821364 +0000 UTC m=+74.673530877" watchObservedRunningTime="2025-07-14 22:05:17.705056041 +0000 UTC m=+74.674765554" Jul 14 22:05:18.400230 kubelet[1419]: E0714 22:05:18.400182 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:19.400674 kubelet[1419]: E0714 22:05:19.400632 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:20.401469 kubelet[1419]: E0714 22:05:20.401429 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:21.401677 kubelet[1419]: E0714 22:05:21.401642 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:22.402507 kubelet[1419]: E0714 22:05:22.402440 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:22.622662 systemd[1]: Created slice kubepods-besteffort-pod678e080e_4cd6_4c03_a94d_d9da021c40e9.slice. Jul 14 22:05:22.715865 kubelet[1419]: I0714 22:05:22.715746 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpl69\" (UniqueName: \"kubernetes.io/projected/678e080e-4cd6-4c03-a94d-d9da021c40e9-kube-api-access-qpl69\") pod \"test-pod-1\" (UID: \"678e080e-4cd6-4c03-a94d-d9da021c40e9\") " pod="default/test-pod-1" Jul 14 22:05:22.715865 kubelet[1419]: I0714 22:05:22.715800 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-05c75eec-7932-4aef-b04d-226a2a91ccd3\" (UniqueName: \"kubernetes.io/nfs/678e080e-4cd6-4c03-a94d-d9da021c40e9-pvc-05c75eec-7932-4aef-b04d-226a2a91ccd3\") pod \"test-pod-1\" (UID: \"678e080e-4cd6-4c03-a94d-d9da021c40e9\") " pod="default/test-pod-1" Jul 14 22:05:22.838946 kernel: FS-Cache: Loaded Jul 14 22:05:22.866162 kernel: RPC: Registered named UNIX socket transport module. Jul 14 22:05:22.866265 kernel: RPC: Registered udp transport module. Jul 14 22:05:22.866287 kernel: RPC: Registered tcp transport module. Jul 14 22:05:22.866303 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 14 22:05:22.906942 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 14 22:05:23.035006 kernel: NFS: Registering the id_resolver key type Jul 14 22:05:23.035149 kernel: Key type id_resolver registered Jul 14 22:05:23.035188 kernel: Key type id_legacy registered Jul 14 22:05:23.056553 nfsidmap[2770]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 22:05:23.059909 nfsidmap[2773]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 14 22:05:23.225471 env[1210]: time="2025-07-14T22:05:23.225421660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:678e080e-4cd6-4c03-a94d-d9da021c40e9,Namespace:default,Attempt:0,}" Jul 14 22:05:23.251718 systemd-networkd[1049]: lxc027f2a901462: Link UP Jul 14 22:05:23.261950 kernel: eth0: renamed from tmp4a0d1 Jul 14 22:05:23.266491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 14 22:05:23.266575 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc027f2a901462: link becomes ready Jul 14 22:05:23.267021 systemd-networkd[1049]: lxc027f2a901462: Gained carrier Jul 14 22:05:23.400486 env[1210]: time="2025-07-14T22:05:23.400405600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:05:23.400486 env[1210]: time="2025-07-14T22:05:23.400449201Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:05:23.400486 env[1210]: time="2025-07-14T22:05:23.400460641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:05:23.400869 env[1210]: time="2025-07-14T22:05:23.400830811Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4a0d1b6f7293a896be1cf40a5059890f3d3cb9a901fffa3954da08f0155b0f24 pid=2808 runtime=io.containerd.runc.v2 Jul 14 22:05:23.403527 kubelet[1419]: E0714 22:05:23.403484 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:23.410672 systemd[1]: Started cri-containerd-4a0d1b6f7293a896be1cf40a5059890f3d3cb9a901fffa3954da08f0155b0f24.scope. 
Jul 14 22:05:23.434675 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:05:23.451031 env[1210]: time="2025-07-14T22:05:23.450992564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:678e080e-4cd6-4c03-a94d-d9da021c40e9,Namespace:default,Attempt:0,} returns sandbox id \"4a0d1b6f7293a896be1cf40a5059890f3d3cb9a901fffa3954da08f0155b0f24\"" Jul 14 22:05:23.452240 env[1210]: time="2025-07-14T22:05:23.452214118Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 14 22:05:23.725793 env[1210]: time="2025-07-14T22:05:23.723869903Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:23.726389 env[1210]: time="2025-07-14T22:05:23.726350292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:23.728720 env[1210]: time="2025-07-14T22:05:23.728680236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:23.730287 env[1210]: time="2025-07-14T22:05:23.730256960Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:23.731615 env[1210]: time="2025-07-14T22:05:23.731585677Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 14 22:05:23.735517 env[1210]: time="2025-07-14T22:05:23.735463705Z" level=info msg="CreateContainer within sandbox \"4a0d1b6f7293a896be1cf40a5059890f3d3cb9a901fffa3954da08f0155b0f24\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 14 22:05:23.745948 env[1210]: time="2025-07-14T22:05:23.745881954Z" level=info msg="CreateContainer within sandbox \"4a0d1b6f7293a896be1cf40a5059890f3d3cb9a901fffa3954da08f0155b0f24\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a4410fccc09ee6e2de40530b8400173c4677197206cbf781bb0945a1548d6c18\"" Jul 14 22:05:23.746318 env[1210]: time="2025-07-14T22:05:23.746294325Z" level=info msg="StartContainer for \"a4410fccc09ee6e2de40530b8400173c4677197206cbf781bb0945a1548d6c18\"" Jul 14 22:05:23.759569 systemd[1]: Started cri-containerd-a4410fccc09ee6e2de40530b8400173c4677197206cbf781bb0945a1548d6c18.scope. 
Jul 14 22:05:23.788093 env[1210]: time="2025-07-14T22:05:23.788047325Z" level=info msg="StartContainer for \"a4410fccc09ee6e2de40530b8400173c4677197206cbf781bb0945a1548d6c18\" returns successfully" Jul 14 22:05:24.347520 kubelet[1419]: E0714 22:05:24.347465 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:24.404149 kubelet[1419]: E0714 22:05:24.404114 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:24.716440 kubelet[1419]: I0714 22:05:24.715790 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.435200693 podStartE2EDuration="11.715776605s" podCreationTimestamp="2025-07-14 22:05:13 +0000 UTC" firstStartedPulling="2025-07-14 22:05:23.451634782 +0000 UTC m=+80.421344295" lastFinishedPulling="2025-07-14 22:05:23.732210734 +0000 UTC m=+80.701920207" observedRunningTime="2025-07-14 22:05:24.715108267 +0000 UTC m=+81.684817780" watchObservedRunningTime="2025-07-14 22:05:24.715776605 +0000 UTC m=+81.685486118" Jul 14 22:05:24.992131 systemd-networkd[1049]: lxc027f2a901462: Gained IPv6LL Jul 14 22:05:25.404881 kubelet[1419]: E0714 22:05:25.404820 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:26.405780 kubelet[1419]: E0714 22:05:26.405740 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:27.406766 kubelet[1419]: E0714 22:05:27.406724 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:28.408029 kubelet[1419]: E0714 22:05:28.407842 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:29.408581 kubelet[1419]: E0714 22:05:29.408533 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:30.087299 systemd[1]: run-containerd-runc-k8s.io-931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495-runc.yxrrQJ.mount: Deactivated successfully. Jul 14 22:05:30.112641 env[1210]: time="2025-07-14T22:05:30.112575475Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:05:30.118023 env[1210]: time="2025-07-14T22:05:30.117980734Z" level=info msg="StopContainer for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" with timeout 2 (s)" Jul 14 22:05:30.118291 env[1210]: time="2025-07-14T22:05:30.118264021Z" level=info msg="Stop container \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" with signal terminated" Jul 14 22:05:30.123218 systemd-networkd[1049]: lxc_health: Link DOWN Jul 14 22:05:30.123224 systemd-networkd[1049]: lxc_health: Lost carrier Jul 14 22:05:30.151275 systemd[1]: cri-containerd-931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495.scope: Deactivated successfully. Jul 14 22:05:30.151576 systemd[1]: cri-containerd-931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495.scope: Consumed 6.638s CPU time. 
Jul 14 22:05:30.166487 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495-rootfs.mount: Deactivated successfully. Jul 14 22:05:30.296121 env[1210]: time="2025-07-14T22:05:30.296064326Z" level=info msg="shim disconnected" id=931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495 Jul 14 22:05:30.296352 env[1210]: time="2025-07-14T22:05:30.296333933Z" level=warning msg="cleaning up after shim disconnected" id=931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495 namespace=k8s.io Jul 14 22:05:30.296409 env[1210]: time="2025-07-14T22:05:30.296397374Z" level=info msg="cleaning up dead shim" Jul 14 22:05:30.302838 env[1210]: time="2025-07-14T22:05:30.302803659Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2939 runtime=io.containerd.runc.v2\n" Jul 14 22:05:30.305710 env[1210]: time="2025-07-14T22:05:30.305677093Z" level=info msg="StopContainer for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" returns successfully" Jul 14 22:05:30.306381 env[1210]: time="2025-07-14T22:05:30.306347471Z" level=info msg="StopPodSandbox for \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\"" Jul 14 22:05:30.306441 env[1210]: time="2025-07-14T22:05:30.306406032Z" level=info msg="Container to stop \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:05:30.306441 env[1210]: time="2025-07-14T22:05:30.306421153Z" level=info msg="Container to stop \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:05:30.306441 env[1210]: time="2025-07-14T22:05:30.306433673Z" level=info msg="Container to stop \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:05:30.306509 env[1210]: time="2025-07-14T22:05:30.306444713Z" level=info msg="Container to stop \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:05:30.306509 env[1210]: time="2025-07-14T22:05:30.306455273Z" level=info msg="Container to stop \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:05:30.308009 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d-shm.mount: Deactivated successfully. Jul 14 22:05:30.312836 systemd[1]: cri-containerd-739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d.scope: Deactivated successfully. Jul 14 22:05:30.329985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d-rootfs.mount: Deactivated successfully. 
Jul 14 22:05:30.332974 env[1210]: time="2025-07-14T22:05:30.332928156Z" level=info msg="shim disconnected" id=739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d Jul 14 22:05:30.332974 env[1210]: time="2025-07-14T22:05:30.332973317Z" level=warning msg="cleaning up after shim disconnected" id=739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d namespace=k8s.io Jul 14 22:05:30.333102 env[1210]: time="2025-07-14T22:05:30.332982637Z" level=info msg="cleaning up dead shim" Jul 14 22:05:30.338913 env[1210]: time="2025-07-14T22:05:30.338812988Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2970 runtime=io.containerd.runc.v2\n" Jul 14 22:05:30.339156 env[1210]: time="2025-07-14T22:05:30.339116316Z" level=info msg="TearDown network for sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" successfully" Jul 14 22:05:30.339156 env[1210]: time="2025-07-14T22:05:30.339143596Z" level=info msg="StopPodSandbox for \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" returns successfully" Jul 14 22:05:30.408947 kubelet[1419]: E0714 22:05:30.408889 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:30.459869 kubelet[1419]: I0714 22:05:30.459826 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-xtables-lock\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.459965 kubelet[1419]: I0714 22:05:30.459875 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-config-path\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.459965 kubelet[1419]: I0714 22:05:30.459902 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-run\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.459965 kubelet[1419]: I0714 22:05:30.459945 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-hostproc\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.459965 kubelet[1419]: I0714 22:05:30.459964 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-hubble-tls\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460077 kubelet[1419]: I0714 22:05:30.459981 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32ee73e2-935d-48cc-834d-1e4198754b9e-clustermesh-secrets\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460077 kubelet[1419]: I0714 22:05:30.459994 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-net\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460077 kubelet[1419]: I0714 22:05:30.459805 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.460077 kubelet[1419]: I0714 22:05:30.460031 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.460077 kubelet[1419]: I0714 22:05:30.460015 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-etc-cni-netd\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460089 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-kernel\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460117 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cni-path\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460142 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4z2f\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-kube-api-access-g4z2f\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460159 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-lib-modules\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460173 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-bpf-maps\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460199 kubelet[1419]: I0714 22:05:30.460188 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-cgroup\") pod \"32ee73e2-935d-48cc-834d-1e4198754b9e\" (UID: \"32ee73e2-935d-48cc-834d-1e4198754b9e\") " Jul 14 22:05:30.460329 
kubelet[1419]: I0714 22:05:30.460218 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-xtables-lock\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.460329 kubelet[1419]: I0714 22:05:30.460229 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-etc-cni-netd\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.460329 kubelet[1419]: I0714 22:05:30.460259 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.460329 kubelet[1419]: I0714 22:05:30.460275 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.460329 kubelet[1419]: I0714 22:05:30.460290 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cni-path" (OuterVolumeSpecName: "cni-path") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.460691 kubelet[1419]: I0714 22:05:30.460576 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.461070 kubelet[1419]: I0714 22:05:30.460981 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.461070 kubelet[1419]: I0714 22:05:30.461017 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.461070 kubelet[1419]: I0714 22:05:30.461033 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-hostproc" (OuterVolumeSpecName: "hostproc") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.461070 kubelet[1419]: I0714 22:05:30.461054 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:30.463039 kubelet[1419]: I0714 22:05:30.462080 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 22:05:30.468363 kubelet[1419]: I0714 22:05:30.468325 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:05:30.468436 kubelet[1419]: I0714 22:05:30.468392 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-kube-api-access-g4z2f" (OuterVolumeSpecName: "kube-api-access-g4z2f") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "kube-api-access-g4z2f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:05:30.468580 kubelet[1419]: I0714 22:05:30.468556 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ee73e2-935d-48cc-834d-1e4198754b9e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "32ee73e2-935d-48cc-834d-1e4198754b9e" (UID: "32ee73e2-935d-48cc-834d-1e4198754b9e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 22:05:30.549123 systemd[1]: Removed slice kubepods-burstable-pod32ee73e2_935d_48cc_834d_1e4198754b9e.slice. Jul 14 22:05:30.549210 systemd[1]: kubepods-burstable-pod32ee73e2_935d_48cc_834d_1e4198754b9e.slice: Consumed 6.844s CPU time. 
Jul 14 22:05:30.560931 kubelet[1419]: I0714 22:05:30.560887 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-kernel\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561066 kubelet[1419]: I0714 22:05:30.561052 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cni-path\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561148 kubelet[1419]: I0714 22:05:30.561136 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g4z2f\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-kube-api-access-g4z2f\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561211 kubelet[1419]: I0714 22:05:30.561200 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-lib-modules\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561281 kubelet[1419]: I0714 22:05:30.561271 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-bpf-maps\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561349 kubelet[1419]: I0714 22:05:30.561339 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-cgroup\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561413 kubelet[1419]: I0714 22:05:30.561403 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-config-path\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561479 kubelet[1419]: I0714 22:05:30.561469 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-cilium-run\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561595 kubelet[1419]: I0714 22:05:30.561583 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-hostproc\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561676 kubelet[1419]: I0714 22:05:30.561667 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/32ee73e2-935d-48cc-834d-1e4198754b9e-hubble-tls\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561741 kubelet[1419]: I0714 22:05:30.561732 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/32ee73e2-935d-48cc-834d-1e4198754b9e-clustermesh-secrets\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.561799 kubelet[1419]: I0714 22:05:30.561789 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/32ee73e2-935d-48cc-834d-1e4198754b9e-host-proc-sys-net\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:30.720876 kubelet[1419]: I0714 22:05:30.719221 1419 scope.go:117] "RemoveContainer" containerID="931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495" Jul 14 22:05:30.721010 env[1210]: time="2025-07-14T22:05:30.720350585Z" level=info msg="RemoveContainer for 
\"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\"" Jul 14 22:05:30.724038 env[1210]: time="2025-07-14T22:05:30.724006079Z" level=info msg="RemoveContainer for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" returns successfully" Jul 14 22:05:30.724646 kubelet[1419]: I0714 22:05:30.724622 1419 scope.go:117] "RemoveContainer" containerID="4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc" Jul 14 22:05:30.725642 env[1210]: time="2025-07-14T22:05:30.725618201Z" level=info msg="RemoveContainer for \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\"" Jul 14 22:05:30.727820 env[1210]: time="2025-07-14T22:05:30.727791057Z" level=info msg="RemoveContainer for \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\" returns successfully" Jul 14 22:05:30.728018 kubelet[1419]: I0714 22:05:30.727993 1419 scope.go:117] "RemoveContainer" containerID="cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974" Jul 14 22:05:30.729375 env[1210]: time="2025-07-14T22:05:30.729075170Z" level=info msg="RemoveContainer for \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\"" Jul 14 22:05:30.731310 env[1210]: time="2025-07-14T22:05:30.731197305Z" level=info msg="RemoveContainer for \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\" returns successfully" Jul 14 22:05:30.731382 kubelet[1419]: I0714 22:05:30.731354 1419 scope.go:117] "RemoveContainer" containerID="ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1" Jul 14 22:05:30.732448 env[1210]: time="2025-07-14T22:05:30.732419536Z" level=info msg="RemoveContainer for \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\"" Jul 14 22:05:30.735788 env[1210]: time="2025-07-14T22:05:30.735755062Z" level=info msg="RemoveContainer for \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\" returns successfully" Jul 14 22:05:30.735989 kubelet[1419]: I0714 22:05:30.735962 1419 scope.go:117] "RemoveContainer" containerID="e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5" Jul 14 22:05:30.737120 env[1210]: time="2025-07-14T22:05:30.737082697Z" level=info msg="RemoveContainer for \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\"" Jul 14 22:05:30.739232 env[1210]: time="2025-07-14T22:05:30.739191671Z" level=info msg="RemoveContainer for \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\" returns successfully" Jul 14 22:05:30.739461 kubelet[1419]: I0714 22:05:30.739439 1419 scope.go:117] "RemoveContainer" containerID="931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495" Jul 14 22:05:30.739905 env[1210]: time="2025-07-14T22:05:30.739827967Z" level=error msg="ContainerStatus for \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\": not found" Jul 14 22:05:30.740076 kubelet[1419]: E0714 22:05:30.740049 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\": not found" containerID="931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495" Jul 14 22:05:30.740134 kubelet[1419]: I0714 22:05:30.740084 1419 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495"} err="failed to get container status \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\": rpc error: code = NotFound desc = an error occurred when try to find container \"931225628fe81bad09acba978ea6fb4d19571b375764a4d444630da52fec3495\": not found" Jul 14 22:05:30.740167 kubelet[1419]: I0714 22:05:30.740137 1419 scope.go:117] "RemoveContainer" containerID="4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc" Jul 14 22:05:30.740425 env[1210]: time="2025-07-14T22:05:30.740335981Z" level=error msg="ContainerStatus for \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\": not found" Jul 14 22:05:30.740486 kubelet[1419]: E0714 22:05:30.740459 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\": not found" containerID="4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc" Jul 14 22:05:30.740517 kubelet[1419]: I0714 22:05:30.740490 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc"} err="failed to get container status \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c2fd9bb3affc4b190b7af15573fbbf4ceb9b779de892845de9267ee95cc2ddc\": not found" Jul 14 22:05:30.740517 kubelet[1419]: I0714 22:05:30.740508 1419 scope.go:117] "RemoveContainer" containerID="cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974" Jul 14 22:05:30.740733 env[1210]: time="2025-07-14T22:05:30.740651149Z" level=error msg="ContainerStatus for \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\": not found" Jul 14 22:05:30.740793 kubelet[1419]: E0714 22:05:30.740753 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\": not found" containerID="cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974" Jul 14 22:05:30.740793 kubelet[1419]: I0714 22:05:30.740767 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974"} err="failed to get container status \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\": rpc error: code = NotFound desc = an error occurred when try to find container \"cd62967aea796ee6d009276253be640e6b1f8f3814f83741a9fe4d8dd8698974\": not found" Jul 14 22:05:30.740793 kubelet[1419]: I0714 22:05:30.740777 1419 scope.go:117] "RemoveContainer" containerID="ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1" Jul 14 22:05:30.741074 env[1210]: time="2025-07-14T22:05:30.740980837Z" level=error msg="ContainerStatus for \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\" failed" error="rpc error: code = 
NotFound desc = an error occurred when try to find container \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\": not found" Jul 14 22:05:30.741150 kubelet[1419]: E0714 22:05:30.741081 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\": not found" containerID="ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1" Jul 14 22:05:30.741150 kubelet[1419]: I0714 22:05:30.741095 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1"} err="failed to get container status \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba4c3d35262dc0e82b941f997254b3872789fbc1b0cf55fac1d750c1f8fbf2c1\": not found" Jul 14 22:05:30.741150 kubelet[1419]: I0714 22:05:30.741114 1419 scope.go:117] "RemoveContainer" containerID="e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5" Jul 14 22:05:30.741442 env[1210]: time="2025-07-14T22:05:30.741339566Z" level=error msg="ContainerStatus for \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\": not found" Jul 14 22:05:30.741506 kubelet[1419]: E0714 22:05:30.741444 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\": not found" containerID="e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5" Jul 14 22:05:30.741506 kubelet[1419]: I0714 22:05:30.741464 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5"} err="failed to get container status \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e88675aab7b130ea5c10ccffbc5bf344dc0dd14da5e9dc6a46e5835f476ee3c5\": not found" Jul 14 22:05:31.085098 systemd[1]: var-lib-kubelet-pods-32ee73e2\x2d935d\x2d48cc\x2d834d\x2d1e4198754b9e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg4z2f.mount: Deactivated successfully. Jul 14 22:05:31.085197 systemd[1]: var-lib-kubelet-pods-32ee73e2\x2d935d\x2d48cc\x2d834d\x2d1e4198754b9e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:05:31.085247 systemd[1]: var-lib-kubelet-pods-32ee73e2\x2d935d\x2d48cc\x2d834d\x2d1e4198754b9e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 14 22:05:31.409372 kubelet[1419]: E0714 22:05:31.409261 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:32.409522 kubelet[1419]: E0714 22:05:32.409485 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:32.546359 kubelet[1419]: I0714 22:05:32.546297 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ee73e2-935d-48cc-834d-1e4198754b9e" path="/var/lib/kubelet/pods/32ee73e2-935d-48cc-834d-1e4198754b9e/volumes" Jul 14 22:05:33.410265 kubelet[1419]: E0714 22:05:33.410184 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:33.436694 systemd[1]: Created slice kubepods-besteffort-pod35ff0786_d068_42ad_bc51_ff6361996520.slice. Jul 14 22:05:33.453363 systemd[1]: Created slice kubepods-burstable-podc2437593_be54_4314_adf6_298158546e1a.slice. Jul 14 22:05:33.578690 kubelet[1419]: I0714 22:05:33.578643 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-cgroup\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578690 kubelet[1419]: I0714 22:05:33.578685 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-etc-cni-netd\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578690 kubelet[1419]: I0714 22:05:33.578705 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-lib-modules\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578905 kubelet[1419]: I0714 22:05:33.578723 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlj5j\" (UniqueName: \"kubernetes.io/projected/35ff0786-d068-42ad-bc51-ff6361996520-kube-api-access-xlj5j\") pod \"cilium-operator-6c4d7847fc-xlrjl\" (UID: \"35ff0786-d068-42ad-bc51-ff6361996520\") " pod="kube-system/cilium-operator-6c4d7847fc-xlrjl" Jul 14 22:05:33.578905 kubelet[1419]: I0714 22:05:33.578741 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-xtables-lock\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578905 kubelet[1419]: I0714 22:05:33.578755 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-net\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578905 kubelet[1419]: I0714 22:05:33.578769 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-kernel\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.578905 kubelet[1419]: I0714 22:05:33.578787 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-hubble-tls\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579061 kubelet[1419]: I0714 22:05:33.578805 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-clustermesh-secrets\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579061 kubelet[1419]: I0714 22:05:33.578820 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z67qq\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-kube-api-access-z67qq\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579061 kubelet[1419]: I0714 22:05:33.578835 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/35ff0786-d068-42ad-bc51-ff6361996520-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-xlrjl\" (UID: \"35ff0786-d068-42ad-bc51-ff6361996520\") " pod="kube-system/cilium-operator-6c4d7847fc-xlrjl" Jul 14 22:05:33.579061 kubelet[1419]: I0714 22:05:33.578852 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-run\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579061 kubelet[1419]: I0714 22:05:33.578866 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cni-path\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579182 kubelet[1419]: I0714 22:05:33.578880 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2437593-be54-4314-adf6-298158546e1a-cilium-config-path\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579182 kubelet[1419]: I0714 22:05:33.578894 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-cilium-ipsec-secrets\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579182 kubelet[1419]: I0714 22:05:33.578910 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-hostproc\") pod \"cilium-67jfv\" (UID: 
\"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.579182 kubelet[1419]: I0714 22:05:33.578945 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-bpf-maps\") pod \"cilium-67jfv\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " pod="kube-system/cilium-67jfv" Jul 14 22:05:33.606610 kubelet[1419]: E0714 22:05:33.606551 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-z67qq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-67jfv" podUID="c2437593-be54-4314-adf6-298158546e1a" Jul 14 22:05:33.740371 kubelet[1419]: E0714 22:05:33.740255 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:33.741943 env[1210]: time="2025-07-14T22:05:33.741886397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xlrjl,Uid:35ff0786-d068-42ad-bc51-ff6361996520,Namespace:kube-system,Attempt:0,}" Jul 14 22:05:33.759084 env[1210]: time="2025-07-14T22:05:33.759007431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:05:33.759233 env[1210]: time="2025-07-14T22:05:33.759093471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:05:33.759233 env[1210]: time="2025-07-14T22:05:33.759120231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:05:33.759339 env[1210]: time="2025-07-14T22:05:33.759306111Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a2d53a8801d422296c3ed956162f258e7a37db39eb752677388923a9c80463c pid=3001 runtime=io.containerd.runc.v2 Jul 14 22:05:33.770606 systemd[1]: Started cri-containerd-0a2d53a8801d422296c3ed956162f258e7a37db39eb752677388923a9c80463c.scope. 
Jul 14 22:05:33.821017 env[1210]: time="2025-07-14T22:05:33.820962007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-xlrjl,Uid:35ff0786-d068-42ad-bc51-ff6361996520,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a2d53a8801d422296c3ed956162f258e7a37db39eb752677388923a9c80463c\"" Jul 14 22:05:33.822304 kubelet[1419]: E0714 22:05:33.821713 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:33.822969 env[1210]: time="2025-07-14T22:05:33.822935846Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 14 22:05:33.880208 kubelet[1419]: I0714 22:05:33.880154 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-cgroup\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880208 kubelet[1419]: I0714 22:05:33.880212 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-net\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880237 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z67qq\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-kube-api-access-z67qq\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880255 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-bpf-maps\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880270 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-lib-modules\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880283 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-xtables-lock\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880300 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-clustermesh-secrets\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880377 kubelet[1419]: I0714 22:05:33.880317 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-run\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: 
\"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880343 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2437593-be54-4314-adf6-298158546e1a-cilium-config-path\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880359 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-hubble-tls\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880372 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cni-path\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880391 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-cilium-ipsec-secrets\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880406 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-hostproc\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880522 kubelet[1419]: I0714 22:05:33.880419 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-etc-cni-netd\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880656 kubelet[1419]: I0714 22:05:33.880433 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-kernel\") pod \"c2437593-be54-4314-adf6-298158546e1a\" (UID: \"c2437593-be54-4314-adf6-298158546e1a\") " Jul 14 22:05:33.880656 kubelet[1419]: I0714 22:05:33.880173 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.880656 kubelet[1419]: I0714 22:05:33.880493 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.880656 kubelet[1419]: I0714 22:05:33.880524 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881534 kubelet[1419]: I0714 22:05:33.880845 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881534 kubelet[1419]: I0714 22:05:33.881245 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881534 kubelet[1419]: I0714 22:05:33.881277 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881534 kubelet[1419]: I0714 22:05:33.881291 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881534 kubelet[1419]: I0714 22:05:33.881312 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881698 kubelet[1419]: I0714 22:05:33.881322 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.881698 kubelet[1419]: I0714 22:05:33.881355 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 14 22:05:33.883128 kubelet[1419]: I0714 22:05:33.883099 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2437593-be54-4314-adf6-298158546e1a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 22:05:33.883336 kubelet[1419]: I0714 22:05:33.883312 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 22:05:33.883659 kubelet[1419]: I0714 22:05:33.883635 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-kube-api-access-z67qq" (OuterVolumeSpecName: "kube-api-access-z67qq") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "kube-api-access-z67qq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:05:33.883776 kubelet[1419]: I0714 22:05:33.883749 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 22:05:33.885459 kubelet[1419]: I0714 22:05:33.885438 1419 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2437593-be54-4314-adf6-298158546e1a" (UID: "c2437593-be54-4314-adf6-298158546e1a"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 22:05:33.980744 kubelet[1419]: I0714 22:05:33.980703 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-run\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.980887 kubelet[1419]: I0714 22:05:33.980874 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2437593-be54-4314-adf6-298158546e1a-cilium-config-path\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.980986 kubelet[1419]: I0714 22:05:33.980975 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-hubble-tls\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981048 kubelet[1419]: I0714 22:05:33.981031 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cni-path\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981133 kubelet[1419]: I0714 22:05:33.981123 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-cilium-ipsec-secrets\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981237 kubelet[1419]: I0714 22:05:33.981225 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-hostproc\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981300 kubelet[1419]: I0714 22:05:33.981281 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-etc-cni-netd\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981356 kubelet[1419]: I0714 22:05:33.981346 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-kernel\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981413 kubelet[1419]: I0714 22:05:33.981397 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-cilium-cgroup\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981475 kubelet[1419]: I0714 22:05:33.981465 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-host-proc-sys-net\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981533 kubelet[1419]: I0714 22:05:33.981523 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z67qq\" (UniqueName: \"kubernetes.io/projected/c2437593-be54-4314-adf6-298158546e1a-kube-api-access-z67qq\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981596 kubelet[1419]: I0714 22:05:33.981587 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-bpf-maps\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981646 kubelet[1419]: I0714 22:05:33.981636 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-lib-modules\") on node \"10.0.0.99\" DevicePath 
\"\"" Jul 14 22:05:33.981697 kubelet[1419]: I0714 22:05:33.981688 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2437593-be54-4314-adf6-298158546e1a-xtables-lock\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:33.981759 kubelet[1419]: I0714 22:05:33.981750 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2437593-be54-4314-adf6-298158546e1a-clustermesh-secrets\") on node \"10.0.0.99\" DevicePath \"\"" Jul 14 22:05:34.410897 kubelet[1419]: E0714 22:05:34.410857 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:34.496756 kubelet[1419]: E0714 22:05:34.496723 1419 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 14 22:05:34.549511 systemd[1]: Removed slice kubepods-burstable-podc2437593_be54_4314_adf6_298158546e1a.slice. Jul 14 22:05:34.684726 systemd[1]: var-lib-kubelet-pods-c2437593\x2dbe54\x2d4314\x2dadf6\x2d298158546e1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz67qq.mount: Deactivated successfully. Jul 14 22:05:34.684815 systemd[1]: var-lib-kubelet-pods-c2437593\x2dbe54\x2d4314\x2dadf6\x2d298158546e1a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 14 22:05:34.684865 systemd[1]: var-lib-kubelet-pods-c2437593\x2dbe54\x2d4314\x2dadf6\x2d298158546e1a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 14 22:05:34.684933 systemd[1]: var-lib-kubelet-pods-c2437593\x2dbe54\x2d4314\x2dadf6\x2d298158546e1a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 14 22:05:34.768653 systemd[1]: Created slice kubepods-burstable-podd7a36163_830b_4d1b_8cc5_4e778f65ecd2.slice. 
Jul 14 22:05:34.891579 kubelet[1419]: I0714 22:05:34.889065 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-host-proc-sys-kernel\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891579 kubelet[1419]: I0714 22:05:34.889112 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-bpf-maps\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891579 kubelet[1419]: I0714 22:05:34.889137 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-cilium-config-path\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891579 kubelet[1419]: I0714 22:05:34.889169 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-cilium-run\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891579 kubelet[1419]: I0714 22:05:34.889190 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-clustermesh-secrets\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889214 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-cilium-ipsec-secrets\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889235 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-cni-path\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889254 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-xtables-lock\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889278 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-lib-modules\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889299 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" 
(UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-cilium-cgroup\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.891818 kubelet[1419]: I0714 22:05:34.889318 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-etc-cni-netd\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.892028 kubelet[1419]: I0714 22:05:34.889338 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-hubble-tls\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.892028 kubelet[1419]: I0714 22:05:34.889358 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg6xv\" (UniqueName: \"kubernetes.io/projected/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-kube-api-access-cg6xv\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.892028 kubelet[1419]: I0714 22:05:34.889383 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-hostproc\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.892028 kubelet[1419]: I0714 22:05:34.889407 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d7a36163-830b-4d1b-8cc5-4e778f65ecd2-host-proc-sys-net\") pod \"cilium-22q2d\" (UID: \"d7a36163-830b-4d1b-8cc5-4e778f65ecd2\") " pod="kube-system/cilium-22q2d" Jul 14 22:05:34.928312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980581638.mount: Deactivated successfully. Jul 14 22:05:35.081873 kubelet[1419]: E0714 22:05:35.081840 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:35.082352 env[1210]: time="2025-07-14T22:05:35.082316571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22q2d,Uid:d7a36163-830b-4d1b-8cc5-4e778f65ecd2,Namespace:kube-system,Attempt:0,}" Jul 14 22:05:35.095262 env[1210]: time="2025-07-14T22:05:35.095074901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:05:35.095262 env[1210]: time="2025-07-14T22:05:35.095115301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:05:35.095262 env[1210]: time="2025-07-14T22:05:35.095126021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:05:35.095449 env[1210]: time="2025-07-14T22:05:35.095294942Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63 pid=3051 runtime=io.containerd.runc.v2 Jul 14 22:05:35.105681 systemd[1]: Started cri-containerd-292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63.scope. Jul 14 22:05:35.138213 env[1210]: time="2025-07-14T22:05:35.138152698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-22q2d,Uid:d7a36163-830b-4d1b-8cc5-4e778f65ecd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\"" Jul 14 22:05:35.139067 kubelet[1419]: E0714 22:05:35.139043 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:35.143754 env[1210]: time="2025-07-14T22:05:35.143714503Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 14 22:05:35.152107 env[1210]: time="2025-07-14T22:05:35.152050630Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91\"" Jul 14 22:05:35.152541 env[1210]: time="2025-07-14T22:05:35.152515070Z" level=info msg="StartContainer for \"853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91\"" Jul 14 22:05:35.166229 systemd[1]: Started cri-containerd-853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91.scope. Jul 14 22:05:35.196951 env[1210]: time="2025-07-14T22:05:35.196327388Z" level=info msg="StartContainer for \"853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91\" returns successfully" Jul 14 22:05:35.206034 systemd[1]: cri-containerd-853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91.scope: Deactivated successfully. 
Jul 14 22:05:35.262306 env[1210]: time="2025-07-14T22:05:35.262257204Z" level=info msg="shim disconnected" id=853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91 Jul 14 22:05:35.262306 env[1210]: time="2025-07-14T22:05:35.262302284Z" level=warning msg="cleaning up after shim disconnected" id=853a6c13eae838ecc06d3fc5cfe40cc2bb768fccfccdd264bb788949fe65fb91 namespace=k8s.io Jul 14 22:05:35.262306 env[1210]: time="2025-07-14T22:05:35.262311564Z" level=info msg="cleaning up dead shim" Jul 14 22:05:35.268970 env[1210]: time="2025-07-14T22:05:35.268924690Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3133 runtime=io.containerd.runc.v2\n" Jul 14 22:05:35.411071 kubelet[1419]: E0714 22:05:35.410968 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:35.449348 env[1210]: time="2025-07-14T22:05:35.449300084Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:35.450823 env[1210]: time="2025-07-14T22:05:35.450789725Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:35.453303 env[1210]: time="2025-07-14T22:05:35.453264087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 14 22:05:35.453983 env[1210]: time="2025-07-14T22:05:35.453940048Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 14 22:05:35.457807 env[1210]: time="2025-07-14T22:05:35.457748451Z" level=info msg="CreateContainer within sandbox \"0a2d53a8801d422296c3ed956162f258e7a37db39eb752677388923a9c80463c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 14 22:05:35.471122 env[1210]: time="2025-07-14T22:05:35.471059983Z" level=info msg="CreateContainer within sandbox \"0a2d53a8801d422296c3ed956162f258e7a37db39eb752677388923a9c80463c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6fad4df0e6f2ac851d4c8a6f5476982b48a2233689f9aa81c47c5271eb4d9703\"" Jul 14 22:05:35.471965 env[1210]: time="2025-07-14T22:05:35.471939543Z" level=info msg="StartContainer for \"6fad4df0e6f2ac851d4c8a6f5476982b48a2233689f9aa81c47c5271eb4d9703\"" Jul 14 22:05:35.487081 systemd[1]: Started cri-containerd-6fad4df0e6f2ac851d4c8a6f5476982b48a2233689f9aa81c47c5271eb4d9703.scope. 
Jul 14 22:05:35.540760 env[1210]: time="2025-07-14T22:05:35.540715962Z" level=info msg="StartContainer for \"6fad4df0e6f2ac851d4c8a6f5476982b48a2233689f9aa81c47c5271eb4d9703\" returns successfully" Jul 14 22:05:35.672580 kubelet[1419]: I0714 22:05:35.672459 1419 setters.go:618] "Node became not ready" node="10.0.0.99" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-14T22:05:35Z","lastTransitionTime":"2025-07-14T22:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 14 22:05:35.733289 kubelet[1419]: E0714 22:05:35.733260 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:35.733934 kubelet[1419]: E0714 22:05:35.733890 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:35.737623 env[1210]: time="2025-07-14T22:05:35.737570370Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 14 22:05:35.750090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349072132.mount: Deactivated successfully. Jul 14 22:05:35.751304 env[1210]: time="2025-07-14T22:05:35.750748262Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c\"" Jul 14 22:05:35.754040 env[1210]: time="2025-07-14T22:05:35.754001384Z" level=info msg="StartContainer for \"67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c\"" Jul 14 22:05:35.758841 kubelet[1419]: I0714 22:05:35.758646 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-xlrjl" podStartSLOduration=1.126265265 podStartE2EDuration="2.758628908s" podCreationTimestamp="2025-07-14 22:05:33 +0000 UTC" firstStartedPulling="2025-07-14 22:05:33.822685846 +0000 UTC m=+90.792395359" lastFinishedPulling="2025-07-14 22:05:35.455049489 +0000 UTC m=+92.424759002" observedRunningTime="2025-07-14 22:05:35.758492548 +0000 UTC m=+92.728202061" watchObservedRunningTime="2025-07-14 22:05:35.758628908 +0000 UTC m=+92.728338421" Jul 14 22:05:35.775335 systemd[1]: Started cri-containerd-67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c.scope. Jul 14 22:05:35.808961 env[1210]: time="2025-07-14T22:05:35.807538270Z" level=info msg="StartContainer for \"67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c\" returns successfully" Jul 14 22:05:35.821799 systemd[1]: cri-containerd-67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c.scope: Deactivated successfully. 
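The pod_startup_latency_tracker entry just above can be reconstructed from its own fields: podStartSLOduration is the end-to-end start duration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling), which is why it is smaller than podStartE2EDuration here, and why the two come out equal later for cilium-22q2d, whose images needed no pull. A small check of the arithmetic using the monotonic m=+ offsets copied from the log:

    package main

    import "fmt"

    func main() {
        // Values copied from the cilium-operator-6c4d7847fc-xlrjl entry above.
        e2e := 2.758628908                  // podStartE2EDuration, seconds
        firstPull := 90.792395359           // firstStartedPulling, m=+ offset
        lastPull := 92.424759002            // lastFinishedPulling, m=+ offset
        slo := e2e - (lastPull - firstPull) // exclude image pull time
        fmt.Printf("podStartSLOduration = %.9f s\n", slo) // ≈ 1.126265265, matching the log
    }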
Jul 14 22:05:35.871323 env[1210]: time="2025-07-14T22:05:35.871276805Z" level=info msg="shim disconnected" id=67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c Jul 14 22:05:35.871669 env[1210]: time="2025-07-14T22:05:35.871646965Z" level=warning msg="cleaning up after shim disconnected" id=67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c namespace=k8s.io Jul 14 22:05:35.871764 env[1210]: time="2025-07-14T22:05:35.871749325Z" level=info msg="cleaning up dead shim" Jul 14 22:05:35.878438 env[1210]: time="2025-07-14T22:05:35.878401091Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3233 runtime=io.containerd.runc.v2\n" Jul 14 22:05:36.411939 kubelet[1419]: E0714 22:05:36.411879 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:36.549930 kubelet[1419]: I0714 22:05:36.549885 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2437593-be54-4314-adf6-298158546e1a" path="/var/lib/kubelet/pods/c2437593-be54-4314-adf6-298158546e1a/volumes" Jul 14 22:05:36.683657 systemd[1]: run-containerd-runc-k8s.io-67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c-runc.oozBjP.mount: Deactivated successfully. Jul 14 22:05:36.683764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67a8d617eebe48887832e87e72f20359c0949958ad608aed4c860e070427ae5c-rootfs.mount: Deactivated successfully. Jul 14 22:05:36.736951 kubelet[1419]: E0714 22:05:36.736631 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:36.736951 kubelet[1419]: E0714 22:05:36.736766 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:36.740499 env[1210]: time="2025-07-14T22:05:36.740457904Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 14 22:05:36.753278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774973500.mount: Deactivated successfully. Jul 14 22:05:36.756516 env[1210]: time="2025-07-14T22:05:36.756476167Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664\"" Jul 14 22:05:36.757289 env[1210]: time="2025-07-14T22:05:36.757262968Z" level=info msg="StartContainer for \"60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664\"" Jul 14 22:05:36.774320 systemd[1]: Started cri-containerd-60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664.scope. Jul 14 22:05:36.807476 env[1210]: time="2025-07-14T22:05:36.807420681Z" level=info msg="StartContainer for \"60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664\" returns successfully" Jul 14 22:05:36.810426 systemd[1]: cri-containerd-60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664.scope: Deactivated successfully. 
Jul 14 22:05:36.830671 env[1210]: time="2025-07-14T22:05:36.830623194Z" level=info msg="shim disconnected" id=60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664 Jul 14 22:05:36.830671 env[1210]: time="2025-07-14T22:05:36.830669234Z" level=warning msg="cleaning up after shim disconnected" id=60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664 namespace=k8s.io Jul 14 22:05:36.830872 env[1210]: time="2025-07-14T22:05:36.830678634Z" level=info msg="cleaning up dead shim" Jul 14 22:05:36.836823 env[1210]: time="2025-07-14T22:05:36.836777843Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3289 runtime=io.containerd.runc.v2\n" Jul 14 22:05:37.412454 kubelet[1419]: E0714 22:05:37.412400 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:37.683807 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60bc1ac3612dbe87017b352b3a3bbdb1817059c11303e42828a6928b3aef1664-rootfs.mount: Deactivated successfully. Jul 14 22:05:37.739874 kubelet[1419]: E0714 22:05:37.739825 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:37.747583 env[1210]: time="2025-07-14T22:05:37.745114098Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 14 22:05:37.763516 env[1210]: time="2025-07-14T22:05:37.763465895Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2\"" Jul 14 22:05:37.764249 env[1210]: time="2025-07-14T22:05:37.764144577Z" level=info msg="StartContainer for \"c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2\"" Jul 14 22:05:37.786651 systemd[1]: Started cri-containerd-c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2.scope. Jul 14 22:05:37.811481 systemd[1]: cri-containerd-c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2.scope: Deactivated successfully. 
Jul 14 22:05:37.814276 env[1210]: time="2025-07-14T22:05:37.814220998Z" level=info msg="StartContainer for \"c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2\" returns successfully" Jul 14 22:05:37.833716 env[1210]: time="2025-07-14T22:05:37.833672637Z" level=info msg="shim disconnected" id=c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2 Jul 14 22:05:37.833958 env[1210]: time="2025-07-14T22:05:37.833939278Z" level=warning msg="cleaning up after shim disconnected" id=c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2 namespace=k8s.io Jul 14 22:05:37.834047 env[1210]: time="2025-07-14T22:05:37.834032278Z" level=info msg="cleaning up dead shim" Jul 14 22:05:37.840716 env[1210]: time="2025-07-14T22:05:37.840678371Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:05:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3345 runtime=io.containerd.runc.v2\n" Jul 14 22:05:38.412856 kubelet[1419]: E0714 22:05:38.412812 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:38.683957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11e15468678439c13ae2f39494552c57a75adedee5339015d45c533addc1aa2-rootfs.mount: Deactivated successfully. Jul 14 22:05:38.744433 kubelet[1419]: E0714 22:05:38.744399 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:38.748120 env[1210]: time="2025-07-14T22:05:38.748066258Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 14 22:05:38.761234 env[1210]: time="2025-07-14T22:05:38.761178412Z" level=info msg="CreateContainer within sandbox \"292798473885edeffdae983a9b762209469ef1ee34c9c13afe97cd8234e0bc63\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249\"" Jul 14 22:05:38.762340 env[1210]: time="2025-07-14T22:05:38.762295655Z" level=info msg="StartContainer for \"5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249\"" Jul 14 22:05:38.779224 systemd[1]: Started cri-containerd-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249.scope. 
Jul 14 22:05:38.820869 env[1210]: time="2025-07-14T22:05:38.820807165Z" level=info msg="StartContainer for \"5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249\" returns successfully" Jul 14 22:05:39.070961 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 14 22:05:39.413300 kubelet[1419]: E0714 22:05:39.413182 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:39.749552 kubelet[1419]: E0714 22:05:39.749456 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:39.765360 kubelet[1419]: I0714 22:05:39.765301 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-22q2d" podStartSLOduration=5.765285047 podStartE2EDuration="5.765285047s" podCreationTimestamp="2025-07-14 22:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:05:39.764870166 +0000 UTC m=+96.734579679" watchObservedRunningTime="2025-07-14 22:05:39.765285047 +0000 UTC m=+96.734994520" Jul 14 22:05:40.413402 kubelet[1419]: E0714 22:05:40.413337 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:41.082614 kubelet[1419]: E0714 22:05:41.082558 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:41.414562 kubelet[1419]: E0714 22:05:41.414459 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:42.414835 kubelet[1419]: E0714 22:05:42.414789 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:43.415960 kubelet[1419]: E0714 22:05:43.415928 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:44.240841 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.7tYPE7.mount: Deactivated successfully. Jul 14 22:05:44.347415 kubelet[1419]: E0714 22:05:44.347372 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:44.416734 kubelet[1419]: E0714 22:05:44.416677 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:45.417221 kubelet[1419]: E0714 22:05:45.417168 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:46.355272 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.3SHS4m.mount: Deactivated successfully. 
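Between 22:05:35 and 22:05:38 the log shows the cilium-22q2d containers being created and exiting one at a time inside the same sandbox: each short-lived init step is followed by the "shim disconnected / cleaning up dead shim" lines as its runc shim exits, and only the final cilium-agent container keeps running. The kernel "alg: No test for seqiv(rfc4106(gcm(aes)))" notice right after the agent starts is consistent with it loading the AES-GCM ESP transform for the mounted cilium-ipsec-secrets, though the log itself does not spell that out. The observed order, collected here for reference from the CreateContainer messages above:

    package main

    import "fmt"

    func main() {
        // Container start order for pod cilium-22q2d as read from the
        // CreateContainer messages above (times from the log timestamps).
        order := []string{
            "mount-cgroup",            // 22:05:35.14
            "apply-sysctl-overwrites", // 22:05:35.73
            "mount-bpf-fs",            // 22:05:36.74
            "clean-cilium-state",      // 22:05:37.74
            "cilium-agent",            // 22:05:38.74, stays running
        }
        for i, name := range order {
            fmt.Printf("%d. %s\n", i+1, name)
        }
    }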
Jul 14 22:05:46.417678 kubelet[1419]: E0714 22:05:46.417635 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:46.544725 kubelet[1419]: E0714 22:05:46.544689 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:05:47.418266 kubelet[1419]: E0714 22:05:47.418217 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:48.419258 kubelet[1419]: E0714 22:05:48.419218 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:49.419745 kubelet[1419]: E0714 22:05:49.419689 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:50.420480 kubelet[1419]: E0714 22:05:50.420437 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:51.421210 kubelet[1419]: E0714 22:05:51.421154 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:52.421446 kubelet[1419]: E0714 22:05:52.421384 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:53.422020 kubelet[1419]: E0714 22:05:53.421971 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:54.422753 kubelet[1419]: E0714 22:05:54.422702 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:55.423018 kubelet[1419]: E0714 22:05:55.422971 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:56.423620 kubelet[1419]: E0714 22:05:56.423569 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:57.423843 kubelet[1419]: E0714 22:05:57.423788 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:58.424777 kubelet[1419]: E0714 22:05:58.424697 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:05:59.425525 kubelet[1419]: E0714 22:05:59.425460 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:00.426533 kubelet[1419]: E0714 22:06:00.426475 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:01.132169 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.vVeVhv.mount: Deactivated successfully. 
Jul 14 22:06:01.426848 kubelet[1419]: E0714 22:06:01.426708 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:02.427427 kubelet[1419]: E0714 22:06:02.427380 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:03.366864 kubelet[1419]: E0714 22:06:03.366824 1419 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 127.0.0.1:57020->127.0.0.1:43159: read tcp 127.0.0.1:57020->127.0.0.1:43159: read: connection reset by peer Jul 14 22:06:03.428056 kubelet[1419]: E0714 22:06:03.428006 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:04.347402 kubelet[1419]: E0714 22:06:04.347367 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:04.361354 env[1210]: time="2025-07-14T22:06:04.361318008Z" level=info msg="StopPodSandbox for \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\"" Jul 14 22:06:04.361632 env[1210]: time="2025-07-14T22:06:04.361402929Z" level=info msg="TearDown network for sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" successfully" Jul 14 22:06:04.361632 env[1210]: time="2025-07-14T22:06:04.361440810Z" level=info msg="StopPodSandbox for \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" returns successfully" Jul 14 22:06:04.362652 env[1210]: time="2025-07-14T22:06:04.361767654Z" level=info msg="RemovePodSandbox for \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\"" Jul 14 22:06:04.362741 env[1210]: time="2025-07-14T22:06:04.362662825Z" level=info msg="Forcibly stopping sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\"" Jul 14 22:06:04.362803 env[1210]: time="2025-07-14T22:06:04.362782466Z" level=info msg="TearDown network for sandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" successfully" Jul 14 22:06:04.366707 env[1210]: time="2025-07-14T22:06:04.366654593Z" level=info msg="RemovePodSandbox \"739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d\" returns successfully" Jul 14 22:06:04.429097 kubelet[1419]: E0714 22:06:04.429065 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:05.082736 kubelet[1419]: E0714 22:06:05.082699 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:06:05.422636 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.aGbTz4.mount: Deactivated successfully. 
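The StopPodSandbox / "Forcibly stopping sandbox" / RemovePodSandbox sequence at 22:06:04 above is the kubelet's periodic garbage collection of an old, already-torn-down sandbox, issued over the CRI. A bare-bones sketch of the same two calls against containerd's CRI socket; the socket path and sandbox ID are taken from the log, while the connection setup is simplified (a real client would add proper dial options, error handling, and timeouts):

    package main

    import (
        "context"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Sandbox ID of the pod being garbage-collected in the log above.
        id := "739d93877925aab68d217fef02c8f7b0c4c59ef23accdf9ea6431abb0d70f67d"

        // Stop tears down the sandbox (including its network); Remove deletes its state.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            panic(err)
        }
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            panic(err)
        }
    }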
Jul 14 22:06:05.432516 kubelet[1419]: E0714 22:06:05.429949 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:06.430681 kubelet[1419]: E0714 22:06:06.430632 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:07.431306 kubelet[1419]: E0714 22:06:07.431256 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:07.534078 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.V7oKRS.mount: Deactivated successfully. Jul 14 22:06:08.432019 kubelet[1419]: E0714 22:06:08.431984 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:09.433219 kubelet[1419]: E0714 22:06:09.433178 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:10.433355 kubelet[1419]: E0714 22:06:10.433305 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:11.434201 kubelet[1419]: E0714 22:06:11.434167 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:11.777014 systemd[1]: run-containerd-runc-k8s.io-5f2f0d51bc6b179656fc0c06c45a12580ad82ba77a4927c49a42f2916690c249-runc.Y6a4SA.mount: Deactivated successfully. Jul 14 22:06:12.435218 kubelet[1419]: E0714 22:06:12.435163 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:13.435805 kubelet[1419]: E0714 22:06:13.435769 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:13.909669 systemd-networkd[1049]: lxc_health: Link UP Jul 14 22:06:13.917060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 14 22:06:13.916793 systemd-networkd[1049]: lxc_health: Gained carrier Jul 14 22:06:14.436551 kubelet[1419]: E0714 22:06:14.436499 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:15.083590 kubelet[1419]: E0714 22:06:15.083561 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:06:15.436949 kubelet[1419]: E0714 22:06:15.436798 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:15.681089 systemd-networkd[1049]: lxc_health: Gained IPv6LL Jul 14 22:06:15.812282 kubelet[1419]: E0714 22:06:15.812181 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:06:16.437339 kubelet[1419]: E0714 22:06:16.437291 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:17.438081 kubelet[1419]: E0714 22:06:17.438032 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:18.438230 kubelet[1419]: E0714 22:06:18.438147 1419 
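The lxc_health interface appearing at 22:06:13 above is the veth endpoint Cilium creates for its own health checks once the agent's datapath is up, a sign that the CNI is functional after the earlier NetworkPluginNotReady messages. A small inspection sketch for that link from Go, assuming the vishvananda/netlink package; this is a generic snippet, not part of Cilium itself:

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        // Health-endpoint veth created by the Cilium agent, as seen in the log.
        link, err := netlink.LinkByName("lxc_health")
        if err != nil {
            panic(err)
        }
        attrs := link.Attrs()
        fmt.Printf("%s: type=%s state=%s mtu=%d\n",
            attrs.Name, link.Type(), attrs.OperState, attrs.MTU)
    }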
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:19.438804 kubelet[1419]: E0714 22:06:19.438758 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:20.439500 kubelet[1419]: E0714 22:06:20.439449 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 14 22:06:21.439610 kubelet[1419]: E0714 22:06:21.439562 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"