Jul 10 00:36:57.723618 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 00:36:57.723637 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed Jul 9 23:19:15 -00 2025 Jul 10 00:36:57.723645 kernel: efi: EFI v2.70 by EDK II Jul 10 00:36:57.723650 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 10 00:36:57.723655 kernel: random: crng init done Jul 10 00:36:57.723661 kernel: ACPI: Early table checksum verification disabled Jul 10 00:36:57.723667 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 10 00:36:57.723674 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:36:57.723679 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723685 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723690 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723695 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723701 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723706 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723713 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723719 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723725 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:36:57.723730 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 00:36:57.723736 kernel: NUMA: Failed to initialise from firmware Jul 10 00:36:57.723742 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:36:57.723748 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Jul 10 00:36:57.723753 kernel: Zone ranges: Jul 10 00:36:57.723759 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:36:57.723766 kernel: DMA32 empty Jul 10 00:36:57.723771 kernel: Normal empty Jul 10 00:36:57.723777 kernel: Movable zone start for each node Jul 10 00:36:57.723782 kernel: Early memory node ranges Jul 10 00:36:57.723788 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 10 00:36:57.723793 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 10 00:36:57.723799 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 10 00:36:57.723805 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 10 00:36:57.723810 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 10 00:36:57.723816 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 10 00:36:57.723821 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 10 00:36:57.723827 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:36:57.723834 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 00:36:57.723840 kernel: psci: probing for conduit method from ACPI. Jul 10 00:36:57.723845 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 10 00:36:57.723851 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 00:36:57.723857 kernel: psci: Trusted OS migration not required Jul 10 00:36:57.723864 kernel: psci: SMC Calling Convention v1.1 Jul 10 00:36:57.723870 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 00:36:57.723878 kernel: ACPI: SRAT not present Jul 10 00:36:57.723884 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 10 00:36:57.723915 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 10 00:36:57.723922 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 00:36:57.723928 kernel: Detected PIPT I-cache on CPU0 Jul 10 00:36:57.723934 kernel: CPU features: detected: GIC system register CPU interface Jul 10 00:36:57.723940 kernel: CPU features: detected: Hardware dirty bit management Jul 10 00:36:57.723946 kernel: CPU features: detected: Spectre-v4 Jul 10 00:36:57.723952 kernel: CPU features: detected: Spectre-BHB Jul 10 00:36:57.723960 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 00:36:57.723966 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 00:36:57.723972 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 00:36:57.723982 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 00:36:57.723988 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 10 00:36:57.723994 kernel: Policy zone: DMA Jul 10 00:36:57.724001 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:36:57.724008 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:36:57.724014 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:36:57.724020 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:36:57.724026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:36:57.724037 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Jul 10 00:36:57.724043 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:36:57.724049 kernel: trace event string verifier disabled Jul 10 00:36:57.724055 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:36:57.724062 kernel: rcu: RCU event tracing is enabled. Jul 10 00:36:57.724068 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:36:57.724074 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:36:57.724080 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:36:57.724087 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 10 00:36:57.724093 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:36:57.724099 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 00:36:57.724106 kernel: GICv3: 256 SPIs implemented Jul 10 00:36:57.724113 kernel: GICv3: 0 Extended SPIs implemented Jul 10 00:36:57.724119 kernel: GICv3: Distributor has no Range Selector support Jul 10 00:36:57.724125 kernel: Root IRQ handler: gic_handle_irq Jul 10 00:36:57.724131 kernel: GICv3: 16 PPIs implemented Jul 10 00:36:57.724137 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 00:36:57.724145 kernel: ACPI: SRAT not present Jul 10 00:36:57.724151 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 00:36:57.724157 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 10 00:36:57.724164 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 10 00:36:57.724172 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 10 00:36:57.724178 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 10 00:36:57.724186 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:36:57.724192 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 00:36:57.724198 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 00:36:57.724204 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 00:36:57.724211 kernel: arm-pv: using stolen time PV Jul 10 00:36:57.724217 kernel: Console: colour dummy device 80x25 Jul 10 00:36:57.724223 kernel: ACPI: Core revision 20210730 Jul 10 00:36:57.724230 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 00:36:57.724236 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:36:57.724242 kernel: LSM: Security Framework initializing Jul 10 00:36:57.724250 kernel: SELinux: Initializing. Jul 10 00:36:57.724257 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:36:57.724263 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:36:57.724269 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:36:57.724276 kernel: Platform MSI: ITS@0x8080000 domain created Jul 10 00:36:57.724282 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 10 00:36:57.724289 kernel: Remapping and enabling EFI services. Jul 10 00:36:57.724295 kernel: smp: Bringing up secondary CPUs ... 
Jul 10 00:36:57.724307 kernel: Detected PIPT I-cache on CPU1 Jul 10 00:36:57.724316 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 00:36:57.724322 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 10 00:36:57.724329 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:36:57.724335 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 00:36:57.724342 kernel: Detected PIPT I-cache on CPU2 Jul 10 00:36:57.724348 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 00:36:57.724355 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 10 00:36:57.724362 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:36:57.724368 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 00:36:57.724374 kernel: Detected PIPT I-cache on CPU3 Jul 10 00:36:57.724382 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 00:36:57.724388 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 10 00:36:57.724394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:36:57.724401 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 00:36:57.724411 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:36:57.724418 kernel: SMP: Total of 4 processors activated. Jul 10 00:36:57.724425 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 00:36:57.724432 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 00:36:57.724439 kernel: CPU features: detected: Common not Private translations Jul 10 00:36:57.724445 kernel: CPU features: detected: CRC32 instructions Jul 10 00:36:57.724452 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 00:36:57.724458 kernel: CPU features: detected: LSE atomic instructions Jul 10 00:36:57.724467 kernel: CPU features: detected: Privileged Access Never Jul 10 00:36:57.724473 kernel: CPU features: detected: RAS Extension Support Jul 10 00:36:57.724480 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 00:36:57.724487 kernel: CPU: All CPU(s) started at EL1 Jul 10 00:36:57.724496 kernel: alternatives: patching kernel code Jul 10 00:36:57.724504 kernel: devtmpfs: initialized Jul 10 00:36:57.724515 kernel: KASLR enabled Jul 10 00:36:57.724523 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:36:57.724529 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:36:57.724536 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:36:57.724543 kernel: SMBIOS 3.0.0 present. 
Jul 10 00:36:57.724550 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 10 00:36:57.724557 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:36:57.724566 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 00:36:57.724575 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 00:36:57.724597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 00:36:57.724604 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:36:57.724612 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 Jul 10 00:36:57.724619 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:36:57.724630 kernel: cpuidle: using governor menu Jul 10 00:36:57.724637 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 00:36:57.724643 kernel: ASID allocator initialised with 32768 entries Jul 10 00:36:57.724650 kernel: ACPI: bus type PCI registered Jul 10 00:36:57.724658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:36:57.724664 kernel: Serial: AMBA PL011 UART driver Jul 10 00:36:57.724671 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:36:57.724678 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 00:36:57.724685 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:36:57.724692 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 00:36:57.724698 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:36:57.724705 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 00:36:57.724712 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:36:57.724720 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:36:57.724727 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:36:57.724733 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:36:57.724740 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:36:57.724747 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:36:57.724754 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:36:57.724760 kernel: ACPI: Interpreter enabled Jul 10 00:36:57.724767 kernel: ACPI: Using GIC for interrupt routing Jul 10 00:36:57.724774 kernel: ACPI: MCFG table detected, 1 entries Jul 10 00:36:57.724782 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 00:36:57.724789 kernel: printk: console [ttyAMA0] enabled Jul 10 00:36:57.724795 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:36:57.724934 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:36:57.724999 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 00:36:57.725059 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 00:36:57.725119 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 00:36:57.725191 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 00:36:57.725201 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 00:36:57.725208 kernel: PCI host bridge to bus 0000:00 Jul 10 00:36:57.725285 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 00:36:57.725345 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 
00:36:57.725399 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 00:36:57.725451 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:36:57.725533 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 10 00:36:57.725601 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 10 00:36:57.725660 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 10 00:36:57.725718 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 10 00:36:57.725775 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:36:57.725831 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:36:57.725939 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 10 00:36:57.726016 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 10 00:36:57.726074 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 00:36:57.726128 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 00:36:57.726182 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 00:36:57.726190 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 00:36:57.726197 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 00:36:57.726204 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 00:36:57.726210 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 00:36:57.726219 kernel: iommu: Default domain type: Translated Jul 10 00:36:57.726225 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 00:36:57.726232 kernel: vgaarb: loaded Jul 10 00:36:57.726238 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:36:57.726247 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:36:57.726255 kernel: PTP clock support registered Jul 10 00:36:57.726262 kernel: Registered efivars operations Jul 10 00:36:57.726271 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 00:36:57.726277 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:36:57.726289 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:36:57.726296 kernel: pnp: PnP ACPI init Jul 10 00:36:57.726359 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 00:36:57.726371 kernel: pnp: PnP ACPI: found 1 devices Jul 10 00:36:57.726378 kernel: NET: Registered PF_INET protocol family Jul 10 00:36:57.726385 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:36:57.726391 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:36:57.726398 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:36:57.726408 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:36:57.726415 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 10 00:36:57.726422 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:36:57.726428 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:36:57.726435 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:36:57.726441 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:36:57.726448 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:36:57.726454 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 10 00:36:57.726461 kernel: kvm [1]: HYP mode not available Jul 10 00:36:57.726469 kernel: Initialise system trusted keyrings Jul 10 00:36:57.726476 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:36:57.726482 kernel: Key type asymmetric registered Jul 10 00:36:57.726488 kernel: Asymmetric key parser 'x509' registered Jul 10 00:36:57.726495 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:36:57.726502 kernel: io scheduler mq-deadline registered Jul 10 00:36:57.726510 kernel: io scheduler kyber registered Jul 10 00:36:57.726517 kernel: io scheduler bfq registered Jul 10 00:36:57.726524 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 00:36:57.726532 kernel: ACPI: button: Power Button [PWRB] Jul 10 00:36:57.726539 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 00:36:57.726597 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 00:36:57.726606 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:36:57.726613 kernel: thunder_xcv, ver 1.0 Jul 10 00:36:57.726619 kernel: thunder_bgx, ver 1.0 Jul 10 00:36:57.726626 kernel: nicpf, ver 1.0 Jul 10 00:36:57.726632 kernel: nicvf, ver 1.0 Jul 10 00:36:57.726700 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 00:36:57.726761 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:36:57 UTC (1752107817) Jul 10 00:36:57.726771 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:36:57.726778 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:36:57.726784 kernel: Segment Routing with IPv6 Jul 10 00:36:57.726791 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:36:57.726797 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:36:57.726804 kernel: Key type 
dns_resolver registered Jul 10 00:36:57.726810 kernel: registered taskstats version 1 Jul 10 00:36:57.726818 kernel: Loading compiled-in X.509 certificates Jul 10 00:36:57.726825 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 9e274a0dc4fc3d34232d90d226b034c4fe0e3e22' Jul 10 00:36:57.726832 kernel: Key type .fscrypt registered Jul 10 00:36:57.726838 kernel: Key type fscrypt-provisioning registered Jul 10 00:36:57.726845 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:36:57.726851 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:36:57.726858 kernel: ima: No architecture policies found Jul 10 00:36:57.726864 kernel: clk: Disabling unused clocks Jul 10 00:36:57.726870 kernel: Freeing unused kernel memory: 36416K Jul 10 00:36:57.726878 kernel: Run /init as init process Jul 10 00:36:57.726884 kernel: with arguments: Jul 10 00:36:57.726907 kernel: /init Jul 10 00:36:57.726914 kernel: with environment: Jul 10 00:36:57.726920 kernel: HOME=/ Jul 10 00:36:57.726927 kernel: TERM=linux Jul 10 00:36:57.726933 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:36:57.726941 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:36:57.726952 systemd[1]: Detected virtualization kvm. Jul 10 00:36:57.726959 systemd[1]: Detected architecture arm64. Jul 10 00:36:57.726966 systemd[1]: Running in initrd. Jul 10 00:36:57.726973 systemd[1]: No hostname configured, using default hostname. Jul 10 00:36:57.726979 systemd[1]: Hostname set to . Jul 10 00:36:57.726987 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:36:57.726994 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:36:57.727001 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:36:57.727009 systemd[1]: Reached target cryptsetup.target. Jul 10 00:36:57.727016 systemd[1]: Reached target paths.target. Jul 10 00:36:57.727023 systemd[1]: Reached target slices.target. Jul 10 00:36:57.727029 systemd[1]: Reached target swap.target. Jul 10 00:36:57.727036 systemd[1]: Reached target timers.target. Jul 10 00:36:57.727043 systemd[1]: Listening on iscsid.socket. Jul 10 00:36:57.727050 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:36:57.727059 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:36:57.727068 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:36:57.727076 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:36:57.727083 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:36:57.727090 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:36:57.727097 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:36:57.727103 systemd[1]: Reached target sockets.target. Jul 10 00:36:57.727110 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:36:57.727117 systemd[1]: Finished network-cleanup.service. Jul 10 00:36:57.727131 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:36:57.727139 systemd[1]: Starting systemd-journald.service... Jul 10 00:36:57.727149 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:36:57.727156 systemd[1]: Starting systemd-resolved.service... Jul 10 00:36:57.727167 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 10 00:36:57.727175 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:36:57.727182 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:36:57.727189 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:36:57.727199 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:36:57.727208 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:36:57.727215 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:36:57.727227 systemd-journald[290]: Journal started Jul 10 00:36:57.727270 systemd-journald[290]: Runtime Journal (/run/log/journal/c7b1a099ace04e88a951a819c4a1626a) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:36:57.708731 systemd-modules-load[291]: Inserted module 'overlay' Jul 10 00:36:57.729799 systemd[1]: Started systemd-journald.service. Jul 10 00:36:57.729824 kernel: audit: type=1130 audit(1752107817.728:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.729784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:36:57.734837 kernel: Bridge firewalling registered Jul 10 00:36:57.734857 kernel: audit: type=1130 audit(1752107817.731:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.732346 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 10 00:36:57.733082 systemd-resolved[292]: Positive Trust Anchors: Jul 10 00:36:57.733089 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:36:57.733116 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:36:57.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.737797 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 10 00:36:57.752667 kernel: audit: type=1130 audit(1752107817.741:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.752693 kernel: SCSI subsystem initialized Jul 10 00:36:57.752702 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jul 10 00:36:57.752712 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:36:57.752720 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:36:57.741225 systemd[1]: Started systemd-resolved.service. Jul 10 00:36:57.757550 kernel: audit: type=1130 audit(1752107817.753:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.741958 systemd[1]: Reached target nss-lookup.target. Jul 10 00:36:57.751738 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 00:36:57.754742 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:36:57.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.756549 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 10 00:36:57.765993 kernel: audit: type=1130 audit(1752107817.761:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.758664 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:36:57.765227 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:36:57.768589 dracut-cmdline[308]: dracut-dracut-053 Jul 10 00:36:57.769380 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:36:57.775329 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:36:57.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.778910 kernel: audit: type=1130 audit(1752107817.776:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.826918 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:36:57.840929 kernel: iscsi: registered transport (tcp) Jul 10 00:36:57.855919 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:36:57.855942 kernel: QLogic iSCSI HBA Driver Jul 10 00:36:57.888278 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:36:57.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:57.889723 systemd[1]: Starting dracut-pre-udev.service... Jul 10 00:36:57.891958 kernel: audit: type=1130 audit(1752107817.887:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:36:57.933912 kernel: raid6: neonx8 gen() 13768 MB/s Jul 10 00:36:57.950904 kernel: raid6: neonx8 xor() 10806 MB/s Jul 10 00:36:57.967896 kernel: raid6: neonx4 gen() 13557 MB/s Jul 10 00:36:57.984900 kernel: raid6: neonx4 xor() 11225 MB/s Jul 10 00:36:58.001899 kernel: raid6: neonx2 gen() 12955 MB/s Jul 10 00:36:58.018902 kernel: raid6: neonx2 xor() 10335 MB/s Jul 10 00:36:58.035907 kernel: raid6: neonx1 gen() 10570 MB/s Jul 10 00:36:58.052908 kernel: raid6: neonx1 xor() 8711 MB/s Jul 10 00:36:58.069909 kernel: raid6: int64x8 gen() 6269 MB/s Jul 10 00:36:58.086902 kernel: raid6: int64x8 xor() 3542 MB/s Jul 10 00:36:58.103912 kernel: raid6: int64x4 gen() 7204 MB/s Jul 10 00:36:58.120905 kernel: raid6: int64x4 xor() 3849 MB/s Jul 10 00:36:58.137903 kernel: raid6: int64x2 gen() 6150 MB/s Jul 10 00:36:58.154907 kernel: raid6: int64x2 xor() 3314 MB/s Jul 10 00:36:58.171913 kernel: raid6: int64x1 gen() 5040 MB/s Jul 10 00:36:58.189106 kernel: raid6: int64x1 xor() 2645 MB/s Jul 10 00:36:58.189119 kernel: raid6: using algorithm neonx8 gen() 13768 MB/s Jul 10 00:36:58.189128 kernel: raid6: .... xor() 10806 MB/s, rmw enabled Jul 10 00:36:58.189136 kernel: raid6: using neon recovery algorithm Jul 10 00:36:58.199905 kernel: xor: measuring software checksum speed Jul 10 00:36:58.199924 kernel: 8regs : 17177 MB/sec Jul 10 00:36:58.201294 kernel: 32regs : 19191 MB/sec Jul 10 00:36:58.201306 kernel: arm64_neon : 27785 MB/sec Jul 10 00:36:58.201314 kernel: xor: using function: arm64_neon (27785 MB/sec) Jul 10 00:36:58.257918 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 10 00:36:58.268314 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:36:58.272511 kernel: audit: type=1130 audit(1752107818.268:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:58.272544 kernel: audit: type=1334 audit(1752107818.270:10): prog-id=7 op=LOAD Jul 10 00:36:58.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:58.270000 audit: BPF prog-id=7 op=LOAD Jul 10 00:36:58.271000 audit: BPF prog-id=8 op=LOAD Jul 10 00:36:58.272913 systemd[1]: Starting systemd-udevd.service... Jul 10 00:36:58.284824 systemd-udevd[491]: Using default interface naming scheme 'v252'. Jul 10 00:36:58.288170 systemd[1]: Started systemd-udevd.service. Jul 10 00:36:58.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:58.290206 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:36:58.302095 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 10 00:36:58.330822 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:36:58.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:58.332312 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:36:58.366405 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 10 00:36:58.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:58.413951 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:36:58.419108 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:36:58.419124 kernel: GPT:9289727 != 19775487 Jul 10 00:36:58.419137 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:36:58.419146 kernel: GPT:9289727 != 19775487 Jul 10 00:36:58.419154 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:36:58.419161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:36:58.429918 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (545) Jul 10 00:36:58.434121 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:36:58.438718 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:36:58.439540 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:36:58.443323 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:36:58.448363 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:36:58.450037 systemd[1]: Starting disk-uuid.service... Jul 10 00:36:58.456611 disk-uuid[561]: Primary Header is updated. Jul 10 00:36:58.456611 disk-uuid[561]: Secondary Entries is updated. Jul 10 00:36:58.456611 disk-uuid[561]: Secondary Header is updated. Jul 10 00:36:58.459087 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:36:58.468921 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:36:58.470909 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:36:59.472710 disk-uuid[562]: The operation has completed successfully. Jul 10 00:36:59.474179 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:36:59.498115 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:36:59.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.498207 systemd[1]: Finished disk-uuid.service. Jul 10 00:36:59.499749 systemd[1]: Starting verity-setup.service... Jul 10 00:36:59.520879 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 00:36:59.546548 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:36:59.548628 systemd[1]: Finished verity-setup.service. Jul 10 00:36:59.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.550413 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:36:59.606751 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:36:59.607945 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:36:59.607617 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:36:59.608360 systemd[1]: Starting ignition-setup.service... 
Jul 10 00:36:59.610170 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 00:36:59.623115 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:36:59.623163 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:36:59.623173 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:36:59.634085 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:36:59.639586 systemd[1]: Finished ignition-setup.service. Jul 10 00:36:59.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.641001 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:36:59.719873 ignition[644]: Ignition 2.14.0 Jul 10 00:36:59.719881 ignition[644]: Stage: fetch-offline Jul 10 00:36:59.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.720091 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:36:59.722000 audit: BPF prog-id=9 op=LOAD Jul 10 00:36:59.719938 ignition[644]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:36:59.719949 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:36:59.723056 systemd[1]: Starting systemd-networkd.service... Jul 10 00:36:59.720082 ignition[644]: parsed url from cmdline: "" Jul 10 00:36:59.720085 ignition[644]: no config URL provided Jul 10 00:36:59.720090 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:36:59.720097 ignition[644]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:36:59.720115 ignition[644]: op(1): [started] loading QEMU firmware config module Jul 10 00:36:59.720120 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:36:59.727049 ignition[644]: op(1): [finished] loading QEMU firmware config module Jul 10 00:36:59.738674 ignition[644]: parsing config with SHA512: d985ad52b18dacba5625bc2b5d0f8689aec2732c58357f8d01910ebfc1571b7ade4595bb9131ee9d735b5ec348c14dcdf42b42b79d557e87a652f654dfb2bef8 Jul 10 00:36:59.742124 unknown[644]: fetched base config from "system" Jul 10 00:36:59.742133 unknown[644]: fetched user config from "qemu" Jul 10 00:36:59.742412 ignition[644]: fetch-offline: fetch-offline passed Jul 10 00:36:59.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.743637 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:36:59.742461 ignition[644]: Ignition finished successfully Jul 10 00:36:59.748746 systemd-networkd[740]: lo: Link UP Jul 10 00:36:59.748758 systemd-networkd[740]: lo: Gained carrier Jul 10 00:36:59.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.749393 systemd-networkd[740]: Enumeration completed Jul 10 00:36:59.749482 systemd[1]: Started systemd-networkd.service. Jul 10 00:36:59.749756 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:36:59.750194 systemd[1]: Reached target network.target. 
Jul 10 00:36:59.751252 systemd-networkd[740]: eth0: Link UP Jul 10 00:36:59.751255 systemd-networkd[740]: eth0: Gained carrier Jul 10 00:36:59.751415 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:36:59.752166 systemd[1]: Starting ignition-kargs.service... Jul 10 00:36:59.761726 ignition[742]: Ignition 2.14.0 Jul 10 00:36:59.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.754539 systemd[1]: Starting iscsiuio.service... Jul 10 00:36:59.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.761733 ignition[742]: Stage: kargs Jul 10 00:36:59.763472 systemd[1]: Started iscsiuio.service. Jul 10 00:36:59.761830 ignition[742]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:36:59.764758 systemd[1]: Finished ignition-kargs.service. Jul 10 00:36:59.761839 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:36:59.767448 systemd[1]: Starting ignition-disks.service... Jul 10 00:36:59.773923 iscsid[752]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:36:59.773923 iscsid[752]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 10 00:36:59.773923 iscsid[752]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:36:59.773923 iscsid[752]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:36:59.773923 iscsid[752]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:36:59.773923 iscsid[752]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:36:59.773923 iscsid[752]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:36:59.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.762498 ignition[742]: kargs: kargs passed Jul 10 00:36:59.769168 systemd[1]: Starting iscsid.service... Jul 10 00:36:59.762541 ignition[742]: Ignition finished successfully Jul 10 00:36:59.774977 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:36:59.774361 ignition[751]: Ignition 2.14.0 Jul 10 00:36:59.776036 systemd[1]: Started iscsid.service. Jul 10 00:36:59.774367 ignition[751]: Stage: disks Jul 10 00:36:59.779060 systemd[1]: Finished ignition-disks.service. Jul 10 00:36:59.774461 ignition[751]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:36:59.781765 systemd[1]: Reached target initrd-root-device.target. 
Jul 10 00:36:59.774470 ignition[751]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:36:59.783078 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:36:59.775175 ignition[751]: disks: disks passed Jul 10 00:36:59.784464 systemd[1]: Reached target local-fs.target. Jul 10 00:36:59.775217 ignition[751]: Ignition finished successfully Jul 10 00:36:59.786036 systemd[1]: Reached target sysinit.target. Jul 10 00:36:59.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.787558 systemd[1]: Reached target basic.target. Jul 10 00:36:59.789453 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:36:59.799215 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:36:59.800249 systemd[1]: Reached target remote-fs-pre.target. Jul 10 00:36:59.801265 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:36:59.802371 systemd[1]: Reached target remote-fs.target. Jul 10 00:36:59.804375 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:36:59.812032 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:36:59.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.813427 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:36:59.823726 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 10 00:36:59.827550 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:36:59.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.831438 systemd[1]: Mounting sysroot.mount... Jul 10 00:36:59.839732 systemd[1]: Mounted sysroot.mount. Jul 10 00:36:59.840804 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:36:59.840338 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:36:59.843171 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:36:59.843863 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:36:59.843922 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:36:59.843947 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:36:59.845794 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:36:59.847252 systemd[1]: Starting initrd-setup-root.service... Jul 10 00:36:59.851315 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:36:59.854654 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:36:59.857863 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:36:59.861731 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:36:59.889976 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:36:59.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.891388 systemd[1]: Starting ignition-mount.service... 
Jul 10 00:36:59.892539 systemd[1]: Starting sysroot-boot.service... Jul 10 00:36:59.896945 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Jul 10 00:36:59.905573 ignition[827]: INFO : Ignition 2.14.0 Jul 10 00:36:59.906506 ignition[827]: INFO : Stage: mount Jul 10 00:36:59.907328 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:36:59.908355 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:36:59.910316 ignition[827]: INFO : mount: mount passed Jul 10 00:36:59.911128 ignition[827]: INFO : Ignition finished successfully Jul 10 00:36:59.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:36:59.911094 systemd[1]: Finished ignition-mount.service. Jul 10 00:36:59.915333 systemd[1]: Finished sysroot-boot.service. Jul 10 00:36:59.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:00.563044 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 00:37:00.569929 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (835) Jul 10 00:37:00.569968 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:37:00.571912 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:37:00.571938 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:37:00.574687 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:37:00.576080 systemd[1]: Starting ignition-files.service... Jul 10 00:37:00.589706 ignition[855]: INFO : Ignition 2.14.0 Jul 10 00:37:00.589706 ignition[855]: INFO : Stage: files Jul 10 00:37:00.590966 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:37:00.590966 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:37:00.590966 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:37:00.593295 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:37:00.593295 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:37:00.595344 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:37:00.595344 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:37:00.597299 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 00:37:00.597299 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 10 00:37:00.595774 unknown[855]: wrote ssh authorized keys file for user: core Jul 10 00:37:01.176542 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 10 00:37:01.750255 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 10 00:37:01.750255 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 10 00:37:01.752838 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:37:01.754522 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:37:01.754522 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 10 00:37:01.754522 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:37:01.754522 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:37:01.765999 systemd-networkd[740]: eth0: Gained IPv6LL Jul 10 00:37:01.785752 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:37:01.786952 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:37:01.786952 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:37:01.786952 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:37:01.786952 ignition[855]: INFO : files: files passed Jul 10 00:37:01.786952 ignition[855]: INFO : Ignition finished successfully Jul 10 00:37:01.789943 systemd[1]: Finished ignition-files.service. Jul 10 00:37:01.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.799427 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:37:01.800119 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 00:37:01.802208 systemd[1]: Starting ignition-quench.service... Jul 10 00:37:01.805318 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:37:01.805407 systemd[1]: Finished ignition-quench.service. 
Jul 10 00:37:01.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.808202 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:37:01.809664 initrd-setup-root-after-ignition[882]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:37:01.810342 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:37:01.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.811908 systemd[1]: Reached target ignition-complete.target. Jul 10 00:37:01.813971 systemd[1]: Starting initrd-parse-etc.service... Jul 10 00:37:01.826536 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:37:01.826635 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:37:01.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.828222 systemd[1]: Reached target initrd-fs.target. Jul 10 00:37:01.829348 systemd[1]: Reached target initrd.target. Jul 10 00:37:01.830489 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:37:01.831265 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:37:01.841596 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 00:37:01.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.842976 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:37:01.850820 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:37:01.851584 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:37:01.852838 systemd[1]: Stopped target timers.target. Jul 10 00:37:01.854148 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:37:01.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.854253 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:37:01.855392 systemd[1]: Stopped target initrd.target. Jul 10 00:37:01.856682 systemd[1]: Stopped target basic.target. Jul 10 00:37:01.857787 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:37:01.859009 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:37:01.860090 systemd[1]: Stopped target initrd-root-device.target. Jul 10 00:37:01.861422 systemd[1]: Stopped target remote-fs.target. Jul 10 00:37:01.862572 systemd[1]: Stopped target remote-fs-pre.target. 
Jul 10 00:37:01.863814 systemd[1]: Stopped target sysinit.target. Jul 10 00:37:01.864944 systemd[1]: Stopped target local-fs.target. Jul 10 00:37:01.866189 systemd[1]: Stopped target local-fs-pre.target. Jul 10 00:37:01.867435 systemd[1]: Stopped target swap.target. Jul 10 00:37:01.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.868512 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:37:01.868628 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:37:01.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.869847 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:37:01.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.870883 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:37:01.870988 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:37:01.872301 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:37:01.872391 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:37:01.873534 systemd[1]: Stopped target paths.target. Jul 10 00:37:01.874582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:37:01.877918 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:37:01.879194 systemd[1]: Stopped target slices.target. Jul 10 00:37:01.880511 systemd[1]: Stopped target sockets.target. Jul 10 00:37:01.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.881730 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:37:01.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.881844 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:37:01.886516 iscsid[752]: iscsid shutting down. Jul 10 00:37:01.882979 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:37:01.883066 systemd[1]: Stopped ignition-files.service. Jul 10 00:37:01.884959 systemd[1]: Stopping ignition-mount.service... Jul 10 00:37:01.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.885791 systemd[1]: Stopping iscsid.service... Jul 10 00:37:01.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.887655 systemd[1]: Stopping sysroot-boot.service... 
Jul 10 00:37:01.893177 ignition[895]: INFO : Ignition 2.14.0 Jul 10 00:37:01.893177 ignition[895]: INFO : Stage: umount Jul 10 00:37:01.893177 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:37:01.893177 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:37:01.893177 ignition[895]: INFO : umount: umount passed Jul 10 00:37:01.893177 ignition[895]: INFO : Ignition finished successfully Jul 10 00:37:01.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.888936 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:37:01.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.889071 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:37:01.890409 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:37:01.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.890506 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:37:01.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.893851 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:37:01.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.893981 systemd[1]: Stopped iscsid.service. Jul 10 00:37:01.895126 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:37:01.895209 systemd[1]: Stopped ignition-mount.service. Jul 10 00:37:01.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.898418 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:37:01.899690 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:37:01.899772 systemd[1]: Finished initrd-cleanup.service. Jul 10 00:37:01.901033 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:37:01.901068 systemd[1]: Closed iscsid.socket. Jul 10 00:37:01.903145 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:37:01.903186 systemd[1]: Stopped ignition-disks.service. Jul 10 00:37:01.904471 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:37:01.904508 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:37:01.905834 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jul 10 00:37:01.905875 systemd[1]: Stopped ignition-setup.service. Jul 10 00:37:01.907585 systemd[1]: Stopping iscsiuio.service... Jul 10 00:37:01.909009 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:37:01.909088 systemd[1]: Stopped iscsiuio.service. Jul 10 00:37:01.910131 systemd[1]: Stopped target network.target. Jul 10 00:37:01.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.911416 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:37:01.911444 systemd[1]: Closed iscsiuio.socket. Jul 10 00:37:01.913728 systemd[1]: Stopping systemd-networkd.service... Jul 10 00:37:01.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.914979 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:37:01.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.923226 systemd-networkd[740]: eth0: DHCPv6 lease lost Jul 10 00:37:01.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.924398 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:37:01.924484 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:37:01.934000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:37:01.925470 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:37:01.925498 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:37:01.927237 systemd[1]: Stopping network-cleanup.service... Jul 10 00:37:01.936000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.927747 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:37:01.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.927798 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:37:01.939000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.928998 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:37:01.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.929036 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:37:01.941000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:37:01.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.930878 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jul 10 00:37:01.930958 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:37:01.931663 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:37:01.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.936205 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:37:01.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.936651 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:37:01.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.936741 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:37:01.938017 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:37:01.938095 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:37:01.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.939317 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:37:01.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.939361 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:37:01.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.940202 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:37:01.940319 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:37:01.941408 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:37:01.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:01.941485 systemd[1]: Stopped network-cleanup.service. Jul 10 00:37:01.942495 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:37:01.942527 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:37:01.943412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:37:01.943443 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:37:01.944507 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:37:01.944545 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:37:01.945551 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:37:01.945587 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:37:01.946534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jul 10 00:37:01.946567 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:37:01.948500 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:37:01.949509 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:37:01.949569 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 10 00:37:01.951135 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:37:01.951173 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 00:37:01.951779 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:37:01.951818 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:37:01.953674 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 10 00:37:01.954088 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:37:01.954163 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:37:01.955486 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:37:01.957175 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:37:01.963204 systemd[1]: Switching root. Jul 10 00:37:01.982099 systemd-journald[290]: Journal stopped Jul 10 00:37:03.954545 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 10 00:37:03.954590 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:37:03.954602 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 00:37:03.954614 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:37:03.954624 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:37:03.954634 kernel: SELinux: policy capability open_perms=1 Jul 10 00:37:03.954646 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:37:03.954656 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:37:03.954673 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:37:03.954682 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:37:03.954692 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:37:03.954701 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:37:03.954710 kernel: kauditd_printk_skb: 64 callbacks suppressed Jul 10 00:37:03.954720 kernel: audit: type=1403 audit(1752107822.045:75): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:37:03.954732 systemd[1]: Successfully loaded SELinux policy in 32.868ms. Jul 10 00:37:03.954748 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.799ms. Jul 10 00:37:03.954760 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:37:03.954770 systemd[1]: Detected virtualization kvm. Jul 10 00:37:03.954781 systemd[1]: Detected architecture arm64. Jul 10 00:37:03.954791 systemd[1]: Detected first boot. Jul 10 00:37:03.954801 systemd[1]: Initializing machine ID from VM UUID. 
Jul 10 00:37:03.954812 kernel: audit: type=1400 audit(1752107822.116:76): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:37:03.954824 kernel: audit: type=1400 audit(1752107822.116:77): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:37:03.954833 kernel: audit: type=1334 audit(1752107822.116:78): prog-id=10 op=LOAD Jul 10 00:37:03.954843 kernel: audit: type=1334 audit(1752107822.116:79): prog-id=10 op=UNLOAD Jul 10 00:37:03.954852 kernel: audit: type=1334 audit(1752107822.118:80): prog-id=11 op=LOAD Jul 10 00:37:03.954873 kernel: audit: type=1334 audit(1752107822.118:81): prog-id=11 op=UNLOAD Jul 10 00:37:03.954884 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:37:03.954907 kernel: audit: type=1400 audit(1752107822.162:82): avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:37:03.954923 kernel: audit: type=1300 audit(1752107822.162:82): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001058cc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:03.954936 kernel: audit: type=1327 audit(1752107822.162:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:37:03.954949 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:37:03.954961 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:37:03.954972 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:37:03.954983 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:37:03.954993 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:37:03.955003 systemd[1]: Stopped initrd-switch-root.service. Jul 10 00:37:03.955014 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:37:03.955027 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:37:03.955037 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:37:03.955048 systemd[1]: Created slice system-getty.slice. Jul 10 00:37:03.955062 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:37:03.955072 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:37:03.955084 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Jul 10 00:37:03.955095 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:37:03.955106 systemd[1]: Created slice user.slice. Jul 10 00:37:03.955116 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:37:03.955127 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 00:37:03.955138 systemd[1]: Set up automount boot.automount. Jul 10 00:37:03.955149 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:37:03.955160 systemd[1]: Stopped target initrd-switch-root.target. Jul 10 00:37:03.955171 systemd[1]: Stopped target initrd-fs.target. Jul 10 00:37:03.955182 systemd[1]: Stopped target initrd-root-fs.target. Jul 10 00:37:03.955193 systemd[1]: Reached target integritysetup.target. Jul 10 00:37:03.955203 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:37:03.955213 systemd[1]: Reached target remote-fs.target. Jul 10 00:37:03.955224 systemd[1]: Reached target slices.target. Jul 10 00:37:03.955234 systemd[1]: Reached target swap.target. Jul 10 00:37:03.955245 systemd[1]: Reached target torcx.target. Jul 10 00:37:03.955255 systemd[1]: Reached target veritysetup.target. Jul 10 00:37:03.955267 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:37:03.955277 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:37:03.955287 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:37:03.955298 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:37:03.955308 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:37:03.955319 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:37:03.955330 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:37:03.955340 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:37:03.955351 systemd[1]: Mounting media.mount... Jul 10 00:37:03.955361 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:37:03.955373 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:37:03.955384 systemd[1]: Mounting tmp.mount... Jul 10 00:37:03.955394 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:37:03.955404 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:37:03.955415 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:37:03.955425 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:37:03.955436 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:37:03.955447 systemd[1]: Starting modprobe@drm.service... Jul 10 00:37:03.955458 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:37:03.955471 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:37:03.955482 systemd[1]: Starting modprobe@loop.service... Jul 10 00:37:03.955493 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:37:03.955503 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:37:03.955514 systemd[1]: Stopped systemd-fsck-root.service. Jul 10 00:37:03.955526 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:37:03.955536 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:37:03.955546 systemd[1]: Stopped systemd-journald.service. Jul 10 00:37:03.955556 kernel: fuse: init (API version 7.34) Jul 10 00:37:03.955567 kernel: loop: module loaded Jul 10 00:37:03.955578 systemd[1]: Starting systemd-journald.service... Jul 10 00:37:03.955588 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:37:03.955598 systemd[1]: Starting systemd-network-generator.service... 
Jul 10 00:37:03.955609 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:37:03.955620 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:37:03.955630 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:37:03.955640 systemd[1]: Stopped verity-setup.service. Jul 10 00:37:03.955650 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:37:03.955661 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:37:03.955671 systemd[1]: Mounted media.mount. Jul 10 00:37:03.955681 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:37:03.955693 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:37:03.955703 systemd[1]: Mounted tmp.mount. Jul 10 00:37:03.955713 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:37:03.955723 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:37:03.955733 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:37:03.955743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:37:03.955757 systemd-journald[990]: Journal started Jul 10 00:37:03.955798 systemd-journald[990]: Runtime Journal (/run/log/journal/c7b1a099ace04e88a951a819c4a1626a) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:37:02.045000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:37:02.116000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:37:03.957873 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:37:03.957926 systemd[1]: Started systemd-journald.service. Jul 10 00:37:02.116000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:37:02.116000 audit: BPF prog-id=10 op=LOAD Jul 10 00:37:02.116000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:37:02.118000 audit: BPF prog-id=11 op=LOAD Jul 10 00:37:02.118000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:37:02.162000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:37:02.162000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001058cc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:02.162000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:37:02.163000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:37:02.163000 audit[928]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001059a5 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" 
exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:02.163000 audit: CWD cwd="/" Jul 10 00:37:02.163000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:37:02.163000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:37:02.163000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:37:03.831000 audit: BPF prog-id=12 op=LOAD Jul 10 00:37:03.831000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:37:03.831000 audit: BPF prog-id=13 op=LOAD Jul 10 00:37:03.831000 audit: BPF prog-id=14 op=LOAD Jul 10 00:37:03.831000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:37:03.831000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=15 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=16 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=17 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=18 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=19 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=20 op=LOAD Jul 10 00:37:03.832000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:37:03.832000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:37:03.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.843000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:37:03.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 10 00:37:03.922000 audit: BPF prog-id=21 op=LOAD Jul 10 00:37:03.922000 audit: BPF prog-id=22 op=LOAD Jul 10 00:37:03.922000 audit: BPF prog-id=23 op=LOAD Jul 10 00:37:03.922000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:37:03.922000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:37:03.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.951000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:37:03.951000 audit[990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc0da6320 a2=4000 a3=1 items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:03.951000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:37:03.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.830538 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:37:02.161191 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:37:03.830550 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:37:02.161475 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:37:03.833938 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jul 10 00:37:02.161502 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:37:03.958871 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:37:02.161532 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 10 00:37:03.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.959015 systemd[1]: Finished modprobe@drm.service. Jul 10 00:37:02.161541 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 10 00:37:02.161571 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 10 00:37:02.161583 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 10 00:37:03.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:02.161774 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 10 00:37:03.959921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:37:02.161811 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:37:03.960034 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:37:02.161823 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:37:03.960988 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:37:02.162779 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 10 00:37:03.961094 systemd[1]: Finished modprobe@fuse.service. 
Jul 10 00:37:02.162819 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 10 00:37:02.162838 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 10 00:37:02.162852 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 10 00:37:03.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:02.162881 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 10 00:37:02.162907 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 10 00:37:03.962129 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:37:03.589438 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:37:03.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.962278 systemd[1]: Finished modprobe@loop.service. 
Jul 10 00:37:03.589699 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:37:03.589788 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:37:03.589972 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:37:03.590020 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 10 00:37:03.590079 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-07-10T00:37:03Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 10 00:37:03.963221 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:37:03.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.964274 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:37:03.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.965239 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:37:03.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.966270 systemd[1]: Reached target network-pre.target. Jul 10 00:37:03.968047 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:37:03.969629 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:37:03.970325 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:37:03.971967 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:37:03.973581 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:37:03.974288 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:37:03.975290 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:37:03.975990 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Jul 10 00:37:03.977092 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:37:03.979848 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:37:03.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.980701 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:37:03.981534 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:37:03.983310 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:37:03.984712 systemd-journald[990]: Time spent on flushing to /var/log/journal/c7b1a099ace04e88a951a819c4a1626a is 12.885ms for 990 entries. Jul 10 00:37:03.984712 systemd-journald[990]: System Journal (/var/log/journal/c7b1a099ace04e88a951a819c4a1626a) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:37:04.003882 systemd-journald[990]: Received client request to flush runtime journal. Jul 10 00:37:03.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:03.989106 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:37:04.004772 udevadm[1028]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 00:37:03.990885 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:37:03.992089 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:37:03.992782 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:37:04.004491 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:37:04.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.005422 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:37:04.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.012274 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:37:04.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.013917 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:37:04.028398 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:37:04.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:04.343000 audit: BPF prog-id=24 op=LOAD Jul 10 00:37:04.343000 audit: BPF prog-id=25 op=LOAD Jul 10 00:37:04.343000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:37:04.343000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:37:04.343341 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:37:04.345489 systemd[1]: Starting systemd-udevd.service... Jul 10 00:37:04.366567 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Jul 10 00:37:04.386073 systemd[1]: Started systemd-udevd.service. Jul 10 00:37:04.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.386000 audit: BPF prog-id=26 op=LOAD Jul 10 00:37:04.388911 systemd[1]: Starting systemd-networkd.service... Jul 10 00:37:04.416818 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 10 00:37:04.421000 audit: BPF prog-id=27 op=LOAD Jul 10 00:37:04.421000 audit: BPF prog-id=28 op=LOAD Jul 10 00:37:04.421000 audit: BPF prog-id=29 op=LOAD Jul 10 00:37:04.422721 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:37:04.430548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:37:04.462993 systemd[1]: Started systemd-userdbd.service. Jul 10 00:37:04.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.494271 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:37:04.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.496251 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:37:04.504768 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:37:04.511318 systemd-networkd[1041]: lo: Link UP Jul 10 00:37:04.511555 systemd-networkd[1041]: lo: Gained carrier Jul 10 00:37:04.512019 systemd-networkd[1041]: Enumeration completed Jul 10 00:37:04.512180 systemd[1]: Started systemd-networkd.service. Jul 10 00:37:04.512299 systemd-networkd[1041]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:37:04.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.513555 systemd-networkd[1041]: eth0: Link UP Jul 10 00:37:04.513640 systemd-networkd[1041]: eth0: Gained carrier Jul 10 00:37:04.535795 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:37:04.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.536629 systemd[1]: Reached target cryptsetup.target. Jul 10 00:37:04.538035 systemd-networkd[1041]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:37:04.538417 systemd[1]: Starting lvm2-activation.service... Jul 10 00:37:04.542024 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jul 10 00:37:04.574923 systemd[1]: Finished lvm2-activation.service. Jul 10 00:37:04.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.575664 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:37:04.576319 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:37:04.576345 systemd[1]: Reached target local-fs.target. Jul 10 00:37:04.576917 systemd[1]: Reached target machines.target. Jul 10 00:37:04.578741 systemd[1]: Starting ldconfig.service... Jul 10 00:37:04.579939 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:37:04.580001 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:04.581157 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:37:04.582784 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:37:04.585284 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:37:04.587956 systemd[1]: Starting systemd-sysext.service... Jul 10 00:37:04.593524 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Jul 10 00:37:04.594736 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:37:04.605743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:37:04.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.612076 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:37:04.616972 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:37:04.617189 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:37:04.684683 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:37:04.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.686906 kernel: loop0: detected capacity change from 0 to 211168 Jul 10 00:37:04.700179 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31) Jul 10 00:37:04.700179 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters Jul 10 00:37:04.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.702711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:37:04.707912 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:37:04.735911 kernel: loop1: detected capacity change from 0 to 211168 Jul 10 00:37:04.740318 (sd-sysext)[1082]: Using extensions 'kubernetes'. Jul 10 00:37:04.740928 (sd-sysext)[1082]: Merged extensions into '/usr'. Jul 10 00:37:04.759602 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Jul 10 00:37:04.761627 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:37:04.764606 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:37:04.767430 systemd[1]: Starting modprobe@loop.service... Jul 10 00:37:04.768442 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:37:04.768566 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:04.769383 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:37:04.769513 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:37:04.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.771102 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:37:04.771336 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:37:04.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.772995 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:37:04.773211 systemd[1]: Finished modprobe@loop.service. Jul 10 00:37:04.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.774821 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:37:04.775069 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:37:04.804509 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:37:04.808069 systemd[1]: Finished ldconfig.service. Jul 10 00:37:04.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.942612 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:37:04.944439 systemd[1]: Mounting boot.mount... Jul 10 00:37:04.946286 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:37:04.952917 systemd[1]: Mounted boot.mount. Jul 10 00:37:04.953806 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:37:04.955718 systemd[1]: Finished systemd-sysext.service. 
Jul 10 00:37:04.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.957901 systemd[1]: Starting ensure-sysext.service... Jul 10 00:37:04.959614 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:37:04.964423 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:37:04.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:04.965521 systemd[1]: Reloading. Jul 10 00:37:04.969176 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:37:04.970289 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:37:04.971625 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:37:05.000901 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-07-10T00:37:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:37:05.001320 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-07-10T00:37:05Z" level=info msg="torcx already run" Jul 10 00:37:05.060076 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:37:05.060095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:37:05.076240 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:37:05.116000 audit: BPF prog-id=30 op=LOAD Jul 10 00:37:05.116000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:37:05.116000 audit: BPF prog-id=31 op=LOAD Jul 10 00:37:05.116000 audit: BPF prog-id=32 op=LOAD Jul 10 00:37:05.117000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:37:05.117000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:37:05.117000 audit: BPF prog-id=33 op=LOAD Jul 10 00:37:05.117000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:37:05.117000 audit: BPF prog-id=34 op=LOAD Jul 10 00:37:05.117000 audit: BPF prog-id=35 op=LOAD Jul 10 00:37:05.117000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:37:05.117000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:37:05.119000 audit: BPF prog-id=36 op=LOAD Jul 10 00:37:05.119000 audit: BPF prog-id=27 op=UNLOAD Jul 10 00:37:05.119000 audit: BPF prog-id=37 op=LOAD Jul 10 00:37:05.119000 audit: BPF prog-id=38 op=LOAD Jul 10 00:37:05.119000 audit: BPF prog-id=28 op=UNLOAD Jul 10 00:37:05.119000 audit: BPF prog-id=29 op=UNLOAD Jul 10 00:37:05.122808 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:37:05.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:05.126826 systemd[1]: Starting audit-rules.service... Jul 10 00:37:05.128631 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:37:05.132979 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 00:37:05.136000 audit: BPF prog-id=39 op=LOAD Jul 10 00:37:05.139000 audit: BPF prog-id=40 op=LOAD Jul 10 00:37:05.138605 systemd[1]: Starting systemd-resolved.service... Jul 10 00:37:05.140619 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:37:05.142321 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:37:05.146000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.147035 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.148643 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:37:05.150475 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:37:05.152393 systemd[1]: Starting modprobe@loop.service... Jul 10 00:37:05.153049 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.153175 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:05.154074 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:37:05.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.155216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:37:05.155328 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:37:05.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.156362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:37:05.156477 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:37:05.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.157607 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:37:05.157720 systemd[1]: Finished modprobe@loop.service. Jul 10 00:37:05.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:05.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.160394 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:37:05.160532 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.160640 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:37:05.163041 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.164551 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:37:05.166503 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:37:05.168503 systemd[1]: Starting modprobe@loop.service... Jul 10 00:37:05.169185 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.169307 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:05.169395 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:37:05.170292 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:37:05.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.171444 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:37:05.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.172644 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:37:05.172757 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:37:05.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.173922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:37:05.174037 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:37:05.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:05.175152 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:37:05.175272 systemd[1]: Finished modprobe@loop.service. Jul 10 00:37:05.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.179405 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.181039 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:37:05.183204 systemd[1]: Starting modprobe@drm.service... Jul 10 00:37:05.185525 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:37:05.189226 systemd[1]: Starting modprobe@loop.service... Jul 10 00:37:05.190067 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.190203 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:05.195995 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:37:05.198587 systemd[1]: Starting systemd-update-done.service... Jul 10 00:37:05.199345 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:37:05.200861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:37:05.201044 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:37:05.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.202292 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:37:05.202412 systemd[1]: Finished modprobe@drm.service. Jul 10 00:37:05.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.203509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:37:05.203624 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:37:05.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:37:05.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.204723 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:37:05.205030 systemd[1]: Finished modprobe@loop.service. Jul 10 00:37:05.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.206436 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:37:05.206529 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.206761 systemd[1]: Finished systemd-update-done.service. Jul 10 00:37:05.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.207897 systemd[1]: Finished ensure-sysext.service. Jul 10 00:37:05.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:37:05.213000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:37:05.213000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd8b87610 a2=420 a3=0 items=0 ppid=1149 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:37:05.213000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:37:05.215199 augenrules[1179]: No rules Jul 10 00:37:05.215431 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:37:05.215776 systemd-resolved[1153]: Positive Trust Anchors: Jul 10 00:37:05.215786 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:37:05.215812 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:37:05.644324 systemd[1]: Finished audit-rules.service. Jul 10 00:37:05.644460 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:37:05.644732 systemd-timesyncd[1157]: Initial clock synchronization to Thu 2025-07-10 00:37:05.644255 UTC. Jul 10 00:37:05.646128 systemd[1]: Reached target time-set.target. 
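
The CONFIG_CHANGE and SYSCALL records above are audit-rules.service loading the kernel audit rule set; the hex PROCTITLE decodes to '/sbin/auditctl -R /etc/audit/audit.rules', and augenrules reports 'No rules' because no rules are defined on this host. For illustration only, a small hand-written rules file of the kind that service would load; the watch paths and key names are examples, not taken from this system.

# Illustrative /etc/audit/audit.rules; this host currently loads an empty set.
cat <<'EOF' >/etc/audit/audit.rules
## Clear existing rules and enlarge the backlog buffer
-D
-b 8192

## Record writes and attribute changes to identity files under the key "identity"
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity

## Record execve() by regular users (auid >= 1000, excluding the unset auid 4294967295)
-a always,exit -F arch=b64 -S execve -F auid>=1000 -F auid!=4294967295 -k exec-tracking
EOF

auditctl -R /etc/audit/audit.rules   # same invocation as the decoded PROCTITLE above
auditctl -l                          # list the rules now active in the kernel
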
Jul 10 00:37:05.655084 systemd-resolved[1153]: Defaulting to hostname 'linux'. Jul 10 00:37:05.660109 systemd[1]: Started systemd-resolved.service. Jul 10 00:37:05.660828 systemd[1]: Reached target network.target. Jul 10 00:37:05.661424 systemd[1]: Reached target nss-lookup.target. Jul 10 00:37:05.662103 systemd[1]: Reached target sysinit.target. Jul 10 00:37:05.662753 systemd[1]: Started motdgen.path. Jul 10 00:37:05.663351 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:37:05.664410 systemd[1]: Started logrotate.timer. Jul 10 00:37:05.665126 systemd[1]: Started mdadm.timer. Jul 10 00:37:05.665651 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:37:05.666281 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:37:05.666314 systemd[1]: Reached target paths.target. Jul 10 00:37:05.666857 systemd[1]: Reached target timers.target. Jul 10 00:37:05.667698 systemd[1]: Listening on dbus.socket. Jul 10 00:37:05.669336 systemd[1]: Starting docker.socket... Jul 10 00:37:05.672239 systemd[1]: Listening on sshd.socket. Jul 10 00:37:05.672912 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:05.673340 systemd[1]: Listening on docker.socket. Jul 10 00:37:05.674044 systemd[1]: Reached target sockets.target. Jul 10 00:37:05.674655 systemd[1]: Reached target basic.target. Jul 10 00:37:05.675280 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.675310 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:37:05.676300 systemd[1]: Starting containerd.service... Jul 10 00:37:05.677875 systemd[1]: Starting dbus.service... Jul 10 00:37:05.679741 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:37:05.681677 systemd[1]: Starting extend-filesystems.service... Jul 10 00:37:05.682389 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:37:05.683828 systemd[1]: Starting motdgen.service... Jul 10 00:37:05.687484 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:37:05.687771 jq[1191]: false Jul 10 00:37:05.689764 systemd[1]: Starting sshd-keygen.service... Jul 10 00:37:05.693052 systemd[1]: Starting systemd-logind.service... Jul 10 00:37:05.694040 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:37:05.694113 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:37:05.694682 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:37:05.695402 systemd[1]: Starting update-engine.service... Jul 10 00:37:05.697062 extend-filesystems[1192]: Found loop1 Jul 10 00:37:05.697759 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
Jul 10 00:37:05.698737 extend-filesystems[1192]: Found vda Jul 10 00:37:05.699690 extend-filesystems[1192]: Found vda1 Jul 10 00:37:05.701067 jq[1206]: true Jul 10 00:37:05.701032 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:37:05.701209 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:37:05.701471 extend-filesystems[1192]: Found vda2 Jul 10 00:37:05.701481 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:37:05.701699 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:37:05.702598 extend-filesystems[1192]: Found vda3 Jul 10 00:37:05.703322 extend-filesystems[1192]: Found usr Jul 10 00:37:05.704881 extend-filesystems[1192]: Found vda4 Jul 10 00:37:05.708978 extend-filesystems[1192]: Found vda6 Jul 10 00:37:05.709717 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:37:05.709869 systemd[1]: Finished motdgen.service. Jul 10 00:37:05.710551 extend-filesystems[1192]: Found vda7 Jul 10 00:37:05.710551 extend-filesystems[1192]: Found vda9 Jul 10 00:37:05.710551 extend-filesystems[1192]: Checking size of /dev/vda9 Jul 10 00:37:05.722601 jq[1211]: true Jul 10 00:37:05.724663 dbus-daemon[1190]: [system] SELinux support is enabled Jul 10 00:37:05.724821 systemd[1]: Started dbus.service. Jul 10 00:37:05.727425 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:37:05.727458 systemd[1]: Reached target system-config.target. Jul 10 00:37:05.728277 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:37:05.728292 systemd[1]: Reached target user-config.target. Jul 10 00:37:05.742149 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:37:05.742355 systemd-logind[1201]: New seat seat0. Jul 10 00:37:05.742906 extend-filesystems[1192]: Resized partition /dev/vda9 Jul 10 00:37:05.744914 extend-filesystems[1237]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:37:05.753033 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:37:05.755541 systemd[1]: Started systemd-logind.service. Jul 10 00:37:05.781601 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:37:05.788073 update_engine[1204]: I0710 00:37:05.785699 1204 main.cc:92] Flatcar Update Engine starting Jul 10 00:37:05.793426 extend-filesystems[1237]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:37:05.793426 extend-filesystems[1237]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:37:05.793426 extend-filesystems[1237]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:37:05.804273 extend-filesystems[1192]: Resized filesystem in /dev/vda9 Jul 10 00:37:05.795010 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:37:05.805028 bash[1238]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:37:05.805117 update_engine[1204]: I0710 00:37:05.797537 1204 update_check_scheduler.cc:74] Next update check in 3m38s Jul 10 00:37:05.795180 systemd[1]: Finished extend-filesystems.service. Jul 10 00:37:05.797619 systemd[1]: Started update-engine.service. Jul 10 00:37:05.800499 systemd[1]: Started locksmithd.service. Jul 10 00:37:05.802316 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
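
extend-filesystems.service enumerates the disk (the 'Found' lines) and then grows the root filesystem: resize2fs reports an on-line resize of the mounted /dev/vda9, and the kernel confirms EXT4 grew from 553472 to 1864699 4k blocks. A sketch of doing the same check and grow by hand; device names are taken from this log, and the commented growpart step assumes the cloud-utils tool and is only needed when the partition itself must be enlarged first.

# Sketch of an on-line ext4 grow, mirroring what extend-filesystems.service did for /dev/vda9.
df -h /                          # current size of the root filesystem
lsblk /dev/vda                   # partition layout; vda9 holds / on this host

# growpart /dev/vda 9            # only if the partition is smaller than the disk (assumes cloud-utils)

resize2fs /dev/vda9              # ext4 can be grown while mounted ("on-line resizing required")
dmesg | grep 'EXT4-fs (vda9)'    # kernel log line: "resized filesystem to 1864699"
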
Jul 10 00:37:05.810264 env[1212]: time="2025-07-10T00:37:05.810209282Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:37:05.828764 env[1212]: time="2025-07-10T00:37:05.828703522Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:37:05.829018 env[1212]: time="2025-07-10T00:37:05.828992242Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830307 env[1212]: time="2025-07-10T00:37:05.830269402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830307 env[1212]: time="2025-07-10T00:37:05.830303442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830561 env[1212]: time="2025-07-10T00:37:05.830517002Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830561 env[1212]: time="2025-07-10T00:37:05.830547442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830647 env[1212]: time="2025-07-10T00:37:05.830562082Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:37:05.830647 env[1212]: time="2025-07-10T00:37:05.830586082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.830685 env[1212]: time="2025-07-10T00:37:05.830659882Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.831019 env[1212]: time="2025-07-10T00:37:05.830990562Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:37:05.831143 env[1212]: time="2025-07-10T00:37:05.831121882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:37:05.831173 env[1212]: time="2025-07-10T00:37:05.831143802Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:37:05.831219 env[1212]: time="2025-07-10T00:37:05.831199682Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:37:05.831253 env[1212]: time="2025-07-10T00:37:05.831218362Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:37:05.835321 env[1212]: time="2025-07-10T00:37:05.835197642Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:37:05.835363 env[1212]: time="2025-07-10T00:37:05.835324482Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 10 00:37:05.835363 env[1212]: time="2025-07-10T00:37:05.835344482Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:37:05.835402 env[1212]: time="2025-07-10T00:37:05.835375522Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835433 env[1212]: time="2025-07-10T00:37:05.835390682Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835433 env[1212]: time="2025-07-10T00:37:05.835417242Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835433 env[1212]: time="2025-07-10T00:37:05.835430962Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835870 env[1212]: time="2025-07-10T00:37:05.835843722Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835918 env[1212]: time="2025-07-10T00:37:05.835870282Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835918 env[1212]: time="2025-07-10T00:37:05.835884802Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835918 env[1212]: time="2025-07-10T00:37:05.835898762Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.835918 env[1212]: time="2025-07-10T00:37:05.835913322Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:37:05.836137 env[1212]: time="2025-07-10T00:37:05.836111722Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:37:05.836252 env[1212]: time="2025-07-10T00:37:05.836194202Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:37:05.836489 env[1212]: time="2025-07-10T00:37:05.836450922Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:37:05.836522 env[1212]: time="2025-07-10T00:37:05.836496722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836522 env[1212]: time="2025-07-10T00:37:05.836512002Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:37:05.836655 env[1212]: time="2025-07-10T00:37:05.836641242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836683 env[1212]: time="2025-07-10T00:37:05.836659082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836683 env[1212]: time="2025-07-10T00:37:05.836672362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836720 env[1212]: time="2025-07-10T00:37:05.836684442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836720 env[1212]: time="2025-07-10T00:37:05.836696762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 10 00:37:05.836767 env[1212]: time="2025-07-10T00:37:05.836722282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836767 env[1212]: time="2025-07-10T00:37:05.836734082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836767 env[1212]: time="2025-07-10T00:37:05.836745562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836767 env[1212]: time="2025-07-10T00:37:05.836759642Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:37:05.836900 env[1212]: time="2025-07-10T00:37:05.836884082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836923 env[1212]: time="2025-07-10T00:37:05.836905122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836923 env[1212]: time="2025-07-10T00:37:05.836918602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:37:05.836959 env[1212]: time="2025-07-10T00:37:05.836929642Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:37:05.836959 env[1212]: time="2025-07-10T00:37:05.836942522Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:37:05.836959 env[1212]: time="2025-07-10T00:37:05.836953642Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:37:05.837018 env[1212]: time="2025-07-10T00:37:05.836969202Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:37:05.837018 env[1212]: time="2025-07-10T00:37:05.837002242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:37:05.837244 env[1212]: time="2025-07-10T00:37:05.837181002Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:37:05.837828 env[1212]: time="2025-07-10T00:37:05.837243042Z" level=info msg="Connect containerd service" Jul 10 00:37:05.837828 env[1212]: time="2025-07-10T00:37:05.837271202Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:37:05.837917 env[1212]: time="2025-07-10T00:37:05.837892362Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:37:05.838105 env[1212]: time="2025-07-10T00:37:05.838078642Z" level=info msg="Start subscribing containerd event" Jul 10 00:37:05.838143 env[1212]: time="2025-07-10T00:37:05.838124482Z" level=info msg="Start recovering state" Jul 10 00:37:05.838198 env[1212]: time="2025-07-10T00:37:05.838179042Z" level=info msg="Start event monitor" Jul 10 00:37:05.838235 env[1212]: time="2025-07-10T00:37:05.838204922Z" level=info msg="Start snapshots syncer" Jul 10 00:37:05.838235 env[1212]: time="2025-07-10T00:37:05.838213802Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:37:05.838235 env[1212]: time="2025-07-10T00:37:05.838220882Z" level=info msg="Start streaming server" Jul 10 00:37:05.838542 env[1212]: time="2025-07-10T00:37:05.838497802Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 10 00:37:05.838594 env[1212]: time="2025-07-10T00:37:05.838580602Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:37:05.839523 env[1212]: time="2025-07-10T00:37:05.838626602Z" level=info msg="containerd successfully booted in 0.029304s" Jul 10 00:37:05.838698 systemd[1]: Started containerd.service. Jul 10 00:37:05.847921 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:37:06.161744 systemd-networkd[1041]: eth0: Gained IPv6LL Jul 10 00:37:06.163664 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:37:06.164653 systemd[1]: Reached target network-online.target. Jul 10 00:37:06.166743 systemd[1]: Starting kubelet.service... Jul 10 00:37:06.746740 systemd[1]: Started kubelet.service. Jul 10 00:37:07.175429 kubelet[1254]: E0710 00:37:07.175323 1254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:37:07.177405 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:37:07.177537 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:37:07.840782 sshd_keygen[1205]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:37:07.858324 systemd[1]: Finished sshd-keygen.service. Jul 10 00:37:07.860543 systemd[1]: Starting issuegen.service... Jul 10 00:37:07.865629 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:37:07.865797 systemd[1]: Finished issuegen.service. Jul 10 00:37:07.867923 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:37:07.874350 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:37:07.876617 systemd[1]: Started getty@tty1.service. Jul 10 00:37:07.878863 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 10 00:37:07.879930 systemd[1]: Reached target getty.target. Jul 10 00:37:07.880771 systemd[1]: Reached target multi-user.target. Jul 10 00:37:07.883024 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:37:07.889835 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:37:07.890008 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:37:07.891170 systemd[1]: Startup finished in 569ms (kernel) + 4.439s (initrd) + 5.456s (userspace) = 10.464s. Jul 10 00:37:10.612646 systemd[1]: Created slice system-sshd.slice. Jul 10 00:37:10.613731 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:37174.service. Jul 10 00:37:10.677338 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 37174 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:10.679859 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:10.688181 systemd[1]: Created slice user-500.slice. Jul 10 00:37:10.689354 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 00:37:10.690940 systemd-logind[1201]: New session 1 of user core. Jul 10 00:37:10.697417 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:37:10.698713 systemd[1]: Starting user@500.service... 
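
The long "Start cri plugin with config {...}" dump above is containerd's effective CRI configuration: overlayfs snapshotter, registry.k8s.io/pause:3.6 as the sandbox image, CNI config expected under /etc/cni/net.d, and SystemdCgroup:true for the runc runtime. Below is a hedged sketch of the /etc/containerd/config.toml fragment that yields those values on containerd 1.6 (config schema version 2); the file itself is not shown in the log, so treat it as illustrative rather than this host's actual configuration.

# Illustrative /etc/containerd/config.toml for containerd 1.6; values mirror the logged CRI config.
cat <<'EOF' >/etc/containerd/config.toml
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true

[plugins."io.containerd.grpc.v1.cri".cni]
  bin_dir  = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
EOF

systemctl restart containerd.service
ctr --address /run/containerd/containerd.sock version   # the socket the log reports serving on
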
Jul 10 00:37:10.701555 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:10.759823 systemd[1279]: Queued start job for default target default.target. Jul 10 00:37:10.760634 systemd[1279]: Reached target paths.target. Jul 10 00:37:10.760673 systemd[1279]: Reached target sockets.target. Jul 10 00:37:10.760685 systemd[1279]: Reached target timers.target. Jul 10 00:37:10.760695 systemd[1279]: Reached target basic.target. Jul 10 00:37:10.760741 systemd[1279]: Reached target default.target. Jul 10 00:37:10.760765 systemd[1279]: Startup finished in 53ms. Jul 10 00:37:10.760819 systemd[1]: Started user@500.service. Jul 10 00:37:10.761736 systemd[1]: Started session-1.scope. Jul 10 00:37:10.813778 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:37180.service. Jul 10 00:37:10.854810 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 37180 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:10.856088 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:10.860273 systemd-logind[1201]: New session 2 of user core. Jul 10 00:37:10.861405 systemd[1]: Started session-2.scope. Jul 10 00:37:10.915761 sshd[1288]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:10.919740 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:37194.service. Jul 10 00:37:10.920226 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:37180.service: Deactivated successfully. Jul 10 00:37:10.920906 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:37:10.921439 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:37:10.922481 systemd-logind[1201]: Removed session 2. Jul 10 00:37:10.960922 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 37194 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:10.962145 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:10.965460 systemd-logind[1201]: New session 3 of user core. Jul 10 00:37:10.966256 systemd[1]: Started session-3.scope. Jul 10 00:37:11.015293 sshd[1293]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:11.017795 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:37194.service: Deactivated successfully. Jul 10 00:37:11.018318 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:37:11.019026 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:37:11.020005 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:37206.service. Jul 10 00:37:11.020676 systemd-logind[1201]: Removed session 3. Jul 10 00:37:11.060602 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 37206 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:11.061746 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:11.065618 systemd-logind[1201]: New session 4 of user core. Jul 10 00:37:11.066024 systemd[1]: Started session-4.scope. Jul 10 00:37:11.120593 sshd[1300]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:11.124211 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:37218.service. Jul 10 00:37:11.126131 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:37206.service: Deactivated successfully. Jul 10 00:37:11.126742 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:37:11.127235 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:37:11.127806 systemd-logind[1201]: Removed session 4. 
Jul 10 00:37:11.165624 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 37218 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:37:11.166778 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:37:11.171504 systemd-logind[1201]: New session 5 of user core. Jul 10 00:37:11.172424 systemd[1]: Started session-5.scope. Jul 10 00:37:11.246637 sudo[1309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:37:11.246856 sudo[1309]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:37:11.259162 systemd[1]: Starting coreos-metadata.service... Jul 10 00:37:11.266107 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:37:11.266270 systemd[1]: Finished coreos-metadata.service. Jul 10 00:37:11.742887 systemd[1]: Stopped kubelet.service. Jul 10 00:37:11.744969 systemd[1]: Starting kubelet.service... Jul 10 00:37:11.765793 systemd[1]: Reloading. Jul 10 00:37:11.828459 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-07-10T00:37:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:37:11.828786 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-07-10T00:37:11Z" level=info msg="torcx already run" Jul 10 00:37:11.976724 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:37:11.976878 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:37:11.992320 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:37:12.064953 systemd[1]: Started kubelet.service. Jul 10 00:37:12.068323 systemd[1]: Stopping kubelet.service... Jul 10 00:37:12.068775 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:37:12.069042 systemd[1]: Stopped kubelet.service. Jul 10 00:37:12.070810 systemd[1]: Starting kubelet.service... Jul 10 00:37:12.170416 systemd[1]: Started kubelet.service. Jul 10 00:37:12.208603 kubelet[1416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:37:12.208603 kubelet[1416]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:37:12.208603 kubelet[1416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
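
The earlier kubelet failure (missing /var/lib/kubelet/config.yaml) and the deprecation warnings at the end of this restart point at the same fix: most settings belong in the file passed to --config rather than on the command line. A minimal sketch of such a KubeletConfiguration follows, assuming containerd as the runtime; the cgroup driver and socket match the containerd settings logged above, the rest is illustrative, and a real node would add authentication, TLS and eviction settings.

# Minimal illustrative /var/lib/kubelet/config.yaml; not the file this node eventually receives.
cat <<'EOF' >/var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                                             # matches SystemdCgroup = true in containerd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF

systemctl restart kubelet.service
journalctl -u kubelet -n 20 --no-pager   # the "failed to load kubelet config file" error should be gone
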
Jul 10 00:37:12.208925 kubelet[1416]: I0710 00:37:12.208676 1416 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:37:12.770059 kubelet[1416]: I0710 00:37:12.770014 1416 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:37:12.770059 kubelet[1416]: I0710 00:37:12.770046 1416 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:37:12.770286 kubelet[1416]: I0710 00:37:12.770258 1416 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:37:12.835284 kubelet[1416]: I0710 00:37:12.835246 1416 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:37:12.848232 kubelet[1416]: E0710 00:37:12.848195 1416 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:37:12.848232 kubelet[1416]: I0710 00:37:12.848227 1416 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:37:12.850736 kubelet[1416]: I0710 00:37:12.850718 1416 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:37:12.851918 kubelet[1416]: I0710 00:37:12.851875 1416 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:37:12.852077 kubelet[1416]: I0710 00:37:12.851924 1416 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:37:12.852167 kubelet[1416]: I0710 00:37:12.852158 1416 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:37:12.852213 kubelet[1416]: I0710 00:37:12.852169 1416 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:37:12.852385 
kubelet[1416]: I0710 00:37:12.852372 1416 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:37:12.861972 kubelet[1416]: I0710 00:37:12.861937 1416 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:37:12.861972 kubelet[1416]: I0710 00:37:12.861969 1416 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:37:12.862838 kubelet[1416]: I0710 00:37:12.862822 1416 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:37:12.862906 kubelet[1416]: I0710 00:37:12.862851 1416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:37:12.863050 kubelet[1416]: E0710 00:37:12.862999 1416 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:12.863149 kubelet[1416]: E0710 00:37:12.863134 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:12.864021 kubelet[1416]: I0710 00:37:12.864001 1416 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:37:12.864814 kubelet[1416]: I0710 00:37:12.864794 1416 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:37:12.864987 kubelet[1416]: W0710 00:37:12.864976 1416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:37:12.867367 kubelet[1416]: I0710 00:37:12.867350 1416 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:37:12.867474 kubelet[1416]: I0710 00:37:12.867462 1416 server.go:1289] "Started kubelet" Jul 10 00:37:12.868064 kubelet[1416]: I0710 00:37:12.867902 1416 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:37:12.869083 kubelet[1416]: I0710 00:37:12.869063 1416 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:37:12.869565 kubelet[1416]: I0710 00:37:12.869540 1416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:37:12.869625 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
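
Per the lines above, the kubelet registers /etc/kubernetes/manifests as its static pod path and merely notes that the directory does not exist yet. For reference, a static pod is any manifest dropped into that directory: the kubelet runs it directly and mirrors it to the API server with the node name appended. The pod name and image below are purely illustrative.

# Illustrative static pod; the kubelet picks up anything written under its staticPodPath.
mkdir -p /etc/kubernetes/manifests
cat <<'EOF' >/etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web            # mirrored to the API server as static-web-10.0.0.92 on this node
  namespace: default
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
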
Jul 10 00:37:12.870021 kubelet[1416]: I0710 00:37:12.869998 1416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:37:12.872038 kubelet[1416]: E0710 00:37:12.872006 1416 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 10 00:37:12.872038 kubelet[1416]: I0710 00:37:12.872039 1416 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:37:12.872250 kubelet[1416]: I0710 00:37:12.872224 1416 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:37:12.872289 kubelet[1416]: I0710 00:37:12.872282 1416 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:37:12.875064 kubelet[1416]: E0710 00:37:12.875017 1416 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:37:12.875664 kubelet[1416]: I0710 00:37:12.875633 1416 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:37:12.875814 kubelet[1416]: I0710 00:37:12.875790 1416 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:37:12.875979 kubelet[1416]: I0710 00:37:12.867938 1416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:37:12.876192 kubelet[1416]: I0710 00:37:12.876178 1416 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:37:12.877773 kubelet[1416]: I0710 00:37:12.877748 1416 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:37:12.878060 kubelet[1416]: E0710 00:37:12.878032 1416 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:37:12.884328 kubelet[1416]: E0710 00:37:12.884284 1416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.92\" not found" node="10.0.0.92" Jul 10 00:37:12.902346 kubelet[1416]: I0710 00:37:12.902324 1416 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:37:12.902524 kubelet[1416]: I0710 00:37:12.902463 1416 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:37:12.902610 kubelet[1416]: I0710 00:37:12.902600 1416 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:37:12.972991 kubelet[1416]: E0710 00:37:12.972952 1416 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" Jul 10 00:37:12.985204 kubelet[1416]: I0710 00:37:12.985179 1416 policy_none.go:49] "None policy: Start" Jul 10 00:37:12.985314 kubelet[1416]: I0710 00:37:12.985212 1416 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:37:12.985314 kubelet[1416]: I0710 00:37:12.985224 1416 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:37:12.989491 systemd[1]: Created slice kubepods.slice. Jul 10 00:37:12.993490 systemd[1]: Created slice kubepods-burstable.slice. 
Jul 10 00:37:12.995928 systemd[1]: Created slice kubepods-besteffort.slice. Jul 10 00:37:13.005453 kubelet[1416]: E0710 00:37:13.005411 1416 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:37:13.005623 kubelet[1416]: I0710 00:37:13.005598 1416 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:37:13.005664 kubelet[1416]: I0710 00:37:13.005618 1416 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:37:13.006855 kubelet[1416]: I0710 00:37:13.005875 1416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:37:13.007254 kubelet[1416]: E0710 00:37:13.007216 1416 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:37:13.007254 kubelet[1416]: E0710 00:37:13.007255 1416 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.92\" not found" Jul 10 00:37:13.041503 kubelet[1416]: I0710 00:37:13.041389 1416 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:37:13.042411 kubelet[1416]: I0710 00:37:13.042389 1416 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:37:13.042533 kubelet[1416]: I0710 00:37:13.042517 1416 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:37:13.042631 kubelet[1416]: I0710 00:37:13.042619 1416 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:37:13.042703 kubelet[1416]: I0710 00:37:13.042694 1416 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:37:13.042934 kubelet[1416]: E0710 00:37:13.042801 1416 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 10 00:37:13.106372 kubelet[1416]: I0710 00:37:13.106327 1416 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.92" Jul 10 00:37:13.111159 kubelet[1416]: I0710 00:37:13.111125 1416 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.92" Jul 10 00:37:13.205264 sudo[1309]: pam_unix(sudo:session): session closed for user root Jul 10 00:37:13.207776 sshd[1305]: pam_unix(sshd:session): session closed for user core Jul 10 00:37:13.210159 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:37:13.210814 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:37218.service: Deactivated successfully. Jul 10 00:37:13.211608 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:37:13.212200 systemd-logind[1201]: Removed session 5. Jul 10 00:37:13.219352 kubelet[1416]: I0710 00:37:13.219321 1416 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 10 00:37:13.219719 env[1212]: time="2025-07-10T00:37:13.219661802Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
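
containerd reported earlier that no network config was found in /etc/cni/net.d, and the kubelet has just pushed pod CIDR 192.168.1.0/24 through CRI; the "wait for other system components to drop the config" message means the CNI provider (the cilium pod being set up below) is expected to write that file. For illustration only, a minimal bridge-plugin conflist of the shape containerd accepts, using the logged pod CIDR and assuming the reference CNI plugins are installed under /opt/cni/bin; this is not what Cilium actually installs.

# Illustrative /etc/cni/net.d/10-bridge.conflist; on this host Cilium later provides its own config.
cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
{
  "cniVersion": "0.4.0",
  "name": "podnet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.1.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
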
Jul 10 00:37:13.220077 kubelet[1416]: I0710 00:37:13.220060 1416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 10 00:37:13.771780 kubelet[1416]: I0710 00:37:13.771713 1416 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 10 00:37:13.771912 kubelet[1416]: I0710 00:37:13.771890 1416 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 10 00:37:13.771961 kubelet[1416]: I0710 00:37:13.771920 1416 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" Jul 10 00:37:13.863186 kubelet[1416]: I0710 00:37:13.863145 1416 apiserver.go:52] "Watching apiserver" Jul 10 00:37:13.863429 kubelet[1416]: E0710 00:37:13.863393 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:13.873617 systemd[1]: Created slice kubepods-burstable-podb2afdc61_73b4_43b0_8ead_4b40bb59fd3f.slice. Jul 10 00:37:13.875536 kubelet[1416]: I0710 00:37:13.875511 1416 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 10 00:37:13.878053 kubelet[1416]: I0710 00:37:13.878017 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/80fdaf91-a92f-4693-837a-c0dc3e9bd9c6-kube-proxy\") pod \"kube-proxy-8tq8p\" (UID: \"80fdaf91-a92f-4693-837a-c0dc3e9bd9c6\") " pod="kube-system/kube-proxy-8tq8p" Jul 10 00:37:13.878266 kubelet[1416]: I0710 00:37:13.878246 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80fdaf91-a92f-4693-837a-c0dc3e9bd9c6-xtables-lock\") pod \"kube-proxy-8tq8p\" (UID: \"80fdaf91-a92f-4693-837a-c0dc3e9bd9c6\") " pod="kube-system/kube-proxy-8tq8p" Jul 10 00:37:13.878350 kubelet[1416]: I0710 00:37:13.878337 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80fdaf91-a92f-4693-837a-c0dc3e9bd9c6-lib-modules\") pod \"kube-proxy-8tq8p\" (UID: \"80fdaf91-a92f-4693-837a-c0dc3e9bd9c6\") " pod="kube-system/kube-proxy-8tq8p" Jul 10 00:37:13.878426 kubelet[1416]: I0710 00:37:13.878411 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnwjv\" (UniqueName: \"kubernetes.io/projected/80fdaf91-a92f-4693-837a-c0dc3e9bd9c6-kube-api-access-tnwjv\") pod \"kube-proxy-8tq8p\" (UID: \"80fdaf91-a92f-4693-837a-c0dc3e9bd9c6\") " pod="kube-system/kube-proxy-8tq8p" Jul 10 00:37:13.878516 kubelet[1416]: I0710 00:37:13.878501 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xvnw\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-kube-api-access-5xvnw\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.878609 kubelet[1416]: I0710 00:37:13.878593 1416 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-bpf-maps\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.878699 kubelet[1416]: I0710 00:37:13.878685 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hostproc\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.878779 kubelet[1416]: I0710 00:37:13.878759 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-xtables-lock\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.878856 kubelet[1416]: I0710 00:37:13.878842 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-net\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.878932 kubelet[1416]: I0710 00:37:13.878918 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hubble-tls\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879013 kubelet[1416]: I0710 00:37:13.878999 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-run\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879145 kubelet[1416]: I0710 00:37:13.879075 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-cgroup\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879226 kubelet[1416]: I0710 00:37:13.879212 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cni-path\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879293 kubelet[1416]: I0710 00:37:13.879280 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-clustermesh-secrets\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879367 kubelet[1416]: I0710 00:37:13.879354 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-config-path\") pod 
\"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879441 kubelet[1416]: I0710 00:37:13.879425 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-etc-cni-netd\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879535 kubelet[1416]: I0710 00:37:13.879521 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-lib-modules\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.879641 kubelet[1416]: I0710 00:37:13.879628 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-kernel\") pod \"cilium-vp6j4\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " pod="kube-system/cilium-vp6j4" Jul 10 00:37:13.884500 systemd[1]: Created slice kubepods-besteffort-pod80fdaf91_a92f_4693_837a_c0dc3e9bd9c6.slice. Jul 10 00:37:13.981715 kubelet[1416]: I0710 00:37:13.981654 1416 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:37:14.183606 kubelet[1416]: E0710 00:37:14.183461 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:14.184962 env[1212]: time="2025-07-10T00:37:14.184892162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vp6j4,Uid:b2afdc61-73b4-43b0-8ead-4b40bb59fd3f,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:14.195583 kubelet[1416]: E0710 00:37:14.195545 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:14.196402 env[1212]: time="2025-07-10T00:37:14.196111722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tq8p,Uid:80fdaf91-a92f-4693-837a-c0dc3e9bd9c6,Namespace:kube-system,Attempt:0,}" Jul 10 00:37:14.729173 env[1212]: time="2025-07-10T00:37:14.729117922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.730138 env[1212]: time="2025-07-10T00:37:14.730097522Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.733165 env[1212]: time="2025-07-10T00:37:14.733132402Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.734617 env[1212]: time="2025-07-10T00:37:14.734595002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 
00:37:14.736761 env[1212]: time="2025-07-10T00:37:14.736737402Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.739076 env[1212]: time="2025-07-10T00:37:14.739050042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.740854 env[1212]: time="2025-07-10T00:37:14.740819242Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.741662 env[1212]: time="2025-07-10T00:37:14.741640482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:14.757186 env[1212]: time="2025-07-10T00:37:14.757126202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:14.757186 env[1212]: time="2025-07-10T00:37:14.757168762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:14.757286 env[1212]: time="2025-07-10T00:37:14.757185122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:14.757402 env[1212]: time="2025-07-10T00:37:14.757375162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a7ddcf955260eb9202b4d97a6d414196024528d65e4291ad8aa3946ffd6af2 pid=1486 runtime=io.containerd.runc.v2 Jul 10 00:37:14.757783 env[1212]: time="2025-07-10T00:37:14.757743282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:14.757865 env[1212]: time="2025-07-10T00:37:14.757840762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:14.757945 env[1212]: time="2025-07-10T00:37:14.757857962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:14.758131 env[1212]: time="2025-07-10T00:37:14.758101722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc pid=1485 runtime=io.containerd.runc.v2 Jul 10 00:37:14.787456 systemd[1]: Started cri-containerd-c0a7ddcf955260eb9202b4d97a6d414196024528d65e4291ad8aa3946ffd6af2.scope. Jul 10 00:37:14.790357 systemd[1]: Started cri-containerd-c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc.scope. 
Jul 10 00:37:14.827303 env[1212]: time="2025-07-10T00:37:14.826521002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vp6j4,Uid:b2afdc61-73b4-43b0-8ead-4b40bb59fd3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\"" Jul 10 00:37:14.828192 kubelet[1416]: E0710 00:37:14.827729 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:14.831124 env[1212]: time="2025-07-10T00:37:14.831070562Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:37:14.833832 env[1212]: time="2025-07-10T00:37:14.833801602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8tq8p,Uid:80fdaf91-a92f-4693-837a-c0dc3e9bd9c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0a7ddcf955260eb9202b4d97a6d414196024528d65e4291ad8aa3946ffd6af2\"" Jul 10 00:37:14.834592 kubelet[1416]: E0710 00:37:14.834431 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:14.863817 kubelet[1416]: E0710 00:37:14.863765 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:14.987751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2212393081.mount: Deactivated successfully. Jul 10 00:37:15.864165 kubelet[1416]: E0710 00:37:15.864125 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:16.864964 kubelet[1416]: E0710 00:37:16.864919 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:17.865438 kubelet[1416]: E0710 00:37:17.865391 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:18.148275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount734424857.mount: Deactivated successfully. 
Jul 10 00:37:18.866064 kubelet[1416]: E0710 00:37:18.866006 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:19.866961 kubelet[1416]: E0710 00:37:19.866894 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:20.383587 env[1212]: time="2025-07-10T00:37:20.383533562Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:20.385794 env[1212]: time="2025-07-10T00:37:20.385757202Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:20.388005 env[1212]: time="2025-07-10T00:37:20.387960482Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:20.388243 env[1212]: time="2025-07-10T00:37:20.388203842Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:37:20.389905 env[1212]: time="2025-07-10T00:37:20.389766042Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:37:20.392313 env[1212]: time="2025-07-10T00:37:20.392279602Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:37:20.406559 env[1212]: time="2025-07-10T00:37:20.406523362Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\"" Jul 10 00:37:20.409084 env[1212]: time="2025-07-10T00:37:20.409057322Z" level=info msg="StartContainer for \"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\"" Jul 10 00:37:20.427212 systemd[1]: Started cri-containerd-59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21.scope. Jul 10 00:37:20.465616 env[1212]: time="2025-07-10T00:37:20.465568162Z" level=info msg="StartContainer for \"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\" returns successfully" Jul 10 00:37:20.503002 systemd[1]: cri-containerd-59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21.scope: Deactivated successfully. 
Jul 10 00:37:20.608324 env[1212]: time="2025-07-10T00:37:20.608278802Z" level=info msg="shim disconnected" id=59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21 Jul 10 00:37:20.608324 env[1212]: time="2025-07-10T00:37:20.608322442Z" level=warning msg="cleaning up after shim disconnected" id=59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21 namespace=k8s.io Jul 10 00:37:20.608324 env[1212]: time="2025-07-10T00:37:20.608331602Z" level=info msg="cleaning up dead shim" Jul 10 00:37:20.615277 env[1212]: time="2025-07-10T00:37:20.615227682Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1600 runtime=io.containerd.runc.v2\n" Jul 10 00:37:20.867689 kubelet[1416]: E0710 00:37:20.867563 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:21.058943 kubelet[1416]: E0710 00:37:21.058911 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:21.062059 env[1212]: time="2025-07-10T00:37:21.062014042Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:37:21.088389 env[1212]: time="2025-07-10T00:37:21.088340602Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\"" Jul 10 00:37:21.089837 env[1212]: time="2025-07-10T00:37:21.089593562Z" level=info msg="StartContainer for \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\"" Jul 10 00:37:21.117601 systemd[1]: Started cri-containerd-de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0.scope. Jul 10 00:37:21.187637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:37:21.187831 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:37:21.188018 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:37:21.189525 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:37:21.190546 systemd[1]: cri-containerd-de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0.scope: Deactivated successfully. Jul 10 00:37:21.193074 env[1212]: time="2025-07-10T00:37:21.191630602Z" level=info msg="StartContainer for \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\" returns successfully" Jul 10 00:37:21.198497 systemd[1]: Finished systemd-sysctl.service. 
Jul 10 00:37:21.220481 env[1212]: time="2025-07-10T00:37:21.220416162Z" level=info msg="shim disconnected" id=de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0 Jul 10 00:37:21.220481 env[1212]: time="2025-07-10T00:37:21.220471002Z" level=warning msg="cleaning up after shim disconnected" id=de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0 namespace=k8s.io Jul 10 00:37:21.220481 env[1212]: time="2025-07-10T00:37:21.220480842Z" level=info msg="cleaning up dead shim" Jul 10 00:37:21.226861 env[1212]: time="2025-07-10T00:37:21.226811762Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1664 runtime=io.containerd.runc.v2\n" Jul 10 00:37:21.403031 systemd[1]: run-containerd-runc-k8s.io-59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21-runc.OuniDT.mount: Deactivated successfully. Jul 10 00:37:21.403129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21-rootfs.mount: Deactivated successfully. Jul 10 00:37:21.495016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3422438760.mount: Deactivated successfully. Jul 10 00:37:21.868223 kubelet[1416]: E0710 00:37:21.868107 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:21.993721 env[1212]: time="2025-07-10T00:37:21.993669042Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:21.995137 env[1212]: time="2025-07-10T00:37:21.995099042Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:21.998510 env[1212]: time="2025-07-10T00:37:21.998423802Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:22.000147 env[1212]: time="2025-07-10T00:37:22.000114722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:22.000992 env[1212]: time="2025-07-10T00:37:22.000910282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 00:37:22.005000 env[1212]: time="2025-07-10T00:37:22.004864282Z" level=info msg="CreateContainer within sandbox \"c0a7ddcf955260eb9202b4d97a6d414196024528d65e4291ad8aa3946ffd6af2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:37:22.017472 env[1212]: time="2025-07-10T00:37:22.017423882Z" level=info msg="CreateContainer within sandbox \"c0a7ddcf955260eb9202b4d97a6d414196024528d65e4291ad8aa3946ffd6af2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5fe1bd2e00ba23fd5016143285b519c1ac7944bfed44d58bcd478e00b7c10ec\"" Jul 10 00:37:22.018205 env[1212]: time="2025-07-10T00:37:22.018131442Z" level=info msg="StartContainer for \"d5fe1bd2e00ba23fd5016143285b519c1ac7944bfed44d58bcd478e00b7c10ec\"" Jul 10 00:37:22.034084 systemd[1]: Started 
cri-containerd-d5fe1bd2e00ba23fd5016143285b519c1ac7944bfed44d58bcd478e00b7c10ec.scope. Jul 10 00:37:22.062177 kubelet[1416]: E0710 00:37:22.062132 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:22.067160 env[1212]: time="2025-07-10T00:37:22.067095882Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:37:22.090060 env[1212]: time="2025-07-10T00:37:22.089152042Z" level=info msg="StartContainer for \"d5fe1bd2e00ba23fd5016143285b519c1ac7944bfed44d58bcd478e00b7c10ec\" returns successfully" Jul 10 00:37:22.094049 env[1212]: time="2025-07-10T00:37:22.093990802Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\"" Jul 10 00:37:22.096188 env[1212]: time="2025-07-10T00:37:22.094427442Z" level=info msg="StartContainer for \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\"" Jul 10 00:37:22.114101 systemd[1]: Started cri-containerd-5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41.scope. Jul 10 00:37:22.169330 env[1212]: time="2025-07-10T00:37:22.169159602Z" level=info msg="StartContainer for \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\" returns successfully" Jul 10 00:37:22.181396 systemd[1]: cri-containerd-5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41.scope: Deactivated successfully. Jul 10 00:37:22.318544 env[1212]: time="2025-07-10T00:37:22.318493602Z" level=info msg="shim disconnected" id=5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41 Jul 10 00:37:22.318791 env[1212]: time="2025-07-10T00:37:22.318772122Z" level=warning msg="cleaning up after shim disconnected" id=5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41 namespace=k8s.io Jul 10 00:37:22.318866 env[1212]: time="2025-07-10T00:37:22.318852762Z" level=info msg="cleaning up dead shim" Jul 10 00:37:22.325950 env[1212]: time="2025-07-10T00:37:22.325906442Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1774 runtime=io.containerd.runc.v2\n" Jul 10 00:37:22.868912 kubelet[1416]: E0710 00:37:22.868871 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:23.071546 kubelet[1416]: E0710 00:37:23.071504 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:23.073360 kubelet[1416]: E0710 00:37:23.073334 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:23.077462 env[1212]: time="2025-07-10T00:37:23.077419642Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:37:23.090730 env[1212]: time="2025-07-10T00:37:23.090685882Z" level=info msg="CreateContainer within sandbox 
\"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\"" Jul 10 00:37:23.091329 env[1212]: time="2025-07-10T00:37:23.091301042Z" level=info msg="StartContainer for \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\"" Jul 10 00:37:23.107543 systemd[1]: Started cri-containerd-434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782.scope. Jul 10 00:37:23.138301 systemd[1]: cri-containerd-434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782.scope: Deactivated successfully. Jul 10 00:37:23.139389 env[1212]: time="2025-07-10T00:37:23.139277322Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2afdc61_73b4_43b0_8ead_4b40bb59fd3f.slice/cri-containerd-434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782.scope/memory.events\": no such file or directory" Jul 10 00:37:23.141242 env[1212]: time="2025-07-10T00:37:23.141200402Z" level=info msg="StartContainer for \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\" returns successfully" Jul 10 00:37:23.159609 env[1212]: time="2025-07-10T00:37:23.159559602Z" level=info msg="shim disconnected" id=434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782 Jul 10 00:37:23.159609 env[1212]: time="2025-07-10T00:37:23.159609882Z" level=warning msg="cleaning up after shim disconnected" id=434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782 namespace=k8s.io Jul 10 00:37:23.159833 env[1212]: time="2025-07-10T00:37:23.159619722Z" level=info msg="cleaning up dead shim" Jul 10 00:37:23.166486 env[1212]: time="2025-07-10T00:37:23.166429682Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1948 runtime=io.containerd.runc.v2\n" Jul 10 00:37:23.402909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782-rootfs.mount: Deactivated successfully. 
Jul 10 00:37:23.869419 kubelet[1416]: E0710 00:37:23.869275 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:24.076876 kubelet[1416]: E0710 00:37:24.076844 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:24.077597 kubelet[1416]: E0710 00:37:24.077558 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:24.081374 env[1212]: time="2025-07-10T00:37:24.081332762Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:37:24.097217 kubelet[1416]: I0710 00:37:24.097129 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8tq8p" podStartSLOduration=3.930186322 podStartE2EDuration="11.097112882s" podCreationTimestamp="2025-07-10 00:37:13 +0000 UTC" firstStartedPulling="2025-07-10 00:37:14.834875882 +0000 UTC m=+2.660979041" lastFinishedPulling="2025-07-10 00:37:22.001802442 +0000 UTC m=+9.827905601" observedRunningTime="2025-07-10 00:37:23.096171202 +0000 UTC m=+10.922274361" watchObservedRunningTime="2025-07-10 00:37:24.097112882 +0000 UTC m=+11.923216001" Jul 10 00:37:24.099417 env[1212]: time="2025-07-10T00:37:24.099365882Z" level=info msg="CreateContainer within sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\"" Jul 10 00:37:24.100003 env[1212]: time="2025-07-10T00:37:24.099970242Z" level=info msg="StartContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\"" Jul 10 00:37:24.116521 systemd[1]: Started cri-containerd-50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d.scope. Jul 10 00:37:24.175755 env[1212]: time="2025-07-10T00:37:24.175652122Z" level=info msg="StartContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" returns successfully" Jul 10 00:37:24.321682 kubelet[1416]: I0710 00:37:24.321643 1416 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:37:24.450604 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 10 00:37:24.679608 kernel: Initializing XFRM netlink socket Jul 10 00:37:24.681597 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Jul 10 00:37:24.869946 kubelet[1416]: E0710 00:37:24.869885 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:25.080346 kubelet[1416]: E0710 00:37:25.080235 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:25.095217 kubelet[1416]: I0710 00:37:25.095092 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vp6j4" podStartSLOduration=6.536222122 podStartE2EDuration="12.095077722s" podCreationTimestamp="2025-07-10 00:37:13 +0000 UTC" firstStartedPulling="2025-07-10 00:37:14.830746002 +0000 UTC m=+2.656849161" lastFinishedPulling="2025-07-10 00:37:20.389601602 +0000 UTC m=+8.215704761" observedRunningTime="2025-07-10 00:37:25.094958042 +0000 UTC m=+12.921061201" watchObservedRunningTime="2025-07-10 00:37:25.095077722 +0000 UTC m=+12.921180881" Jul 10 00:37:25.870782 kubelet[1416]: E0710 00:37:25.870725 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:26.082356 kubelet[1416]: E0710 00:37:26.082325 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:26.298892 systemd-networkd[1041]: cilium_host: Link UP Jul 10 00:37:26.300289 systemd-networkd[1041]: cilium_net: Link UP Jul 10 00:37:26.303219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 10 00:37:26.303321 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:37:26.302970 systemd-networkd[1041]: cilium_net: Gained carrier Jul 10 00:37:26.303167 systemd-networkd[1041]: cilium_host: Gained carrier Jul 10 00:37:26.303264 systemd-networkd[1041]: cilium_net: Gained IPv6LL Jul 10 00:37:26.303377 systemd-networkd[1041]: cilium_host: Gained IPv6LL Jul 10 00:37:26.389027 systemd-networkd[1041]: cilium_vxlan: Link UP Jul 10 00:37:26.389034 systemd-networkd[1041]: cilium_vxlan: Gained carrier Jul 10 00:37:26.713605 kernel: NET: Registered PF_ALG protocol family Jul 10 00:37:26.871731 kubelet[1416]: E0710 00:37:26.871680 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:27.084015 kubelet[1416]: E0710 00:37:27.083875 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:27.315793 systemd-networkd[1041]: lxc_health: Link UP Jul 10 00:37:27.328195 systemd-networkd[1041]: lxc_health: Gained carrier Jul 10 00:37:27.328597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:37:27.872350 kubelet[1416]: E0710 00:37:27.872301 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:27.985722 systemd-networkd[1041]: cilium_vxlan: Gained IPv6LL Jul 10 00:37:28.344129 kubelet[1416]: E0710 00:37:28.344093 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:28.433728 systemd-networkd[1041]: lxc_health: Gained IPv6LL Jul 10 00:37:28.872821 kubelet[1416]: E0710 00:37:28.872788 1416 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:29.086970 kubelet[1416]: E0710 00:37:29.086938 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:29.095113 systemd[1]: Created slice kubepods-besteffort-pod705b6eaf_83bb_4a79_af51_1da2719ca4d4.slice. Jul 10 00:37:29.178734 kubelet[1416]: I0710 00:37:29.178687 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdfgl\" (UniqueName: \"kubernetes.io/projected/705b6eaf-83bb-4a79-af51-1da2719ca4d4-kube-api-access-wdfgl\") pod \"nginx-deployment-7fcdb87857-shm5q\" (UID: \"705b6eaf-83bb-4a79-af51-1da2719ca4d4\") " pod="default/nginx-deployment-7fcdb87857-shm5q" Jul 10 00:37:29.398078 env[1212]: time="2025-07-10T00:37:29.398015362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-shm5q,Uid:705b6eaf-83bb-4a79-af51-1da2719ca4d4,Namespace:default,Attempt:0,}" Jul 10 00:37:29.434204 systemd-networkd[1041]: lxc356c355ea66f: Link UP Jul 10 00:37:29.446599 kernel: eth0: renamed from tmp2cc04 Jul 10 00:37:29.454161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:37:29.454273 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc356c355ea66f: link becomes ready Jul 10 00:37:29.454395 systemd-networkd[1041]: lxc356c355ea66f: Gained carrier Jul 10 00:37:29.873752 kubelet[1416]: E0710 00:37:29.873621 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:30.088287 kubelet[1416]: E0710 00:37:30.087870 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:37:30.874481 kubelet[1416]: E0710 00:37:30.874443 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:31.057701 systemd-networkd[1041]: lxc356c355ea66f: Gained IPv6LL Jul 10 00:37:31.875807 kubelet[1416]: E0710 00:37:31.875761 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:31.953763 env[1212]: time="2025-07-10T00:37:31.953557922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:31.953763 env[1212]: time="2025-07-10T00:37:31.953616562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:31.953763 env[1212]: time="2025-07-10T00:37:31.953627242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:31.954133 env[1212]: time="2025-07-10T00:37:31.953825722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cc043ebe11b22fc84a9b1ece07642fd37678ad01131ffb86364a4d56dfee994 pid=2487 runtime=io.containerd.runc.v2 Jul 10 00:37:31.975740 systemd[1]: Started cri-containerd-2cc043ebe11b22fc84a9b1ece07642fd37678ad01131ffb86364a4d56dfee994.scope. 
Jul 10 00:37:32.043633 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:32.063433 env[1212]: time="2025-07-10T00:37:32.063380842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-shm5q,Uid:705b6eaf-83bb-4a79-af51-1da2719ca4d4,Namespace:default,Attempt:0,} returns sandbox id \"2cc043ebe11b22fc84a9b1ece07642fd37678ad01131ffb86364a4d56dfee994\"" Jul 10 00:37:32.064604 env[1212]: time="2025-07-10T00:37:32.064553722Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:37:32.863855 kubelet[1416]: E0710 00:37:32.863808 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:32.876454 kubelet[1416]: E0710 00:37:32.876419 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:33.876865 kubelet[1416]: E0710 00:37:33.876805 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:34.070754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3786773131.mount: Deactivated successfully. Jul 10 00:37:34.877618 kubelet[1416]: E0710 00:37:34.877560 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:35.314348 env[1212]: time="2025-07-10T00:37:35.314288002Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:35.316730 env[1212]: time="2025-07-10T00:37:35.316691562Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:35.318288 env[1212]: time="2025-07-10T00:37:35.318254882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:35.319931 env[1212]: time="2025-07-10T00:37:35.319895762Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:35.320732 env[1212]: time="2025-07-10T00:37:35.320698962Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:37:35.324055 env[1212]: time="2025-07-10T00:37:35.324015322Z" level=info msg="CreateContainer within sandbox \"2cc043ebe11b22fc84a9b1ece07642fd37678ad01131ffb86364a4d56dfee994\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 10 00:37:35.334056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880464385.mount: Deactivated successfully. Jul 10 00:37:35.338357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629211351.mount: Deactivated successfully. 
Jul 10 00:37:35.342119 env[1212]: time="2025-07-10T00:37:35.342073962Z" level=info msg="CreateContainer within sandbox \"2cc043ebe11b22fc84a9b1ece07642fd37678ad01131ffb86364a4d56dfee994\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"99ab7209215b27b42eef98acead5d2e704e1562b45a8f90334dfa9076a0bd93c\"" Jul 10 00:37:35.342764 env[1212]: time="2025-07-10T00:37:35.342723842Z" level=info msg="StartContainer for \"99ab7209215b27b42eef98acead5d2e704e1562b45a8f90334dfa9076a0bd93c\"" Jul 10 00:37:35.357039 systemd[1]: Started cri-containerd-99ab7209215b27b42eef98acead5d2e704e1562b45a8f90334dfa9076a0bd93c.scope. Jul 10 00:37:35.393473 env[1212]: time="2025-07-10T00:37:35.393418162Z" level=info msg="StartContainer for \"99ab7209215b27b42eef98acead5d2e704e1562b45a8f90334dfa9076a0bd93c\" returns successfully" Jul 10 00:37:35.877734 kubelet[1416]: E0710 00:37:35.877692 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:36.109978 kubelet[1416]: I0710 00:37:36.109916 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-shm5q" podStartSLOduration=3.852509762 podStartE2EDuration="7.109902282s" podCreationTimestamp="2025-07-10 00:37:29 +0000 UTC" firstStartedPulling="2025-07-10 00:37:32.064235682 +0000 UTC m=+19.890338841" lastFinishedPulling="2025-07-10 00:37:35.321628202 +0000 UTC m=+23.147731361" observedRunningTime="2025-07-10 00:37:36.108495962 +0000 UTC m=+23.934599201" watchObservedRunningTime="2025-07-10 00:37:36.109902282 +0000 UTC m=+23.936005441" Jul 10 00:37:36.878472 kubelet[1416]: E0710 00:37:36.878410 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:37.878619 kubelet[1416]: E0710 00:37:37.878543 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:38.879586 kubelet[1416]: E0710 00:37:38.879532 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:39.879823 kubelet[1416]: E0710 00:37:39.879781 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:40.662059 systemd[1]: Created slice kubepods-besteffort-pod6c0b36bc_9570_4b69_8c63_e743991a1b02.slice. 
Jul 10 00:37:40.740974 kubelet[1416]: I0710 00:37:40.740918 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6c0b36bc-9570-4b69-8c63-e743991a1b02-data\") pod \"nfs-server-provisioner-0\" (UID: \"6c0b36bc-9570-4b69-8c63-e743991a1b02\") " pod="default/nfs-server-provisioner-0" Jul 10 00:37:40.741256 kubelet[1416]: I0710 00:37:40.741239 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvktt\" (UniqueName: \"kubernetes.io/projected/6c0b36bc-9570-4b69-8c63-e743991a1b02-kube-api-access-gvktt\") pod \"nfs-server-provisioner-0\" (UID: \"6c0b36bc-9570-4b69-8c63-e743991a1b02\") " pod="default/nfs-server-provisioner-0" Jul 10 00:37:40.880581 kubelet[1416]: E0710 00:37:40.880545 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:40.964820 env[1212]: time="2025-07-10T00:37:40.964703911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c0b36bc-9570-4b69-8c63-e743991a1b02,Namespace:default,Attempt:0,}" Jul 10 00:37:41.004619 kernel: eth0: renamed from tmp3d7bd Jul 10 00:37:41.011827 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:37:41.011941 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc628375526a3a: link becomes ready Jul 10 00:37:41.011442 systemd-networkd[1041]: lxc628375526a3a: Link UP Jul 10 00:37:41.011930 systemd-networkd[1041]: lxc628375526a3a: Gained carrier Jul 10 00:37:41.190831 env[1212]: time="2025-07-10T00:37:41.190750749Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:41.190831 env[1212]: time="2025-07-10T00:37:41.190791790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:41.191054 env[1212]: time="2025-07-10T00:37:41.191019112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:41.191407 env[1212]: time="2025-07-10T00:37:41.191333435Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145 pid=2619 runtime=io.containerd.runc.v2 Jul 10 00:37:41.207566 systemd[1]: Started cri-containerd-3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145.scope. Jul 10 00:37:41.229702 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:41.245761 env[1212]: time="2025-07-10T00:37:41.245544726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6c0b36bc-9570-4b69-8c63-e743991a1b02,Namespace:default,Attempt:0,} returns sandbox id \"3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145\"" Jul 10 00:37:41.246955 env[1212]: time="2025-07-10T00:37:41.246898340Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 10 00:37:41.854136 systemd[1]: run-containerd-runc-k8s.io-3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145-runc.LUnBQy.mount: Deactivated successfully. 
Jul 10 00:37:41.880968 kubelet[1416]: E0710 00:37:41.880713 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:42.833702 systemd-networkd[1041]: lxc628375526a3a: Gained IPv6LL Jul 10 00:37:42.881482 kubelet[1416]: E0710 00:37:42.881435 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:43.489568 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1095905594.mount: Deactivated successfully. Jul 10 00:37:43.881860 kubelet[1416]: E0710 00:37:43.881725 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:44.882172 kubelet[1416]: E0710 00:37:44.882123 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:45.305496 env[1212]: time="2025-07-10T00:37:45.305447222Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:45.307954 env[1212]: time="2025-07-10T00:37:45.307916000Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:45.309502 env[1212]: time="2025-07-10T00:37:45.309473532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:45.311187 env[1212]: time="2025-07-10T00:37:45.311160185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:45.311964 env[1212]: time="2025-07-10T00:37:45.311933151Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 10 00:37:45.316453 env[1212]: time="2025-07-10T00:37:45.316416585Z" level=info msg="CreateContainer within sandbox \"3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 10 00:37:45.326503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939296817.mount: Deactivated successfully. Jul 10 00:37:45.335714 env[1212]: time="2025-07-10T00:37:45.335650410Z" level=info msg="CreateContainer within sandbox \"3d7bd3e064e74648cd024f67ca06c7d3aaf5ba0d752e107bf49f2d2682dbb145\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"7c4d7ac9cc4864bd5d8061e9976372fbef38881e33112b00a44aef9952f1e387\"" Jul 10 00:37:45.336405 env[1212]: time="2025-07-10T00:37:45.336374336Z" level=info msg="StartContainer for \"7c4d7ac9cc4864bd5d8061e9976372fbef38881e33112b00a44aef9952f1e387\"" Jul 10 00:37:45.356738 systemd[1]: Started cri-containerd-7c4d7ac9cc4864bd5d8061e9976372fbef38881e33112b00a44aef9952f1e387.scope. 
Jul 10 00:37:45.455479 env[1212]: time="2025-07-10T00:37:45.455418357Z" level=info msg="StartContainer for \"7c4d7ac9cc4864bd5d8061e9976372fbef38881e33112b00a44aef9952f1e387\" returns successfully" Jul 10 00:37:45.882768 kubelet[1416]: E0710 00:37:45.882720 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:46.883335 kubelet[1416]: E0710 00:37:46.883273 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:47.883705 kubelet[1416]: E0710 00:37:47.883668 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:48.884899 kubelet[1416]: E0710 00:37:48.884803 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:49.885052 kubelet[1416]: E0710 00:37:49.884942 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:50.811789 kubelet[1416]: I0710 00:37:50.811732 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=6.745197107 podStartE2EDuration="10.81171425s" podCreationTimestamp="2025-07-10 00:37:40 +0000 UTC" firstStartedPulling="2025-07-10 00:37:41.246650657 +0000 UTC m=+29.072753816" lastFinishedPulling="2025-07-10 00:37:45.3131678 +0000 UTC m=+33.139270959" observedRunningTime="2025-07-10 00:37:46.128245236 +0000 UTC m=+33.954348395" watchObservedRunningTime="2025-07-10 00:37:50.81171425 +0000 UTC m=+38.637817409" Jul 10 00:37:50.818850 systemd[1]: Created slice kubepods-besteffort-podf96266a1_b680_462c_9da4_90c3cff51d57.slice. Jul 10 00:37:50.885425 kubelet[1416]: E0710 00:37:50.885386 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:50.903395 kubelet[1416]: I0710 00:37:50.903146 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r5c2\" (UniqueName: \"kubernetes.io/projected/f96266a1-b680-462c-9da4-90c3cff51d57-kube-api-access-2r5c2\") pod \"test-pod-1\" (UID: \"f96266a1-b680-462c-9da4-90c3cff51d57\") " pod="default/test-pod-1" Jul 10 00:37:50.903395 kubelet[1416]: I0710 00:37:50.903181 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-db1dfa1f-86ae-4062-949a-bf2318ebbd6d\" (UniqueName: \"kubernetes.io/nfs/f96266a1-b680-462c-9da4-90c3cff51d57-pvc-db1dfa1f-86ae-4062-949a-bf2318ebbd6d\") pod \"test-pod-1\" (UID: \"f96266a1-b680-462c-9da4-90c3cff51d57\") " pod="default/test-pod-1" Jul 10 00:37:51.033622 kernel: FS-Cache: Loaded Jul 10 00:37:51.064860 kernel: RPC: Registered named UNIX socket transport module. Jul 10 00:37:51.064979 kernel: RPC: Registered udp transport module. Jul 10 00:37:51.065663 kernel: RPC: Registered tcp transport module. Jul 10 00:37:51.066993 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Jul 10 00:37:51.120611 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 10 00:37:51.270636 kernel: NFS: Registering the id_resolver key type Jul 10 00:37:51.270767 kernel: Key type id_resolver registered Jul 10 00:37:51.270789 kernel: Key type id_legacy registered Jul 10 00:37:51.311706 nfsidmap[2741]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:37:51.316254 nfsidmap[2744]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:37:51.422304 env[1212]: time="2025-07-10T00:37:51.422262013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f96266a1-b680-462c-9da4-90c3cff51d57,Namespace:default,Attempt:0,}" Jul 10 00:37:51.429725 update_engine[1204]: I0710 00:37:51.429513 1204 update_attempter.cc:509] Updating boot flags... Jul 10 00:37:51.489599 systemd-networkd[1041]: lxc7e5b408a7723: Link UP Jul 10 00:37:51.500650 kernel: eth0: renamed from tmpc0ff9 Jul 10 00:37:51.518345 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:37:51.518438 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7e5b408a7723: link becomes ready Jul 10 00:37:51.518930 systemd-networkd[1041]: lxc7e5b408a7723: Gained carrier Jul 10 00:37:51.654212 env[1212]: time="2025-07-10T00:37:51.654041204Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:37:51.654212 env[1212]: time="2025-07-10T00:37:51.654084564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:37:51.654212 env[1212]: time="2025-07-10T00:37:51.654102964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:37:51.654482 env[1212]: time="2025-07-10T00:37:51.654244285Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0ff943cf4273098b1bb6bac359687561762ce8632dd93441e89ba5cbf012aa3 pid=2790 runtime=io.containerd.runc.v2 Jul 10 00:37:51.666729 systemd[1]: Started cri-containerd-c0ff943cf4273098b1bb6bac359687561762ce8632dd93441e89ba5cbf012aa3.scope. 
Jul 10 00:37:51.705647 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:37:51.721854 env[1212]: time="2025-07-10T00:37:51.721807552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f96266a1-b680-462c-9da4-90c3cff51d57,Namespace:default,Attempt:0,} returns sandbox id \"c0ff943cf4273098b1bb6bac359687561762ce8632dd93441e89ba5cbf012aa3\"" Jul 10 00:37:51.722923 env[1212]: time="2025-07-10T00:37:51.722895237Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:37:51.886341 kubelet[1416]: E0710 00:37:51.886285 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:51.958022 env[1212]: time="2025-07-10T00:37:51.957981805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:51.959231 env[1212]: time="2025-07-10T00:37:51.959197372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:51.960925 env[1212]: time="2025-07-10T00:37:51.960897980Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:51.962601 env[1212]: time="2025-07-10T00:37:51.962550629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:37:51.963386 env[1212]: time="2025-07-10T00:37:51.963355873Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:37:51.967347 env[1212]: time="2025-07-10T00:37:51.967305013Z" level=info msg="CreateContainer within sandbox \"c0ff943cf4273098b1bb6bac359687561762ce8632dd93441e89ba5cbf012aa3\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 10 00:37:51.977803 env[1212]: time="2025-07-10T00:37:51.977767307Z" level=info msg="CreateContainer within sandbox \"c0ff943cf4273098b1bb6bac359687561762ce8632dd93441e89ba5cbf012aa3\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"4aab8161c3c95e5b19eb39f5fcca6e302b96911f864c71ece2a2e0b2ef14314c\"" Jul 10 00:37:51.978208 env[1212]: time="2025-07-10T00:37:51.978186029Z" level=info msg="StartContainer for \"4aab8161c3c95e5b19eb39f5fcca6e302b96911f864c71ece2a2e0b2ef14314c\"" Jul 10 00:37:51.991966 systemd[1]: Started cri-containerd-4aab8161c3c95e5b19eb39f5fcca6e302b96911f864c71ece2a2e0b2ef14314c.scope. 
Jul 10 00:37:52.020020 env[1212]: time="2025-07-10T00:37:52.019973318Z" level=info msg="StartContainer for \"4aab8161c3c95e5b19eb39f5fcca6e302b96911f864c71ece2a2e0b2ef14314c\" returns successfully" Jul 10 00:37:52.142199 kubelet[1416]: I0710 00:37:52.142132 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.900285183 podStartE2EDuration="12.142118706s" podCreationTimestamp="2025-07-10 00:37:40 +0000 UTC" firstStartedPulling="2025-07-10 00:37:51.722670516 +0000 UTC m=+39.548773635" lastFinishedPulling="2025-07-10 00:37:51.964503999 +0000 UTC m=+39.790607158" observedRunningTime="2025-07-10 00:37:52.141781985 +0000 UTC m=+39.967885144" watchObservedRunningTime="2025-07-10 00:37:52.142118706 +0000 UTC m=+39.968221865" Jul 10 00:37:52.689744 systemd-networkd[1041]: lxc7e5b408a7723: Gained IPv6LL Jul 10 00:37:52.863980 kubelet[1416]: E0710 00:37:52.863887 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:52.887276 kubelet[1416]: E0710 00:37:52.887220 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:53.887789 kubelet[1416]: E0710 00:37:53.887712 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:54.888912 kubelet[1416]: E0710 00:37:54.888835 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:55.889011 kubelet[1416]: E0710 00:37:55.888965 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:56.890079 kubelet[1416]: E0710 00:37:56.889984 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:57.890456 kubelet[1416]: E0710 00:37:57.890395 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:58.014639 env[1212]: time="2025-07-10T00:37:58.014542915Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:37:58.017979 kubelet[1416]: E0710 00:37:58.017893 1416 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:37:58.020726 env[1212]: time="2025-07-10T00:37:58.020689295Z" level=info msg="StopContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" with timeout 2 (s)" Jul 10 00:37:58.021119 env[1212]: time="2025-07-10T00:37:58.021096776Z" level=info msg="Stop container \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" with signal terminated" Jul 10 00:37:58.031818 systemd-networkd[1041]: lxc_health: Link DOWN Jul 10 00:37:58.031824 systemd-networkd[1041]: lxc_health: Lost carrier Jul 10 00:37:58.075106 systemd[1]: cri-containerd-50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d.scope: Deactivated successfully. Jul 10 00:37:58.075440 systemd[1]: cri-containerd-50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d.scope: Consumed 6.688s CPU time. 
Jul 10 00:37:58.096627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d-rootfs.mount: Deactivated successfully. Jul 10 00:37:58.112110 env[1212]: time="2025-07-10T00:37:58.112061074Z" level=info msg="shim disconnected" id=50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d Jul 10 00:37:58.112408 env[1212]: time="2025-07-10T00:37:58.112388635Z" level=warning msg="cleaning up after shim disconnected" id=50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d namespace=k8s.io Jul 10 00:37:58.112470 env[1212]: time="2025-07-10T00:37:58.112457595Z" level=info msg="cleaning up dead shim" Jul 10 00:37:58.121349 env[1212]: time="2025-07-10T00:37:58.121277184Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\n" Jul 10 00:37:58.123779 env[1212]: time="2025-07-10T00:37:58.123733232Z" level=info msg="StopContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" returns successfully" Jul 10 00:37:58.124466 env[1212]: time="2025-07-10T00:37:58.124433594Z" level=info msg="StopPodSandbox for \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\"" Jul 10 00:37:58.124540 env[1212]: time="2025-07-10T00:37:58.124509874Z" level=info msg="Container to stop \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:37:58.124540 env[1212]: time="2025-07-10T00:37:58.124525394Z" level=info msg="Container to stop \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:37:58.124617 env[1212]: time="2025-07-10T00:37:58.124537634Z" level=info msg="Container to stop \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:37:58.124617 env[1212]: time="2025-07-10T00:37:58.124549994Z" level=info msg="Container to stop \"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:37:58.124617 env[1212]: time="2025-07-10T00:37:58.124560954Z" level=info msg="Container to stop \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:37:58.126213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc-shm.mount: Deactivated successfully. Jul 10 00:37:58.133760 systemd[1]: cri-containerd-c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc.scope: Deactivated successfully. Jul 10 00:37:58.163335 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc-rootfs.mount: Deactivated successfully. 
Jul 10 00:37:58.169914 env[1212]: time="2025-07-10T00:37:58.169865583Z" level=info msg="shim disconnected" id=c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc Jul 10 00:37:58.170130 env[1212]: time="2025-07-10T00:37:58.170111583Z" level=warning msg="cleaning up after shim disconnected" id=c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc namespace=k8s.io Jul 10 00:37:58.170191 env[1212]: time="2025-07-10T00:37:58.170178224Z" level=info msg="cleaning up dead shim" Jul 10 00:37:58.177494 env[1212]: time="2025-07-10T00:37:58.177452447Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:37:58Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2956 runtime=io.containerd.runc.v2\n" Jul 10 00:37:58.178182 env[1212]: time="2025-07-10T00:37:58.178148170Z" level=info msg="TearDown network for sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" successfully" Jul 10 00:37:58.178278 env[1212]: time="2025-07-10T00:37:58.178260050Z" level=info msg="StopPodSandbox for \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" returns successfully" Jul 10 00:37:58.248510 kubelet[1416]: I0710 00:37:58.248407 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-xtables-lock\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248522 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248604 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248622 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-run\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248643 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-etc-cni-netd\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248658 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-lib-modules\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248753 kubelet[1416]: I0710 00:37:58.248682 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-bpf-maps\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248702 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xvnw\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-kube-api-access-5xvnw\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248717 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-cgroup\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248734 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-config-path\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248748 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-kernel\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248763 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-clustermesh-secrets\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.248902 kubelet[1416]: I0710 00:37:58.248780 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hostproc\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: 
\"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248793 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-net\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248817 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hubble-tls\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248836 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cni-path\") pod \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\" (UID: \"b2afdc61-73b4-43b0-8ead-4b40bb59fd3f\") " Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248868 1416 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248878 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.249037 kubelet[1416]: I0710 00:37:58.248905 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.249213 kubelet[1416]: I0710 00:37:58.248922 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.249213 kubelet[1416]: I0710 00:37:58.248936 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.249213 kubelet[1416]: I0710 00:37:58.248951 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.251137 kubelet[1416]: I0710 00:37:58.249334 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.251137 kubelet[1416]: I0710 00:37:58.249369 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.251137 kubelet[1416]: I0710 00:37:58.249391 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.251137 kubelet[1416]: I0710 00:37:58.249408 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:37:58.251493 kubelet[1416]: I0710 00:37:58.251155 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:37:58.255015 systemd[1]: var-lib-kubelet-pods-b2afdc61\x2d73b4\x2d43b0\x2d8ead\x2d4b40bb59fd3f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:37:58.256487 kubelet[1416]: I0710 00:37:58.256450 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:37:58.256736 kubelet[1416]: I0710 00:37:58.256676 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-kube-api-access-5xvnw" (OuterVolumeSpecName: "kube-api-access-5xvnw") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "kube-api-access-5xvnw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:37:58.257406 kubelet[1416]: I0710 00:37:58.257378 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" (UID: "b2afdc61-73b4-43b0-8ead-4b40bb59fd3f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:37:58.350067 kubelet[1416]: I0710 00:37:58.350026 1416 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350245 kubelet[1416]: I0710 00:37:58.350232 1416 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5xvnw\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-kube-api-access-5xvnw\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350340 kubelet[1416]: I0710 00:37:58.350326 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350407 kubelet[1416]: I0710 00:37:58.350397 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350469 kubelet[1416]: I0710 00:37:58.350459 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350530 kubelet[1416]: I0710 00:37:58.350519 1416 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350620 kubelet[1416]: I0710 00:37:58.350609 1416 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350692 kubelet[1416]: I0710 00:37:58.350681 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350758 kubelet[1416]: I0710 00:37:58.350749 1416 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350813 kubelet[1416]: I0710 00:37:58.350804 1416 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350879 kubelet[1416]: I0710 00:37:58.350869 1416 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.350938 kubelet[1416]: I0710 00:37:58.350928 1416 
reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:37:58.890677 kubelet[1416]: E0710 00:37:58.890636 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:37:58.974071 systemd[1]: var-lib-kubelet-pods-b2afdc61\x2d73b4\x2d43b0\x2d8ead\x2d4b40bb59fd3f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5xvnw.mount: Deactivated successfully. Jul 10 00:37:58.974175 systemd[1]: var-lib-kubelet-pods-b2afdc61\x2d73b4\x2d43b0\x2d8ead\x2d4b40bb59fd3f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:37:59.048343 systemd[1]: Removed slice kubepods-burstable-podb2afdc61_73b4_43b0_8ead_4b40bb59fd3f.slice. Jul 10 00:37:59.048435 systemd[1]: kubepods-burstable-podb2afdc61_73b4_43b0_8ead_4b40bb59fd3f.slice: Consumed 6.889s CPU time. Jul 10 00:37:59.154458 kubelet[1416]: I0710 00:37:59.154372 1416 scope.go:117] "RemoveContainer" containerID="50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d" Jul 10 00:37:59.160024 env[1212]: time="2025-07-10T00:37:59.159557712Z" level=info msg="RemoveContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\"" Jul 10 00:37:59.176996 env[1212]: time="2025-07-10T00:37:59.176824165Z" level=info msg="RemoveContainer for \"50b0dd64e7ee73aa8d503468cf00195a15dfe4c6cd258bba96c71d02b6bf0f7d\" returns successfully" Jul 10 00:37:59.177293 kubelet[1416]: I0710 00:37:59.177271 1416 scope.go:117] "RemoveContainer" containerID="434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782" Jul 10 00:37:59.178541 env[1212]: time="2025-07-10T00:37:59.178289330Z" level=info msg="RemoveContainer for \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\"" Jul 10 00:37:59.197273 env[1212]: time="2025-07-10T00:37:59.197163028Z" level=info msg="RemoveContainer for \"434022797045cd4bc3b1f571319a4e145ab81ca966b968de3bb3e412d731c782\" returns successfully" Jul 10 00:37:59.197589 kubelet[1416]: I0710 00:37:59.197497 1416 scope.go:117] "RemoveContainer" containerID="5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41" Jul 10 00:37:59.199766 env[1212]: time="2025-07-10T00:37:59.199469915Z" level=info msg="RemoveContainer for \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\"" Jul 10 00:37:59.228374 env[1212]: time="2025-07-10T00:37:59.226852079Z" level=info msg="RemoveContainer for \"5a579c39b5431044dcddf6e42420085e1804304df6e39b4a27e16f848ce3ac41\" returns successfully" Jul 10 00:37:59.228515 kubelet[1416]: I0710 00:37:59.227104 1416 scope.go:117] "RemoveContainer" containerID="de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0" Jul 10 00:37:59.229836 env[1212]: time="2025-07-10T00:37:59.229803288Z" level=info msg="RemoveContainer for \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\"" Jul 10 00:37:59.234355 env[1212]: time="2025-07-10T00:37:59.234308262Z" level=info msg="RemoveContainer for \"de5cd30170ec87f0b0fa622d1dbe0b7795b5afc0b9738f739053e5bdcb886ab0\" returns successfully" Jul 10 00:37:59.234683 kubelet[1416]: I0710 00:37:59.234658 1416 scope.go:117] "RemoveContainer" containerID="59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21" Jul 10 00:37:59.235728 env[1212]: time="2025-07-10T00:37:59.235688146Z" level=info msg="RemoveContainer for 
\"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\"" Jul 10 00:37:59.238261 env[1212]: time="2025-07-10T00:37:59.238221674Z" level=info msg="RemoveContainer for \"59727e0ce048a891885a2e904c066ec69f43ea7e25b56a2dfb0e25eb8359ae21\" returns successfully" Jul 10 00:37:59.891440 kubelet[1416]: E0710 00:37:59.891391 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:00.892254 kubelet[1416]: E0710 00:38:00.892200 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:01.046218 kubelet[1416]: I0710 00:38:01.045765 1416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2afdc61-73b4-43b0-8ead-4b40bb59fd3f" path="/var/lib/kubelet/pods/b2afdc61-73b4-43b0-8ead-4b40bb59fd3f/volumes" Jul 10 00:38:01.895128 kubelet[1416]: E0710 00:38:01.895066 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:01.955014 systemd[1]: Created slice kubepods-besteffort-podf2eedc13_c32d_4efd_b442_5486876177d0.slice. Jul 10 00:38:01.968598 systemd[1]: Created slice kubepods-burstable-pod5b0e0f6e_bd12_4580_a254_844d19482704.slice. Jul 10 00:38:01.973349 kubelet[1416]: I0710 00:38:01.973315 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-lib-modules\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.973523 kubelet[1416]: I0710 00:38:01.973505 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-kernel\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.973653 kubelet[1416]: I0710 00:38:01.973639 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cni-path\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.973730 kubelet[1416]: I0710 00:38:01.973717 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-net\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.973795 kubelet[1416]: I0710 00:38:01.973783 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzmfh\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-kube-api-access-mzmfh\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.973869 kubelet[1416]: I0710 00:38:01.973856 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2eedc13-c32d-4efd-b442-5486876177d0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-vwmlc\" (UID: \"f2eedc13-c32d-4efd-b442-5486876177d0\") " 
pod="kube-system/cilium-operator-6c4d7847fc-vwmlc" Jul 10 00:38:01.973956 kubelet[1416]: I0710 00:38:01.973942 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-cgroup\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974036 kubelet[1416]: I0710 00:38:01.974022 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-clustermesh-secrets\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974115 kubelet[1416]: I0710 00:38:01.974103 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-run\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974196 kubelet[1416]: I0710 00:38:01.974183 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-hostproc\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974274 kubelet[1416]: I0710 00:38:01.974259 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-etc-cni-netd\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974366 kubelet[1416]: I0710 00:38:01.974351 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-xtables-lock\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974446 kubelet[1416]: I0710 00:38:01.974434 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-hubble-tls\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974524 kubelet[1416]: I0710 00:38:01.974511 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk4cr\" (UniqueName: \"kubernetes.io/projected/f2eedc13-c32d-4efd-b442-5486876177d0-kube-api-access-kk4cr\") pod \"cilium-operator-6c4d7847fc-vwmlc\" (UID: \"f2eedc13-c32d-4efd-b442-5486876177d0\") " pod="kube-system/cilium-operator-6c4d7847fc-vwmlc" Jul 10 00:38:01.974608 kubelet[1416]: I0710 00:38:01.974596 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-bpf-maps\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974686 kubelet[1416]: I0710 00:38:01.974673 1416 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-config-path\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:01.974771 kubelet[1416]: I0710 00:38:01.974757 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-ipsec-secrets\") pod \"cilium-2llmf\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " pod="kube-system/cilium-2llmf" Jul 10 00:38:02.121878 kubelet[1416]: E0710 00:38:02.121830 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:02.122485 env[1212]: time="2025-07-10T00:38:02.122422251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2llmf,Uid:5b0e0f6e-bd12-4580-a254-844d19482704,Namespace:kube-system,Attempt:0,}" Jul 10 00:38:02.143272 env[1212]: time="2025-07-10T00:38:02.143185024Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:38:02.143440 env[1212]: time="2025-07-10T00:38:02.143281744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:38:02.143440 env[1212]: time="2025-07-10T00:38:02.143318584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:38:02.143535 env[1212]: time="2025-07-10T00:38:02.143492464Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e pid=2985 runtime=io.containerd.runc.v2 Jul 10 00:38:02.155348 systemd[1]: Started cri-containerd-6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e.scope. 
Jul 10 00:38:02.188658 env[1212]: time="2025-07-10T00:38:02.188612138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2llmf,Uid:5b0e0f6e-bd12-4580-a254-844d19482704,Namespace:kube-system,Attempt:0,} returns sandbox id \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\"" Jul 10 00:38:02.189885 kubelet[1416]: E0710 00:38:02.189305 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:02.197337 env[1212]: time="2025-07-10T00:38:02.197264760Z" level=info msg="CreateContainer within sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:38:02.208288 env[1212]: time="2025-07-10T00:38:02.208218668Z" level=info msg="CreateContainer within sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\"" Jul 10 00:38:02.209107 env[1212]: time="2025-07-10T00:38:02.209062550Z" level=info msg="StartContainer for \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\"" Jul 10 00:38:02.223864 systemd[1]: Started cri-containerd-ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a.scope. Jul 10 00:38:02.260301 kubelet[1416]: E0710 00:38:02.259342 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:02.260462 env[1212]: time="2025-07-10T00:38:02.259911199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vwmlc,Uid:f2eedc13-c32d-4efd-b442-5486876177d0,Namespace:kube-system,Attempt:0,}" Jul 10 00:38:02.260501 systemd[1]: cri-containerd-ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a.scope: Deactivated successfully. 
Jul 10 00:38:02.274486 env[1212]: time="2025-07-10T00:38:02.274435235Z" level=info msg="shim disconnected" id=ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a Jul 10 00:38:02.274486 env[1212]: time="2025-07-10T00:38:02.274484515Z" level=warning msg="cleaning up after shim disconnected" id=ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a namespace=k8s.io Jul 10 00:38:02.274486 env[1212]: time="2025-07-10T00:38:02.274494155Z" level=info msg="cleaning up dead shim" Jul 10 00:38:02.281888 env[1212]: time="2025-07-10T00:38:02.281823694Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3044 runtime=io.containerd.runc.v2\ntime=\"2025-07-10T00:38:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 10 00:38:02.282340 env[1212]: time="2025-07-10T00:38:02.282213295Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Jul 10 00:38:02.282544 env[1212]: time="2025-07-10T00:38:02.282500736Z" level=error msg="Failed to pipe stdout of container \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\"" error="reading from a closed fifo" Jul 10 00:38:02.282744 env[1212]: time="2025-07-10T00:38:02.282710336Z" level=error msg="Failed to pipe stderr of container \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\"" error="reading from a closed fifo" Jul 10 00:38:02.285437 env[1212]: time="2025-07-10T00:38:02.285365343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:38:02.285437 env[1212]: time="2025-07-10T00:38:02.285412063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:38:02.285437 env[1212]: time="2025-07-10T00:38:02.285423183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:38:02.285608 env[1212]: time="2025-07-10T00:38:02.285353663Z" level=error msg="StartContainer for \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Jul 10 00:38:02.285851 env[1212]: time="2025-07-10T00:38:02.285731704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5feba325581fb406747384a01117873d0a7491fecb1aab959204572f4e6cfa5 pid=3063 runtime=io.containerd.runc.v2 Jul 10 00:38:02.286329 kubelet[1416]: E0710 00:38:02.286057 1416 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a" Jul 10 00:38:02.287563 kubelet[1416]: E0710 00:38:02.287507 1416 kuberuntime_manager.go:1358] "Unhandled Error" err=< Jul 10 00:38:02.287563 kubelet[1416]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 10 00:38:02.287563 kubelet[1416]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 10 00:38:02.287563 kubelet[1416]: rm /hostbin/cilium-mount Jul 10 00:38:02.287720 kubelet[1416]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzmfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-2llmf_kube-system(5b0e0f6e-bd12-4580-a254-844d19482704): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create 
failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 10 00:38:02.287720 kubelet[1416]: > logger="UnhandledError" Jul 10 00:38:02.289099 kubelet[1416]: E0710 00:38:02.288646 1416 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2llmf" podUID="5b0e0f6e-bd12-4580-a254-844d19482704" Jul 10 00:38:02.299189 systemd[1]: Started cri-containerd-b5feba325581fb406747384a01117873d0a7491fecb1aab959204572f4e6cfa5.scope. Jul 10 00:38:02.346944 env[1212]: time="2025-07-10T00:38:02.346892858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-vwmlc,Uid:f2eedc13-c32d-4efd-b442-5486876177d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5feba325581fb406747384a01117873d0a7491fecb1aab959204572f4e6cfa5\"" Jul 10 00:38:02.347878 kubelet[1416]: E0710 00:38:02.347849 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:02.348996 env[1212]: time="2025-07-10T00:38:02.348961944Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:38:02.896473 kubelet[1416]: E0710 00:38:02.896415 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:03.018959 kubelet[1416]: E0710 00:38:03.018921 1416 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:38:03.165027 env[1212]: time="2025-07-10T00:38:03.164966700Z" level=info msg="StopPodSandbox for \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\"" Jul 10 00:38:03.167731 env[1212]: time="2025-07-10T00:38:03.165047820Z" level=info msg="Container to stop \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:38:03.166812 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e-shm.mount: Deactivated successfully. Jul 10 00:38:03.172895 systemd[1]: cri-containerd-6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e.scope: Deactivated successfully. Jul 10 00:38:03.198106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e-rootfs.mount: Deactivated successfully. 
Jul 10 00:38:03.208944 env[1212]: time="2025-07-10T00:38:03.208893764Z" level=info msg="shim disconnected" id=6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e Jul 10 00:38:03.208944 env[1212]: time="2025-07-10T00:38:03.208944404Z" level=warning msg="cleaning up after shim disconnected" id=6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e namespace=k8s.io Jul 10 00:38:03.209140 env[1212]: time="2025-07-10T00:38:03.208955284Z" level=info msg="cleaning up dead shim" Jul 10 00:38:03.216770 env[1212]: time="2025-07-10T00:38:03.216700902Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3113 runtime=io.containerd.runc.v2\n" Jul 10 00:38:03.217089 env[1212]: time="2025-07-10T00:38:03.217058983Z" level=info msg="TearDown network for sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" successfully" Jul 10 00:38:03.217127 env[1212]: time="2025-07-10T00:38:03.217088663Z" level=info msg="StopPodSandbox for \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" returns successfully" Jul 10 00:38:03.283967 kubelet[1416]: I0710 00:38:03.283927 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-kernel\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.283967 kubelet[1416]: I0710 00:38:03.283966 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-run\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.283985 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-etc-cni-netd\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284008 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-hubble-tls\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284031 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzmfh\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-kube-api-access-mzmfh\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284047 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-cgroup\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284060 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-bpf-maps\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: 
\"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284088 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-config-path\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284104 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-ipsec-secrets\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284120 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-lib-modules\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284136 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-hostproc\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284150 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-xtables-lock\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284168 kubelet[1416]: I0710 00:38:03.284164 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cni-path\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284437 kubelet[1416]: I0710 00:38:03.284177 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-net\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.284437 kubelet[1416]: I0710 00:38:03.284193 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-clustermesh-secrets\") pod \"5b0e0f6e-bd12-4580-a254-844d19482704\" (UID: \"5b0e0f6e-bd12-4580-a254-844d19482704\") " Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284605 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284644 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284661 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284677 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284826 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.284868 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-hostproc" (OuterVolumeSpecName: "hostproc") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.285912 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.285952 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cni-path" (OuterVolumeSpecName: "cni-path") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.285972 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.285991 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 10 00:38:03.287323 kubelet[1416]: I0710 00:38:03.287026 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:38:03.294612 kubelet[1416]: I0710 00:38:03.289771 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:38:03.291622 systemd[1]: var-lib-kubelet-pods-5b0e0f6e\x2dbd12\x2d4580\x2da254\x2d844d19482704-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmzmfh.mount: Deactivated successfully. Jul 10 00:38:03.291720 systemd[1]: var-lib-kubelet-pods-5b0e0f6e\x2dbd12\x2d4580\x2da254\x2d844d19482704-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:38:03.296082 kubelet[1416]: I0710 00:38:03.295963 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-kube-api-access-mzmfh" (OuterVolumeSpecName: "kube-api-access-mzmfh") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "kube-api-access-mzmfh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:38:03.296900 kubelet[1416]: I0710 00:38:03.296865 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:38:03.301298 kubelet[1416]: I0710 00:38:03.301248 1416 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5b0e0f6e-bd12-4580-a254-844d19482704" (UID: "5b0e0f6e-bd12-4580-a254-844d19482704"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385193 1416 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385227 1416 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385236 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385246 1416 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385256 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385264 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385271 1416 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385288 1416 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385300 1416 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mzmfh\" (UniqueName: \"kubernetes.io/projected/5b0e0f6e-bd12-4580-a254-844d19482704-kube-api-access-mzmfh\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385311 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385319 1416 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385326 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385334 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5b0e0f6e-bd12-4580-a254-844d19482704-cilium-ipsec-secrets\") on node 
\"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385341 1416 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.385374 kubelet[1416]: I0710 00:38:03.385349 1416 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5b0e0f6e-bd12-4580-a254-844d19482704-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Jul 10 00:38:03.822710 env[1212]: time="2025-07-10T00:38:03.822652698Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:38:03.824343 env[1212]: time="2025-07-10T00:38:03.824297461Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:38:03.825866 env[1212]: time="2025-07-10T00:38:03.825832585Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:38:03.826271 env[1212]: time="2025-07-10T00:38:03.826240346Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:38:03.830586 env[1212]: time="2025-07-10T00:38:03.830535436Z" level=info msg="CreateContainer within sandbox \"b5feba325581fb406747384a01117873d0a7491fecb1aab959204572f4e6cfa5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:38:03.840090 env[1212]: time="2025-07-10T00:38:03.840028739Z" level=info msg="CreateContainer within sandbox \"b5feba325581fb406747384a01117873d0a7491fecb1aab959204572f4e6cfa5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1a0a247b349594f4b3f0372e91d2e7fe78f049497a19dbe5ee80fbabbe43e066\"" Jul 10 00:38:03.840616 env[1212]: time="2025-07-10T00:38:03.840585180Z" level=info msg="StartContainer for \"1a0a247b349594f4b3f0372e91d2e7fe78f049497a19dbe5ee80fbabbe43e066\"" Jul 10 00:38:03.854610 systemd[1]: Started cri-containerd-1a0a247b349594f4b3f0372e91d2e7fe78f049497a19dbe5ee80fbabbe43e066.scope. Jul 10 00:38:03.896806 kubelet[1416]: E0710 00:38:03.896768 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:03.977670 env[1212]: time="2025-07-10T00:38:03.977622105Z" level=info msg="StartContainer for \"1a0a247b349594f4b3f0372e91d2e7fe78f049497a19dbe5ee80fbabbe43e066\" returns successfully" Jul 10 00:38:04.081320 systemd[1]: var-lib-kubelet-pods-5b0e0f6e\x2dbd12\x2d4580\x2da254\x2d844d19482704-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:38:04.081425 systemd[1]: var-lib-kubelet-pods-5b0e0f6e\x2dbd12\x2d4580\x2da254\x2d844d19482704-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 10 00:38:04.167958 kubelet[1416]: E0710 00:38:04.167896 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:04.169753 kubelet[1416]: I0710 00:38:04.169706 1416 scope.go:117] "RemoveContainer" containerID="ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a" Jul 10 00:38:04.175073 systemd[1]: Removed slice kubepods-burstable-pod5b0e0f6e_bd12_4580_a254_844d19482704.slice. Jul 10 00:38:04.176477 env[1212]: time="2025-07-10T00:38:04.176418790Z" level=info msg="RemoveContainer for \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\"" Jul 10 00:38:04.189052 env[1212]: time="2025-07-10T00:38:04.188991737Z" level=info msg="RemoveContainer for \"ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a\" returns successfully" Jul 10 00:38:04.207274 kubelet[1416]: I0710 00:38:04.207200 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-vwmlc" podStartSLOduration=1.728369252 podStartE2EDuration="3.207175378s" podCreationTimestamp="2025-07-10 00:38:01 +0000 UTC" firstStartedPulling="2025-07-10 00:38:02.348664503 +0000 UTC m=+50.174767622" lastFinishedPulling="2025-07-10 00:38:03.827470629 +0000 UTC m=+51.653573748" observedRunningTime="2025-07-10 00:38:04.183930766 +0000 UTC m=+52.010033925" watchObservedRunningTime="2025-07-10 00:38:04.207175378 +0000 UTC m=+52.033278537" Jul 10 00:38:04.213599 kubelet[1416]: I0710 00:38:04.213476 1416 setters.go:618] "Node became not ready" node="10.0.0.92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:38:04Z","lastTransitionTime":"2025-07-10T00:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:38:04.233581 systemd[1]: Created slice kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice. 
Jul 10 00:38:04.289522 kubelet[1416]: I0710 00:38:04.289466 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-lib-modules\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289522 kubelet[1416]: I0710 00:38:04.289510 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-xtables-lock\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289522 kubelet[1416]: I0710 00:38:04.289532 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/552e2b03-ac11-41b8-897e-613f1acd1f56-cilium-config-path\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289547 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-bpf-maps\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289613 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-host-proc-sys-kernel\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289653 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-cilium-run\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289684 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-hostproc\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289700 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-cni-path\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289716 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/552e2b03-ac11-41b8-897e-613f1acd1f56-cilium-ipsec-secrets\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289744 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/552e2b03-ac11-41b8-897e-613f1acd1f56-hubble-tls\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289766 kubelet[1416]: I0710 00:38:04.289760 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-cilium-cgroup\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289946 kubelet[1416]: I0710 00:38:04.289779 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-host-proc-sys-net\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289946 kubelet[1416]: I0710 00:38:04.289795 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/552e2b03-ac11-41b8-897e-613f1acd1f56-clustermesh-secrets\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289946 kubelet[1416]: I0710 00:38:04.289809 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjkp9\" (UniqueName: \"kubernetes.io/projected/552e2b03-ac11-41b8-897e-613f1acd1f56-kube-api-access-cjkp9\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.289946 kubelet[1416]: I0710 00:38:04.289838 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/552e2b03-ac11-41b8-897e-613f1acd1f56-etc-cni-netd\") pod \"cilium-mrn5f\" (UID: \"552e2b03-ac11-41b8-897e-613f1acd1f56\") " pod="kube-system/cilium-mrn5f" Jul 10 00:38:04.550603 kubelet[1416]: E0710 00:38:04.550537 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:04.551446 env[1212]: time="2025-07-10T00:38:04.551395462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mrn5f,Uid:552e2b03-ac11-41b8-897e-613f1acd1f56,Namespace:kube-system,Attempt:0,}" Jul 10 00:38:04.567396 env[1212]: time="2025-07-10T00:38:04.567323898Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:38:04.567561 env[1212]: time="2025-07-10T00:38:04.567368978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:38:04.567561 env[1212]: time="2025-07-10T00:38:04.567380138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:38:04.567561 env[1212]: time="2025-07-10T00:38:04.567514378Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5 pid=3181 runtime=io.containerd.runc.v2 Jul 10 00:38:04.578403 systemd[1]: Started cri-containerd-f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5.scope. Jul 10 00:38:04.613941 env[1212]: time="2025-07-10T00:38:04.613895361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mrn5f,Uid:552e2b03-ac11-41b8-897e-613f1acd1f56,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\"" Jul 10 00:38:04.614854 kubelet[1416]: E0710 00:38:04.614828 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:04.618793 env[1212]: time="2025-07-10T00:38:04.618746292Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:38:04.629377 env[1212]: time="2025-07-10T00:38:04.629322515Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7\"" Jul 10 00:38:04.630082 env[1212]: time="2025-07-10T00:38:04.630052397Z" level=info msg="StartContainer for \"d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7\"" Jul 10 00:38:04.646388 systemd[1]: Started cri-containerd-d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7.scope. Jul 10 00:38:04.674745 env[1212]: time="2025-07-10T00:38:04.674696096Z" level=info msg="StartContainer for \"d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7\" returns successfully" Jul 10 00:38:04.684966 systemd[1]: cri-containerd-d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7.scope: Deactivated successfully. 
Jul 10 00:38:04.712115 env[1212]: time="2025-07-10T00:38:04.712061579Z" level=info msg="shim disconnected" id=d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7 Jul 10 00:38:04.712115 env[1212]: time="2025-07-10T00:38:04.712106459Z" level=warning msg="cleaning up after shim disconnected" id=d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7 namespace=k8s.io Jul 10 00:38:04.712115 env[1212]: time="2025-07-10T00:38:04.712117019Z" level=info msg="cleaning up dead shim" Jul 10 00:38:04.718818 env[1212]: time="2025-07-10T00:38:04.718768874Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3263 runtime=io.containerd.runc.v2\n" Jul 10 00:38:04.897757 kubelet[1416]: E0710 00:38:04.897650 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:05.045946 kubelet[1416]: I0710 00:38:05.045900 1416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b0e0f6e-bd12-4580-a254-844d19482704" path="/var/lib/kubelet/pods/5b0e0f6e-bd12-4580-a254-844d19482704/volumes" Jul 10 00:38:05.174149 kubelet[1416]: E0710 00:38:05.174107 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:05.174820 kubelet[1416]: E0710 00:38:05.174784 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:05.179056 env[1212]: time="2025-07-10T00:38:05.178989831Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:38:05.190303 env[1212]: time="2025-07-10T00:38:05.190248535Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63\"" Jul 10 00:38:05.191108 env[1212]: time="2025-07-10T00:38:05.191074857Z" level=info msg="StartContainer for \"2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63\"" Jul 10 00:38:05.215524 systemd[1]: Started cri-containerd-2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63.scope. Jul 10 00:38:05.247095 env[1212]: time="2025-07-10T00:38:05.247045373Z" level=info msg="StartContainer for \"2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63\" returns successfully" Jul 10 00:38:05.258933 systemd[1]: cri-containerd-2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63.scope: Deactivated successfully. 
Jul 10 00:38:05.277994 env[1212]: time="2025-07-10T00:38:05.277944477Z" level=info msg="shim disconnected" id=2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63 Jul 10 00:38:05.277994 env[1212]: time="2025-07-10T00:38:05.277991318Z" level=warning msg="cleaning up after shim disconnected" id=2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63 namespace=k8s.io Jul 10 00:38:05.277994 env[1212]: time="2025-07-10T00:38:05.278002638Z" level=info msg="cleaning up dead shim" Jul 10 00:38:05.285177 env[1212]: time="2025-07-10T00:38:05.285132612Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3326 runtime=io.containerd.runc.v2\n" Jul 10 00:38:05.379813 kubelet[1416]: W0710 00:38:05.379765 1416 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b0e0f6e_bd12_4580_a254_844d19482704.slice/cri-containerd-ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a.scope WatchSource:0}: container "ffc07bad34d77fe47ceda2dafc647d6dfd9815bf4adc6d3fb42887f09e205c1a" in namespace "k8s.io": not found Jul 10 00:38:05.898799 kubelet[1416]: E0710 00:38:05.898747 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:06.081018 systemd[1]: run-containerd-runc-k8s.io-2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63-runc.pSTytw.mount: Deactivated successfully. Jul 10 00:38:06.081117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63-rootfs.mount: Deactivated successfully. Jul 10 00:38:06.177760 kubelet[1416]: E0710 00:38:06.177728 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:06.185831 env[1212]: time="2025-07-10T00:38:06.185780543Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:38:06.201066 env[1212]: time="2025-07-10T00:38:06.201012813Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090\"" Jul 10 00:38:06.202354 env[1212]: time="2025-07-10T00:38:06.202321216Z" level=info msg="StartContainer for \"1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090\"" Jul 10 00:38:06.224289 systemd[1]: Started cri-containerd-1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090.scope. Jul 10 00:38:06.254344 env[1212]: time="2025-07-10T00:38:06.254292277Z" level=info msg="StartContainer for \"1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090\" returns successfully" Jul 10 00:38:06.258059 systemd[1]: cri-containerd-1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090.scope: Deactivated successfully. 
Jul 10 00:38:06.279333 env[1212]: time="2025-07-10T00:38:06.279283646Z" level=info msg="shim disconnected" id=1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090 Jul 10 00:38:06.279333 env[1212]: time="2025-07-10T00:38:06.279332406Z" level=warning msg="cleaning up after shim disconnected" id=1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090 namespace=k8s.io Jul 10 00:38:06.279561 env[1212]: time="2025-07-10T00:38:06.279343006Z" level=info msg="cleaning up dead shim" Jul 10 00:38:06.286031 env[1212]: time="2025-07-10T00:38:06.285987459Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3384 runtime=io.containerd.runc.v2\n" Jul 10 00:38:06.899464 kubelet[1416]: E0710 00:38:06.899407 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:07.080955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090-rootfs.mount: Deactivated successfully. Jul 10 00:38:07.181348 kubelet[1416]: E0710 00:38:07.181313 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:07.189001 env[1212]: time="2025-07-10T00:38:07.188943039Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:38:07.208791 env[1212]: time="2025-07-10T00:38:07.208735555Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5\"" Jul 10 00:38:07.209558 env[1212]: time="2025-07-10T00:38:07.209426996Z" level=info msg="StartContainer for \"9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5\"" Jul 10 00:38:07.229972 systemd[1]: Started cri-containerd-9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5.scope. Jul 10 00:38:07.259983 systemd[1]: cri-containerd-9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5.scope: Deactivated successfully. 
Jul 10 00:38:07.261297 env[1212]: time="2025-07-10T00:38:07.261149171Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice/cri-containerd-9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5.scope/memory.events\": no such file or directory" Jul 10 00:38:07.263123 env[1212]: time="2025-07-10T00:38:07.263051254Z" level=info msg="StartContainer for \"9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5\" returns successfully" Jul 10 00:38:07.283912 env[1212]: time="2025-07-10T00:38:07.283864172Z" level=info msg="shim disconnected" id=9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5 Jul 10 00:38:07.283912 env[1212]: time="2025-07-10T00:38:07.283909972Z" level=warning msg="cleaning up after shim disconnected" id=9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5 namespace=k8s.io Jul 10 00:38:07.283912 env[1212]: time="2025-07-10T00:38:07.283919453Z" level=info msg="cleaning up dead shim" Jul 10 00:38:07.290942 env[1212]: time="2025-07-10T00:38:07.290888625Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:38:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3440 runtime=io.containerd.runc.v2\n" Jul 10 00:38:07.900525 kubelet[1416]: E0710 00:38:07.900476 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:08.019770 kubelet[1416]: E0710 00:38:08.019731 1416 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:38:08.081033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5-rootfs.mount: Deactivated successfully. Jul 10 00:38:08.185320 kubelet[1416]: E0710 00:38:08.185281 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:08.189342 env[1212]: time="2025-07-10T00:38:08.189294048Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:38:08.205697 env[1212]: time="2025-07-10T00:38:08.205616836Z" level=info msg="CreateContainer within sandbox \"f4d46824e90ae406e13ae99af527a6ca6b63d20122501c3fe878e960f39876a5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086\"" Jul 10 00:38:08.206333 env[1212]: time="2025-07-10T00:38:08.206179277Z" level=info msg="StartContainer for \"0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086\"" Jul 10 00:38:08.228755 systemd[1]: Started cri-containerd-0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086.scope. 
Jul 10 00:38:08.266100 env[1212]: time="2025-07-10T00:38:08.266040699Z" level=info msg="StartContainer for \"0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086\" returns successfully" Jul 10 00:38:08.498192 kubelet[1416]: W0710 00:38:08.498077 1416 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice/cri-containerd-d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7.scope WatchSource:0}: task d31c7b73b04c1825e11ef292bc90a5db9533a181559353c0be69b3c628a141d7 not found Jul 10 00:38:08.537592 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 10 00:38:08.901417 kubelet[1416]: E0710 00:38:08.901299 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:09.189730 kubelet[1416]: E0710 00:38:09.189697 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:09.204729 kubelet[1416]: I0710 00:38:09.204666 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mrn5f" podStartSLOduration=5.204650008 podStartE2EDuration="5.204650008s" podCreationTimestamp="2025-07-10 00:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:38:09.203951847 +0000 UTC m=+57.030055006" watchObservedRunningTime="2025-07-10 00:38:09.204650008 +0000 UTC m=+57.030753167" Jul 10 00:38:09.901641 kubelet[1416]: E0710 00:38:09.901594 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:10.551521 kubelet[1416]: E0710 00:38:10.551489 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:10.902657 kubelet[1416]: E0710 00:38:10.902514 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:11.446895 systemd-networkd[1041]: lxc_health: Link UP Jul 10 00:38:11.458781 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:38:11.458710 systemd-networkd[1041]: lxc_health: Gained carrier Jul 10 00:38:11.607879 kubelet[1416]: W0710 00:38:11.607826 1416 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice/cri-containerd-2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63.scope WatchSource:0}: task 2095b578a72d79b1b5c0a01fc6c170fd8bb368b9ae3beace040deb8d57689d63 not found Jul 10 00:38:11.903676 kubelet[1416]: E0710 00:38:11.903520 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:12.552027 kubelet[1416]: E0710 00:38:12.551992 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:12.607623 systemd[1]: run-containerd-runc-k8s.io-0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086-runc.udzocV.mount: Deactivated successfully. 
Jul 10 00:38:12.863812 kubelet[1416]: E0710 00:38:12.863682 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:12.879203 env[1212]: time="2025-07-10T00:38:12.879152053Z" level=info msg="StopPodSandbox for \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\"" Jul 10 00:38:12.879544 env[1212]: time="2025-07-10T00:38:12.879258733Z" level=info msg="TearDown network for sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" successfully" Jul 10 00:38:12.879544 env[1212]: time="2025-07-10T00:38:12.879294453Z" level=info msg="StopPodSandbox for \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" returns successfully" Jul 10 00:38:12.880006 env[1212]: time="2025-07-10T00:38:12.879970694Z" level=info msg="RemovePodSandbox for \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\"" Jul 10 00:38:12.880091 env[1212]: time="2025-07-10T00:38:12.880005534Z" level=info msg="Forcibly stopping sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\"" Jul 10 00:38:12.880091 env[1212]: time="2025-07-10T00:38:12.880070614Z" level=info msg="TearDown network for sandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" successfully" Jul 10 00:38:12.884012 env[1212]: time="2025-07-10T00:38:12.883973220Z" level=info msg="RemovePodSandbox \"c6a7004c6e52e4911423a62195dd42791de0b556553519302068f63742d9a3cc\" returns successfully" Jul 10 00:38:12.884503 env[1212]: time="2025-07-10T00:38:12.884472020Z" level=info msg="StopPodSandbox for \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\"" Jul 10 00:38:12.884731 env[1212]: time="2025-07-10T00:38:12.884687900Z" level=info msg="TearDown network for sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" successfully" Jul 10 00:38:12.884797 env[1212]: time="2025-07-10T00:38:12.884781381Z" level=info msg="StopPodSandbox for \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" returns successfully" Jul 10 00:38:12.885130 env[1212]: time="2025-07-10T00:38:12.885104021Z" level=info msg="RemovePodSandbox for \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\"" Jul 10 00:38:12.885194 env[1212]: time="2025-07-10T00:38:12.885133261Z" level=info msg="Forcibly stopping sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\"" Jul 10 00:38:12.885242 env[1212]: time="2025-07-10T00:38:12.885206021Z" level=info msg="TearDown network for sandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" successfully" Jul 10 00:38:12.887486 env[1212]: time="2025-07-10T00:38:12.887449744Z" level=info msg="RemovePodSandbox \"6fced54b5ee13662a67450dfa9e969de350865b05f67ba330fd7216ebaa7721e\" returns successfully" Jul 10 00:38:12.903792 kubelet[1416]: E0710 00:38:12.903752 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:13.105813 systemd-networkd[1041]: lxc_health: Gained IPv6LL Jul 10 00:38:13.197705 kubelet[1416]: E0710 00:38:13.197671 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:13.904312 kubelet[1416]: E0710 00:38:13.904270 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:14.199009 kubelet[1416]: 
E0710 00:38:14.198928 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:38:14.714468 kubelet[1416]: W0710 00:38:14.714409 1416 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice/cri-containerd-1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090.scope WatchSource:0}: task 1546620c54f65decf51ec579c2a5961b7c09b0bc95e43613260a7ddf9c861090 not found Jul 10 00:38:14.738398 systemd[1]: run-containerd-runc-k8s.io-0fa3dbf403c58919392953ebaae470b34fe90ea87ae7d6950b705e5dd58e9086-runc.X3jcMF.mount: Deactivated successfully. Jul 10 00:38:14.905151 kubelet[1416]: E0710 00:38:14.905085 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:15.906295 kubelet[1416]: E0710 00:38:15.906237 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:16.907325 kubelet[1416]: E0710 00:38:16.907190 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:38:17.821289 kubelet[1416]: W0710 00:38:17.821243 1416 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552e2b03_ac11_41b8_897e_613f1acd1f56.slice/cri-containerd-9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5.scope WatchSource:0}: task 9edd6d2d1fe487ffdc231d24a871efb31fb835eaeed9bc4baabf14a8a257cce5 not found Jul 10 00:38:17.908093 kubelet[1416]: E0710 00:38:17.908041 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"