Jul 11 00:40:24.757943 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 11 00:40:24.757965 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu Jul 10 23:22:35 -00 2025 Jul 11 00:40:24.757973 kernel: efi: EFI v2.70 by EDK II Jul 11 00:40:24.757979 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 11 00:40:24.757984 kernel: random: crng init done Jul 11 00:40:24.757996 kernel: ACPI: Early table checksum verification disabled Jul 11 00:40:24.758004 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 11 00:40:24.758012 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 11 00:40:24.758018 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758023 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758029 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758034 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758039 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758045 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758053 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758059 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758065 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 11 00:40:24.758071 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 11 00:40:24.758076 kernel: NUMA: Failed to initialise from firmware Jul 11 00:40:24.758082 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:40:24.758088 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Jul 11 00:40:24.758093 kernel: Zone ranges: Jul 11 00:40:24.758099 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:40:24.758106 kernel: DMA32 empty Jul 11 00:40:24.758112 kernel: Normal empty Jul 11 00:40:24.758117 kernel: Movable zone start for each node Jul 11 00:40:24.758123 kernel: Early memory node ranges Jul 11 00:40:24.758129 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 11 00:40:24.758134 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 11 00:40:24.758140 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 11 00:40:24.758145 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 11 00:40:24.758151 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 11 00:40:24.758158 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 11 00:40:24.758164 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 11 00:40:24.758169 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 11 00:40:24.758176 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 11 00:40:24.758182 kernel: psci: probing for conduit method from ACPI. Jul 11 00:40:24.758188 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 11 00:40:24.758193 kernel: psci: Using standard PSCI v0.2 function IDs Jul 11 00:40:24.758199 kernel: psci: Trusted OS migration not required Jul 11 00:40:24.758207 kernel: psci: SMC Calling Convention v1.1 Jul 11 00:40:24.758213 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 11 00:40:24.758221 kernel: ACPI: SRAT not present Jul 11 00:40:24.758227 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 11 00:40:24.758233 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 11 00:40:24.758239 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 11 00:40:24.758245 kernel: Detected PIPT I-cache on CPU0 Jul 11 00:40:24.758252 kernel: CPU features: detected: GIC system register CPU interface Jul 11 00:40:24.758258 kernel: CPU features: detected: Hardware dirty bit management Jul 11 00:40:24.758263 kernel: CPU features: detected: Spectre-v4 Jul 11 00:40:24.758270 kernel: CPU features: detected: Spectre-BHB Jul 11 00:40:24.758277 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 11 00:40:24.758283 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 11 00:40:24.758289 kernel: CPU features: detected: ARM erratum 1418040 Jul 11 00:40:24.758295 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 11 00:40:24.758301 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 11 00:40:24.758307 kernel: Policy zone: DMA Jul 11 00:40:24.758314 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c Jul 11 00:40:24.758320 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 11 00:40:24.758327 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 11 00:40:24.758333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 11 00:40:24.758339 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 11 00:40:24.758346 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Jul 11 00:40:24.758353 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 11 00:40:24.758358 kernel: trace event string verifier disabled Jul 11 00:40:24.758364 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 11 00:40:24.758371 kernel: rcu: RCU event tracing is enabled. Jul 11 00:40:24.758377 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 11 00:40:24.758383 kernel: Trampoline variant of Tasks RCU enabled. Jul 11 00:40:24.758390 kernel: Tracing variant of Tasks RCU enabled. Jul 11 00:40:24.758396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 11 00:40:24.758402 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 11 00:40:24.758408 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 11 00:40:24.758415 kernel: GICv3: 256 SPIs implemented Jul 11 00:40:24.758421 kernel: GICv3: 0 Extended SPIs implemented Jul 11 00:40:24.758427 kernel: GICv3: Distributor has no Range Selector support Jul 11 00:40:24.758439 kernel: Root IRQ handler: gic_handle_irq Jul 11 00:40:24.758463 kernel: GICv3: 16 PPIs implemented Jul 11 00:40:24.758470 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 11 00:40:24.758476 kernel: ACPI: SRAT not present Jul 11 00:40:24.758482 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 11 00:40:24.758488 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 11 00:40:24.758494 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 11 00:40:24.758500 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 11 00:40:24.758507 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 11 00:40:24.758515 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:40:24.758521 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 11 00:40:24.758527 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 11 00:40:24.758534 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 11 00:40:24.758540 kernel: arm-pv: using stolen time PV Jul 11 00:40:24.758546 kernel: Console: colour dummy device 80x25 Jul 11 00:40:24.758553 kernel: ACPI: Core revision 20210730 Jul 11 00:40:24.758559 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 11 00:40:24.758566 kernel: pid_max: default: 32768 minimum: 301 Jul 11 00:40:24.758572 kernel: LSM: Security Framework initializing Jul 11 00:40:24.758579 kernel: SELinux: Initializing. Jul 11 00:40:24.758585 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:40:24.758592 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 11 00:40:24.758598 kernel: rcu: Hierarchical SRCU implementation. Jul 11 00:40:24.758604 kernel: Platform MSI: ITS@0x8080000 domain created Jul 11 00:40:24.758610 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 11 00:40:24.758617 kernel: Remapping and enabling EFI services. Jul 11 00:40:24.758623 kernel: smp: Bringing up secondary CPUs ... 
Jul 11 00:40:24.758629 kernel: Detected PIPT I-cache on CPU1 Jul 11 00:40:24.758636 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 11 00:40:24.758643 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 11 00:40:24.758649 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:40:24.758655 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 11 00:40:24.758661 kernel: Detected PIPT I-cache on CPU2 Jul 11 00:40:24.758668 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 11 00:40:24.758674 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 11 00:40:24.758681 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:40:24.758687 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 11 00:40:24.758693 kernel: Detected PIPT I-cache on CPU3 Jul 11 00:40:24.758700 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 11 00:40:24.758707 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 11 00:40:24.758713 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 11 00:40:24.758719 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 11 00:40:24.758730 kernel: smp: Brought up 1 node, 4 CPUs Jul 11 00:40:24.758737 kernel: SMP: Total of 4 processors activated. Jul 11 00:40:24.758744 kernel: CPU features: detected: 32-bit EL0 Support Jul 11 00:40:24.758750 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 11 00:40:24.758757 kernel: CPU features: detected: Common not Private translations Jul 11 00:40:24.758764 kernel: CPU features: detected: CRC32 instructions Jul 11 00:40:24.758770 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 11 00:40:24.758777 kernel: CPU features: detected: LSE atomic instructions Jul 11 00:40:24.758784 kernel: CPU features: detected: Privileged Access Never Jul 11 00:40:24.758791 kernel: CPU features: detected: RAS Extension Support Jul 11 00:40:24.758797 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 11 00:40:24.758804 kernel: CPU: All CPU(s) started at EL1 Jul 11 00:40:24.758810 kernel: alternatives: patching kernel code Jul 11 00:40:24.758818 kernel: devtmpfs: initialized Jul 11 00:40:24.758824 kernel: KASLR enabled Jul 11 00:40:24.758831 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 11 00:40:24.758837 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 11 00:40:24.758844 kernel: pinctrl core: initialized pinctrl subsystem Jul 11 00:40:24.758851 kernel: SMBIOS 3.0.0 present. 
Jul 11 00:40:24.758857 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 11 00:40:24.758864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 11 00:40:24.758870 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 11 00:40:24.758878 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 11 00:40:24.758885 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 11 00:40:24.758892 kernel: audit: initializing netlink subsys (disabled) Jul 11 00:40:24.758899 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Jul 11 00:40:24.758905 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 11 00:40:24.758912 kernel: cpuidle: using governor menu Jul 11 00:40:24.758918 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 11 00:40:24.758925 kernel: ASID allocator initialised with 32768 entries Jul 11 00:40:24.758931 kernel: ACPI: bus type PCI registered Jul 11 00:40:24.758939 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 11 00:40:24.758946 kernel: Serial: AMBA PL011 UART driver Jul 11 00:40:24.758952 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 11 00:40:24.758959 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 11 00:40:24.758965 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 11 00:40:24.758972 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 11 00:40:24.758978 kernel: cryptd: max_cpu_qlen set to 1000 Jul 11 00:40:24.758985 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 11 00:40:24.758992 kernel: ACPI: Added _OSI(Module Device) Jul 11 00:40:24.758999 kernel: ACPI: Added _OSI(Processor Device) Jul 11 00:40:24.759006 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 11 00:40:24.759012 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 11 00:40:24.759019 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 11 00:40:24.759025 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 11 00:40:24.759032 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 11 00:40:24.759038 kernel: ACPI: Interpreter enabled Jul 11 00:40:24.759045 kernel: ACPI: Using GIC for interrupt routing Jul 11 00:40:24.759052 kernel: ACPI: MCFG table detected, 1 entries Jul 11 00:40:24.759059 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 11 00:40:24.759066 kernel: printk: console [ttyAMA0] enabled Jul 11 00:40:24.759072 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 11 00:40:24.759206 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 11 00:40:24.759272 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 11 00:40:24.759330 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 11 00:40:24.759387 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 11 00:40:24.759480 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 11 00:40:24.759492 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 11 00:40:24.759500 kernel: PCI host bridge to bus 0000:00 Jul 11 00:40:24.759573 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 11 00:40:24.759629 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:40:24.759683 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 11 00:40:24.759737 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 11 00:40:24.759815 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 11 00:40:24.759890 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 11 00:40:24.759954 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 11 00:40:24.760015 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 11 00:40:24.760075 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:40:24.760136 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 11 00:40:24.760196 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 11 00:40:24.760259 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 11 00:40:24.760314 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 11 00:40:24.760375 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 11 00:40:24.760428 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 11 00:40:24.760453 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 11 00:40:24.760461 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 11 00:40:24.760468 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 11 00:40:24.760477 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 11 00:40:24.760484 kernel: iommu: Default domain type: Translated Jul 11 00:40:24.760491 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 11 00:40:24.760497 kernel: vgaarb: loaded Jul 11 00:40:24.760504 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 11 00:40:24.760511 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 11 00:40:24.760518 kernel: PTP clock support registered Jul 11 00:40:24.760524 kernel: Registered efivars operations Jul 11 00:40:24.760531 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 11 00:40:24.760538 kernel: VFS: Disk quotas dquot_6.6.0 Jul 11 00:40:24.760546 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 11 00:40:24.760553 kernel: pnp: PnP ACPI init Jul 11 00:40:24.760631 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 11 00:40:24.760641 kernel: pnp: PnP ACPI: found 1 devices Jul 11 00:40:24.760648 kernel: NET: Registered PF_INET protocol family Jul 11 00:40:24.760655 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 11 00:40:24.760662 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 11 00:40:24.760668 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 11 00:40:24.760677 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 11 00:40:24.760684 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 11 00:40:24.760690 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 11 00:40:24.760697 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:40:24.760704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 11 00:40:24.760710 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 11 00:40:24.760717 kernel: PCI: CLS 0 bytes, default 64 Jul 11 00:40:24.760724 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 11 00:40:24.760730 kernel: kvm [1]: HYP mode not available Jul 11 00:40:24.760738 kernel: Initialise system trusted keyrings Jul 11 00:40:24.760745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 11 00:40:24.760751 kernel: Key type asymmetric registered Jul 11 00:40:24.760758 kernel: Asymmetric key parser 'x509' registered Jul 11 00:40:24.760764 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 11 00:40:24.760771 kernel: io scheduler mq-deadline registered Jul 11 00:40:24.760777 kernel: io scheduler kyber registered Jul 11 00:40:24.760784 kernel: io scheduler bfq registered Jul 11 00:40:24.760790 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 11 00:40:24.760799 kernel: ACPI: button: Power Button [PWRB] Jul 11 00:40:24.760806 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 11 00:40:24.760871 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 11 00:40:24.760880 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 11 00:40:24.760886 kernel: thunder_xcv, ver 1.0 Jul 11 00:40:24.760893 kernel: thunder_bgx, ver 1.0 Jul 11 00:40:24.760899 kernel: nicpf, ver 1.0 Jul 11 00:40:24.760906 kernel: nicvf, ver 1.0 Jul 11 00:40:24.760975 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 11 00:40:24.761034 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:40:24 UTC (1752194424) Jul 11 00:40:24.761043 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 11 00:40:24.761050 kernel: NET: Registered PF_INET6 protocol family Jul 11 00:40:24.761056 kernel: Segment Routing with IPv6 Jul 11 00:40:24.761063 kernel: In-situ OAM (IOAM) with IPv6 Jul 11 00:40:24.761070 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:40:24.761076 kernel: Key type dns_resolver registered Jul 11 00:40:24.761082 kernel: registered taskstats version 1 Jul 11 00:40:24.761090 kernel: Loading compiled-in X.509 certificates Jul 11 00:40:24.761097 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: e29f2f0310c2b60e0457f826e7476605fb3b6ab2' Jul 11 00:40:24.761104 kernel: Key type .fscrypt registered Jul 11 00:40:24.761110 kernel: Key type fscrypt-provisioning registered Jul 11 00:40:24.761117 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 11 00:40:24.761123 kernel: ima: Allocated hash algorithm: sha1 Jul 11 00:40:24.761130 kernel: ima: No architecture policies found Jul 11 00:40:24.761136 kernel: clk: Disabling unused clocks Jul 11 00:40:24.761143 kernel: Freeing unused kernel memory: 36416K Jul 11 00:40:24.761151 kernel: Run /init as init process Jul 11 00:40:24.761158 kernel: with arguments: Jul 11 00:40:24.761164 kernel: /init Jul 11 00:40:24.761170 kernel: with environment: Jul 11 00:40:24.761177 kernel: HOME=/ Jul 11 00:40:24.761183 kernel: TERM=linux Jul 11 00:40:24.761190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 11 00:40:24.761198 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:40:24.761207 systemd[1]: Detected virtualization kvm. Jul 11 00:40:24.761215 systemd[1]: Detected architecture arm64. Jul 11 00:40:24.761221 systemd[1]: Running in initrd. Jul 11 00:40:24.761228 systemd[1]: No hostname configured, using default hostname. Jul 11 00:40:24.761235 systemd[1]: Hostname set to . Jul 11 00:40:24.761243 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:40:24.761249 systemd[1]: Queued start job for default target initrd.target. Jul 11 00:40:24.761256 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:40:24.761264 systemd[1]: Reached target cryptsetup.target. Jul 11 00:40:24.761271 systemd[1]: Reached target paths.target. Jul 11 00:40:24.761278 systemd[1]: Reached target slices.target. Jul 11 00:40:24.761285 systemd[1]: Reached target swap.target. Jul 11 00:40:24.761292 systemd[1]: Reached target timers.target. Jul 11 00:40:24.761299 systemd[1]: Listening on iscsid.socket. Jul 11 00:40:24.761306 systemd[1]: Listening on iscsiuio.socket. Jul 11 00:40:24.761314 systemd[1]: Listening on systemd-journald-audit.socket. Jul 11 00:40:24.761321 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 11 00:40:24.761328 systemd[1]: Listening on systemd-journald.socket. Jul 11 00:40:24.761335 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:40:24.761342 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:40:24.761349 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:40:24.761356 systemd[1]: Reached target sockets.target. Jul 11 00:40:24.761363 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:40:24.761370 systemd[1]: Finished network-cleanup.service. Jul 11 00:40:24.761378 systemd[1]: Starting systemd-fsck-usr.service... Jul 11 00:40:24.761385 systemd[1]: Starting systemd-journald.service... Jul 11 00:40:24.761392 systemd[1]: Starting systemd-modules-load.service... Jul 11 00:40:24.761399 systemd[1]: Starting systemd-resolved.service... Jul 11 00:40:24.761406 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 11 00:40:24.761413 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:40:24.761420 systemd[1]: Finished systemd-fsck-usr.service. Jul 11 00:40:24.761428 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 11 00:40:24.761450 systemd[1]: Finished systemd-vconsole-setup.service. Jul 11 00:40:24.761462 systemd[1]: Starting dracut-cmdline-ask.service... Jul 11 00:40:24.761472 systemd-journald[290]: Journal started Jul 11 00:40:24.761515 systemd-journald[290]: Runtime Journal (/run/log/journal/192ff685ece74aed97d92963e0a78c29) is 6.0M, max 48.7M, 42.6M free. Jul 11 00:40:24.754061 systemd-modules-load[291]: Inserted module 'overlay' Jul 11 00:40:24.766061 systemd[1]: Started systemd-journald.service. Jul 11 00:40:24.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.768585 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 11 00:40:24.771204 kernel: audit: type=1130 audit(1752194424.767:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.774494 kernel: audit: type=1130 audit(1752194424.771:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.779475 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 11 00:40:24.779884 systemd[1]: Finished dracut-cmdline-ask.service. Jul 11 00:40:24.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.781611 systemd[1]: Starting dracut-cmdline.service... Jul 11 00:40:24.785996 kernel: audit: type=1130 audit(1752194424.780:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.786022 kernel: Bridge firewalling registered Jul 11 00:40:24.784691 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 11 00:40:24.786083 systemd-resolved[292]: Positive Trust Anchors: Jul 11 00:40:24.786090 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:40:24.786118 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:40:24.790281 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 11 00:40:24.791078 systemd[1]: Started systemd-resolved.service. 
Jul 11 00:40:24.800357 kernel: SCSI subsystem initialized Jul 11 00:40:24.800377 kernel: audit: type=1130 audit(1752194424.795:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.796216 systemd[1]: Reached target nss-lookup.target. Jul 11 00:40:24.804006 dracut-cmdline[309]: dracut-dracut-053 Jul 11 00:40:24.805643 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 11 00:40:24.805661 kernel: device-mapper: uevent: version 1.0.3 Jul 11 00:40:24.805670 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 11 00:40:24.806258 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8fd3ef416118421b63f30b3d02e5d4feea39e34704e91050cdad11fae31df42c Jul 11 00:40:24.811076 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 11 00:40:24.812173 systemd[1]: Finished systemd-modules-load.service. Jul 11 00:40:24.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.813882 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:40:24.817730 kernel: audit: type=1130 audit(1752194424.812:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.821790 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:40:24.825510 kernel: audit: type=1130 audit(1752194424.822:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.867468 kernel: Loading iSCSI transport class v2.0-870. Jul 11 00:40:24.879479 kernel: iscsi: registered transport (tcp) Jul 11 00:40:24.893809 kernel: iscsi: registered transport (qla4xxx) Jul 11 00:40:24.893849 kernel: QLogic iSCSI HBA Driver Jul 11 00:40:24.928131 systemd[1]: Finished dracut-cmdline.service. Jul 11 00:40:24.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.929822 systemd[1]: Starting dracut-pre-udev.service...
Jul 11 00:40:24.933037 kernel: audit: type=1130 audit(1752194424.928:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:24.975516 kernel: raid6: neonx8 gen() 13686 MB/s Jul 11 00:40:24.992490 kernel: raid6: neonx8 xor() 10710 MB/s Jul 11 00:40:25.009475 kernel: raid6: neonx4 gen() 13492 MB/s Jul 11 00:40:25.026480 kernel: raid6: neonx4 xor() 11277 MB/s Jul 11 00:40:25.043478 kernel: raid6: neonx2 gen() 13056 MB/s Jul 11 00:40:25.060470 kernel: raid6: neonx2 xor() 10272 MB/s Jul 11 00:40:25.077470 kernel: raid6: neonx1 gen() 10604 MB/s Jul 11 00:40:25.094475 kernel: raid6: neonx1 xor() 8805 MB/s Jul 11 00:40:25.111475 kernel: raid6: int64x8 gen() 6268 MB/s Jul 11 00:40:25.128477 kernel: raid6: int64x8 xor() 3544 MB/s Jul 11 00:40:25.145479 kernel: raid6: int64x4 gen() 7211 MB/s Jul 11 00:40:25.162472 kernel: raid6: int64x4 xor() 3848 MB/s Jul 11 00:40:25.179481 kernel: raid6: int64x2 gen() 6149 MB/s Jul 11 00:40:25.196480 kernel: raid6: int64x2 xor() 3317 MB/s Jul 11 00:40:25.213476 kernel: raid6: int64x1 gen() 5041 MB/s Jul 11 00:40:25.230542 kernel: raid6: int64x1 xor() 2643 MB/s Jul 11 00:40:25.230563 kernel: raid6: using algorithm neonx8 gen() 13686 MB/s Jul 11 00:40:25.230580 kernel: raid6: .... xor() 10710 MB/s, rmw enabled Jul 11 00:40:25.231589 kernel: raid6: using neon recovery algorithm Jul 11 00:40:25.245316 kernel: xor: measuring software checksum speed Jul 11 00:40:25.246634 kernel: 8regs : 1606 MB/sec Jul 11 00:40:25.246656 kernel: 32regs : 20723 MB/sec Jul 11 00:40:25.247841 kernel: arm64_neon : 27570 MB/sec Jul 11 00:40:25.247853 kernel: xor: using function: arm64_neon (27570 MB/sec) Jul 11 00:40:25.303498 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 11 00:40:25.314237 systemd[1]: Finished dracut-pre-udev.service. Jul 11 00:40:25.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:25.316149 systemd[1]: Starting systemd-udevd.service... Jul 11 00:40:25.320188 kernel: audit: type=1130 audit(1752194425.314:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:25.320209 kernel: audit: type=1334 audit(1752194425.315:10): prog-id=7 op=LOAD Jul 11 00:40:25.315000 audit: BPF prog-id=7 op=LOAD Jul 11 00:40:25.315000 audit: BPF prog-id=8 op=LOAD Jul 11 00:40:25.330122 systemd-udevd[492]: Using default interface naming scheme 'v252'. Jul 11 00:40:25.333363 systemd[1]: Started systemd-udevd.service. Jul 11 00:40:25.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:25.334955 systemd[1]: Starting dracut-pre-trigger.service... Jul 11 00:40:25.346838 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Jul 11 00:40:25.372004 systemd[1]: Finished dracut-pre-trigger.service. Jul 11 00:40:25.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:25.373554 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:40:25.406782 systemd[1]: Finished systemd-udev-trigger.service.
Jul 11 00:40:25.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:25.436495 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 11 00:40:25.442396 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 11 00:40:25.442411 kernel: GPT:9289727 != 19775487 Jul 11 00:40:25.442425 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 11 00:40:25.442461 kernel: GPT:9289727 != 19775487 Jul 11 00:40:25.442472 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 11 00:40:25.442480 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:40:25.458249 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 11 00:40:25.461683 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 11 00:40:25.464483 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (555) Jul 11 00:40:25.464581 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 11 00:40:25.465586 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 11 00:40:25.473478 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:40:25.475069 systemd[1]: Starting disk-uuid.service... Jul 11 00:40:25.480849 disk-uuid[563]: Primary Header is updated. Jul 11 00:40:25.480849 disk-uuid[563]: Secondary Entries is updated. Jul 11 00:40:25.480849 disk-uuid[563]: Secondary Header is updated. Jul 11 00:40:25.484483 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:40:26.495094 disk-uuid[564]: The operation has completed successfully. Jul 11 00:40:26.496168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 11 00:40:26.516891 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 11 00:40:26.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.516987 systemd[1]: Finished disk-uuid.service. Jul 11 00:40:26.521000 systemd[1]: Starting verity-setup.service... Jul 11 00:40:26.535473 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 11 00:40:26.556540 systemd[1]: Found device dev-mapper-usr.device. Jul 11 00:40:26.558651 systemd[1]: Mounting sysusr-usr.mount... Jul 11 00:40:26.560769 systemd[1]: Finished verity-setup.service. Jul 11 00:40:26.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.609042 systemd[1]: Mounted sysusr-usr.mount. Jul 11 00:40:26.610309 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 11 00:40:26.609863 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 11 00:40:26.610513 systemd[1]: Starting ignition-setup.service... Jul 11 00:40:26.612770 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 11 00:40:26.620073 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:40:26.620118 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:40:26.620128 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:40:26.628806 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 11 00:40:26.634895 systemd[1]: Finished ignition-setup.service. Jul 11 00:40:26.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.636352 systemd[1]: Starting ignition-fetch-offline.service... Jul 11 00:40:26.692780 systemd[1]: Finished parse-ip-for-networkd.service. Jul 11 00:40:26.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.693000 audit: BPF prog-id=9 op=LOAD Jul 11 00:40:26.694912 systemd[1]: Starting systemd-networkd.service... Jul 11 00:40:26.722598 systemd-networkd[741]: lo: Link UP Jul 11 00:40:26.723502 systemd-networkd[741]: lo: Gained carrier Jul 11 00:40:26.723944 systemd-networkd[741]: Enumeration completed Jul 11 00:40:26.724133 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:40:26.724534 systemd[1]: Started systemd-networkd.service. Jul 11 00:40:26.725671 systemd-networkd[741]: eth0: Link UP Jul 11 00:40:26.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.725675 systemd-networkd[741]: eth0: Gained carrier Jul 11 00:40:26.726060 systemd[1]: Reached target network.target. Jul 11 00:40:26.728729 systemd[1]: Starting iscsiuio.service... Jul 11 00:40:26.735785 ignition[658]: Ignition 2.14.0 Jul 11 00:40:26.735794 ignition[658]: Stage: fetch-offline Jul 11 00:40:26.735836 ignition[658]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:26.735844 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:26.735987 ignition[658]: parsed url from cmdline: "" Jul 11 00:40:26.738559 systemd[1]: Started iscsiuio.service. Jul 11 00:40:26.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.735990 ignition[658]: no config URL provided Jul 11 00:40:26.740848 systemd[1]: Starting iscsid.service... Jul 11 00:40:26.735995 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Jul 11 00:40:26.736002 ignition[658]: no config at "/usr/lib/ignition/user.ign" Jul 11 00:40:26.736020 ignition[658]: op(1): [started] loading QEMU firmware config module Jul 11 00:40:26.746060 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:40:26.746060 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. 
Jul 11 00:40:26.746060 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 11 00:40:26.746060 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 11 00:40:26.746060 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 11 00:40:26.746060 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 11 00:40:26.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.736024 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 11 00:40:26.746524 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:40:26.756535 ignition[658]: op(1): [finished] loading QEMU firmware config module Jul 11 00:40:26.748643 systemd[1]: Started iscsid.service. Jul 11 00:40:26.753625 systemd[1]: Starting dracut-initqueue.service... Jul 11 00:40:26.763689 systemd[1]: Finished dracut-initqueue.service. Jul 11 00:40:26.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.764667 systemd[1]: Reached target remote-fs-pre.target. Jul 11 00:40:26.766105 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:40:26.767721 systemd[1]: Reached target remote-fs.target. Jul 11 00:40:26.770053 ignition[658]: parsing config with SHA512: 6f62ba9d9625e50d4b0cdbde2b3545b3825f16be6c0e2ea4c70c1ba0a821c6e2314ef953faad7b8ccb99c367190dc7b33ffbf51d26b6a42015add3fb4ac3486c Jul 11 00:40:26.770075 systemd[1]: Starting dracut-pre-mount.service... Jul 11 00:40:26.778593 systemd[1]: Finished dracut-pre-mount.service. Jul 11 00:40:26.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.778989 unknown[658]: fetched base config from "system" Jul 11 00:40:26.779395 ignition[658]: fetch-offline: fetch-offline passed Jul 11 00:40:26.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.778997 unknown[658]: fetched user config from "qemu" Jul 11 00:40:26.779497 ignition[658]: Ignition finished successfully Jul 11 00:40:26.780233 systemd[1]: Finished ignition-fetch-offline.service. Jul 11 00:40:26.781748 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 11 00:40:26.782495 systemd[1]: Starting ignition-kargs.service... Jul 11 00:40:26.790723 ignition[762]: Ignition 2.14.0 Jul 11 00:40:26.790733 ignition[762]: Stage: kargs Jul 11 00:40:26.790815 ignition[762]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:26.790824 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:26.792808 systemd[1]: Finished ignition-kargs.service. Jul 11 00:40:26.791402 ignition[762]: kargs: kargs passed Jul 11 00:40:26.794771 systemd[1]: Starting ignition-disks.service... 
Jul 11 00:40:26.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.791463 ignition[762]: Ignition finished successfully Jul 11 00:40:26.800999 ignition[768]: Ignition 2.14.0 Jul 11 00:40:26.801008 ignition[768]: Stage: disks Jul 11 00:40:26.801090 ignition[768]: no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:26.801100 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:26.803542 ignition[768]: disks: disks passed Jul 11 00:40:26.803582 ignition[768]: Ignition finished successfully Jul 11 00:40:26.805417 systemd[1]: Finished ignition-disks.service. Jul 11 00:40:26.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.806889 systemd[1]: Reached target initrd-root-device.target. Jul 11 00:40:26.808181 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:40:26.809531 systemd[1]: Reached target local-fs.target. Jul 11 00:40:26.810758 systemd[1]: Reached target sysinit.target. Jul 11 00:40:26.811976 systemd[1]: Reached target basic.target. Jul 11 00:40:26.813884 systemd[1]: Starting systemd-fsck-root.service... Jul 11 00:40:26.824051 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 11 00:40:26.828123 systemd[1]: Finished systemd-fsck-root.service. Jul 11 00:40:26.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.829877 systemd[1]: Mounting sysroot.mount... Jul 11 00:40:26.835467 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 11 00:40:26.835601 systemd[1]: Mounted sysroot.mount. Jul 11 00:40:26.836295 systemd[1]: Reached target initrd-root-fs.target. Jul 11 00:40:26.838525 systemd[1]: Mounting sysroot-usr.mount... Jul 11 00:40:26.839350 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 11 00:40:26.839387 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 11 00:40:26.839408 systemd[1]: Reached target ignition-diskful.target. Jul 11 00:40:26.841273 systemd[1]: Mounted sysroot-usr.mount. Jul 11 00:40:26.843327 systemd[1]: Starting initrd-setup-root.service... Jul 11 00:40:26.847309 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Jul 11 00:40:26.850855 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Jul 11 00:40:26.854858 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Jul 11 00:40:26.858689 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Jul 11 00:40:26.883331 systemd[1]: Finished initrd-setup-root.service. Jul 11 00:40:26.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.884856 systemd[1]: Starting ignition-mount.service... Jul 11 00:40:26.886130 systemd[1]: Starting sysroot-boot.service... Jul 11 00:40:26.890285 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 11 00:40:26.897781 ignition[829]: INFO : Ignition 2.14.0 Jul 11 00:40:26.897781 ignition[829]: INFO : Stage: mount Jul 11 00:40:26.899957 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:26.899957 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:26.899957 ignition[829]: INFO : mount: mount passed Jul 11 00:40:26.899957 ignition[829]: INFO : Ignition finished successfully Jul 11 00:40:26.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:26.900620 systemd[1]: Finished ignition-mount.service. Jul 11 00:40:26.905891 systemd[1]: Finished sysroot-boot.service. Jul 11 00:40:26.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:27.567345 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 11 00:40:27.574083 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (837) Jul 11 00:40:27.574119 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 11 00:40:27.574129 kernel: BTRFS info (device vda6): using free space tree Jul 11 00:40:27.575463 kernel: BTRFS info (device vda6): has skinny extents Jul 11 00:40:27.578057 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 11 00:40:27.579651 systemd[1]: Starting ignition-files.service... Jul 11 00:40:27.593623 ignition[857]: INFO : Ignition 2.14.0 Jul 11 00:40:27.593623 ignition[857]: INFO : Stage: files Jul 11 00:40:27.595296 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:27.595296 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:27.595296 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Jul 11 00:40:27.598819 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 11 00:40:27.598819 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 11 00:40:27.601693 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 11 00:40:27.601693 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 11 00:40:27.601693 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 11 00:40:27.601274 unknown[857]: wrote ssh authorized keys file for user: core Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 11 00:40:27.606951 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 11 00:40:27.980673 systemd-networkd[741]: eth0: Gained IPv6LL Jul 11 00:40:28.201985 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 11 00:40:28.706439 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 11 00:40:28.706439 ignition[857]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 11 00:40:28.710106 ignition[857]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:40:28.714266 ignition[857]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:40:28.714266 ignition[857]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 11 00:40:28.714266 ignition[857]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:40:28.714266 ignition[857]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:40:28.757070 ignition[857]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:40:28.758606 ignition[857]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:40:28.758606 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:40:28.758606 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:40:28.758606 ignition[857]: INFO : files: files passed Jul 11 00:40:28.758606 ignition[857]: INFO : Ignition finished successfully Jul 11 00:40:28.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.760653 systemd[1]: Finished ignition-files.service. Jul 11 00:40:28.769824 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 11 00:40:28.771214 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 11 00:40:28.771875 systemd[1]: Starting ignition-quench.service... Jul 11 00:40:28.775194 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 11 00:40:28.775269 systemd[1]: Finished ignition-quench.service.
Jul 11 00:40:28.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.777706 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 11 00:40:28.779057 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 11 00:40:28.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.779027 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 11 00:40:28.780098 systemd[1]: Reached target ignition-complete.target. Jul 11 00:40:28.783907 systemd[1]: Starting initrd-parse-etc.service... Jul 11 00:40:28.796138 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 11 00:40:28.796227 systemd[1]: Finished initrd-parse-etc.service. Jul 11 00:40:28.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.797902 systemd[1]: Reached target initrd-fs.target. Jul 11 00:40:28.799186 systemd[1]: Reached target initrd.target. Jul 11 00:40:28.800474 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 11 00:40:28.801148 systemd[1]: Starting dracut-pre-pivot.service... Jul 11 00:40:28.814820 systemd[1]: Finished dracut-pre-pivot.service. Jul 11 00:40:28.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.816248 systemd[1]: Starting initrd-cleanup.service... Jul 11 00:40:28.823674 systemd[1]: Stopped target nss-lookup.target. Jul 11 00:40:28.824570 systemd[1]: Stopped target remote-cryptsetup.target. Jul 11 00:40:28.825951 systemd[1]: Stopped target timers.target. Jul 11 00:40:28.827256 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 11 00:40:28.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.827357 systemd[1]: Stopped dracut-pre-pivot.service. Jul 11 00:40:28.828742 systemd[1]: Stopped target initrd.target. Jul 11 00:40:28.830206 systemd[1]: Stopped target basic.target. Jul 11 00:40:28.831479 systemd[1]: Stopped target ignition-complete.target. Jul 11 00:40:28.832862 systemd[1]: Stopped target ignition-diskful.target. Jul 11 00:40:28.834112 systemd[1]: Stopped target initrd-root-device.target. Jul 11 00:40:28.835533 systemd[1]: Stopped target remote-fs.target. Jul 11 00:40:28.836870 systemd[1]: Stopped target remote-fs-pre.target. Jul 11 00:40:28.838267 systemd[1]: Stopped target sysinit.target. Jul 11 00:40:28.839501 systemd[1]: Stopped target local-fs.target. Jul 11 00:40:28.840771 systemd[1]: Stopped target local-fs-pre.target.
Jul 11 00:40:28.842016 systemd[1]: Stopped target swap.target. Jul 11 00:40:28.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.843178 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 11 00:40:28.843280 systemd[1]: Stopped dracut-pre-mount.service. Jul 11 00:40:28.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.844583 systemd[1]: Stopped target cryptsetup.target. Jul 11 00:40:28.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.845745 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:40:28.845842 systemd[1]: Stopped dracut-initqueue.service. Jul 11 00:40:28.847299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:40:28.847391 systemd[1]: Stopped ignition-fetch-offline.service. Jul 11 00:40:28.848729 systemd[1]: Stopped target paths.target. Jul 11 00:40:28.849903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:40:28.853493 systemd[1]: Stopped systemd-ask-password-console.path. Jul 11 00:40:28.854927 systemd[1]: Stopped target slices.target. Jul 11 00:40:28.856398 systemd[1]: Stopped target sockets.target. Jul 11 00:40:28.857982 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:40:28.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.858051 systemd[1]: Closed iscsid.socket. Jul 11 00:40:28.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.859266 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:40:28.859359 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 11 00:40:28.860804 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:40:28.860894 systemd[1]: Stopped ignition-files.service. Jul 11 00:40:28.862948 systemd[1]: Stopping ignition-mount.service... Jul 11 00:40:28.864666 systemd[1]: Stopping iscsiuio.service... Jul 11 00:40:28.867086 systemd[1]: Stopping sysroot-boot.service... Jul 11 00:40:28.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.869649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:40:28.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.869774 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 11 00:40:28.874810 ignition[898]: INFO : Ignition 2.14.0 Jul 11 00:40:28.874810 ignition[898]: INFO : Stage: umount Jul 11 00:40:28.874810 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:40:28.874810 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:40:28.874810 ignition[898]: INFO : umount: umount passed Jul 11 00:40:28.874810 ignition[898]: INFO : Ignition finished successfully Jul 11 00:40:28.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.871307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:40:28.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.871393 systemd[1]: Stopped dracut-pre-trigger.service. Jul 11 00:40:28.874334 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 11 00:40:28.874456 systemd[1]: Stopped iscsiuio.service. Jul 11 00:40:28.876350 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:40:28.876441 systemd[1]: Stopped ignition-mount.service. Jul 11 00:40:28.877906 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:40:28.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.877983 systemd[1]: Finished initrd-cleanup.service. Jul 11 00:40:28.880206 systemd[1]: Stopped target network.target. Jul 11 00:40:28.882570 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:40:28.882603 systemd[1]: Closed iscsiuio.socket. Jul 11 00:40:28.885816 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:40:28.885862 systemd[1]: Stopped ignition-disks.service. Jul 11 00:40:28.887286 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:40:28.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.887326 systemd[1]: Stopped ignition-kargs.service. 
Jul 11 00:40:28.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.888183 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:40:28.888228 systemd[1]: Stopped ignition-setup.service. Jul 11 00:40:28.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.890108 systemd[1]: Stopping systemd-networkd.service... Jul 11 00:40:28.891155 systemd[1]: Stopping systemd-resolved.service... Jul 11 00:40:28.893196 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:40:28.893503 systemd-networkd[741]: eth0: DHCPv6 lease lost Jul 11 00:40:28.922000 audit: BPF prog-id=9 op=UNLOAD Jul 11 00:40:28.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.896741 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:40:28.896829 systemd[1]: Stopped systemd-networkd.service. Jul 11 00:40:28.899868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:40:28.899896 systemd[1]: Closed systemd-networkd.socket. Jul 11 00:40:28.903989 systemd[1]: Stopping network-cleanup.service... Jul 11 00:40:28.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.907323 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:40:28.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.907382 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 11 00:40:28.930000 audit: BPF prog-id=6 op=UNLOAD Jul 11 00:40:28.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.910581 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:40:28.910624 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:40:28.912894 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:40:28.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.912936 systemd[1]: Stopped systemd-modules-load.service. Jul 11 00:40:28.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.918687 systemd[1]: Stopping systemd-udevd.service... Jul 11 00:40:28.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.921236 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Jul 11 00:40:28.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.921724 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:40:28.921823 systemd[1]: Stopped systemd-resolved.service. Jul 11 00:40:28.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.926711 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:40:28.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.926806 systemd[1]: Stopped sysroot-boot.service. Jul 11 00:40:28.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.928144 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:40:28.928254 systemd[1]: Stopped systemd-udevd.service. Jul 11 00:40:28.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:28.929586 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:40:28.929670 systemd[1]: Stopped network-cleanup.service. Jul 11 00:40:28.931108 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:40:28.931142 systemd[1]: Closed systemd-udevd-control.socket. Jul 11 00:40:28.932351 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:40:28.932383 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 11 00:40:28.934746 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:40:28.934789 systemd[1]: Stopped dracut-pre-udev.service. Jul 11 00:40:28.936546 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:40:28.936590 systemd[1]: Stopped dracut-cmdline.service. Jul 11 00:40:28.937965 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:40:28.938005 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 11 00:40:28.939287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:40:28.939329 systemd[1]: Stopped initrd-setup-root.service. Jul 11 00:40:28.941233 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 11 00:40:28.942201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:40:28.942262 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 11 00:40:28.944408 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:40:28.944480 systemd[1]: Stopped kmod-static-nodes.service. Jul 11 00:40:28.945276 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:40:28.945314 systemd[1]: Stopped systemd-vconsole-setup.service. 
Jul 11 00:40:28.947519 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 11 00:40:28.947972 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:40:28.972950 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Jul 11 00:40:28.972983 iscsid[748]: iscsid shutting down. Jul 11 00:40:28.948047 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 11 00:40:28.949337 systemd[1]: Reached target initrd-switch-root.target. Jul 11 00:40:28.951564 systemd[1]: Starting initrd-switch-root.service... Jul 11 00:40:28.957497 systemd[1]: Switching root. Jul 11 00:40:28.976748 systemd-journald[290]: Journal stopped Jul 11 00:40:31.017670 kernel: SELinux: Class mctp_socket not defined in policy. Jul 11 00:40:31.017731 kernel: SELinux: Class anon_inode not defined in policy. Jul 11 00:40:31.017746 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 11 00:40:31.017756 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:40:31.017766 kernel: SELinux: policy capability open_perms=1 Jul 11 00:40:31.017775 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:40:31.017790 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:40:31.017800 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:40:31.017813 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:40:31.017822 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:40:31.017832 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:40:31.017852 kernel: kauditd_printk_skb: 63 callbacks suppressed Jul 11 00:40:31.017863 kernel: audit: type=1403 audit(1752194429.037:74): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:40:31.017875 systemd[1]: Successfully loaded SELinux policy in 35.150ms. Jul 11 00:40:31.017894 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.533ms. Jul 11 00:40:31.017906 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 11 00:40:31.017917 systemd[1]: Detected virtualization kvm. Jul 11 00:40:31.017927 systemd[1]: Detected architecture arm64. Jul 11 00:40:31.017939 systemd[1]: Detected first boot. Jul 11 00:40:31.017949 systemd[1]: Initializing machine ID from VM UUID. 
Jul 11 00:40:31.017960 kernel: audit: type=1400 audit(1752194429.104:75): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:40:31.017970 kernel: audit: type=1400 audit(1752194429.104:76): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:40:31.017980 kernel: audit: type=1334 audit(1752194429.106:77): prog-id=10 op=LOAD Jul 11 00:40:31.017989 kernel: audit: type=1334 audit(1752194429.106:78): prog-id=10 op=UNLOAD Jul 11 00:40:31.017998 kernel: audit: type=1334 audit(1752194429.108:79): prog-id=11 op=LOAD Jul 11 00:40:31.018008 kernel: audit: type=1334 audit(1752194429.108:80): prog-id=11 op=UNLOAD Jul 11 00:40:31.018020 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 11 00:40:31.018031 kernel: audit: type=1400 audit(1752194429.149:81): avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 11 00:40:31.018042 kernel: audit: type=1300 audit(1752194429.149:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:40:31.018055 kernel: audit: type=1327 audit(1752194429.149:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 11 00:40:31.018068 systemd[1]: Populated /etc with preset unit settings. Jul 11 00:40:31.018079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:40:31.018089 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:40:31.018102 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:40:31.018113 systemd[1]: iscsid.service: Deactivated successfully. Jul 11 00:40:31.018123 systemd[1]: Stopped iscsid.service. Jul 11 00:40:31.018134 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:40:31.018144 systemd[1]: Stopped initrd-switch-root.service. Jul 11 00:40:31.018155 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:40:31.018165 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 11 00:40:31.018177 systemd[1]: Created slice system-addon\x2drun.slice. Jul 11 00:40:31.018187 systemd[1]: Created slice system-getty.slice. Jul 11 00:40:31.018197 systemd[1]: Created slice system-modprobe.slice. Jul 11 00:40:31.018208 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 11 00:40:31.018218 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 11 00:40:31.018229 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 11 00:40:31.018239 systemd[1]: Created slice user.slice. Jul 11 00:40:31.018250 systemd[1]: Started systemd-ask-password-console.path. Jul 11 00:40:31.018261 systemd[1]: Started systemd-ask-password-wall.path. Jul 11 00:40:31.018271 systemd[1]: Set up automount boot.automount. Jul 11 00:40:31.018284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 11 00:40:31.018336 systemd[1]: Stopped target initrd-switch-root.target. Jul 11 00:40:31.018348 systemd[1]: Stopped target initrd-fs.target. Jul 11 00:40:31.018360 systemd[1]: Stopped target initrd-root-fs.target. Jul 11 00:40:31.018371 systemd[1]: Reached target integritysetup.target. Jul 11 00:40:31.018381 systemd[1]: Reached target remote-cryptsetup.target. Jul 11 00:40:31.018392 systemd[1]: Reached target remote-fs.target. Jul 11 00:40:31.018409 systemd[1]: Reached target slices.target. Jul 11 00:40:31.018422 systemd[1]: Reached target swap.target. Jul 11 00:40:31.018432 systemd[1]: Reached target torcx.target. Jul 11 00:40:31.018449 systemd[1]: Reached target veritysetup.target. Jul 11 00:40:31.018472 systemd[1]: Listening on systemd-coredump.socket. Jul 11 00:40:31.018483 systemd[1]: Listening on systemd-initctl.socket. Jul 11 00:40:31.018497 systemd[1]: Listening on systemd-networkd.socket. Jul 11 00:40:31.018507 systemd[1]: Listening on systemd-udevd-control.socket. Jul 11 00:40:31.018518 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 11 00:40:31.018528 systemd[1]: Listening on systemd-userdbd.socket. Jul 11 00:40:31.018538 systemd[1]: Mounting dev-hugepages.mount... Jul 11 00:40:31.018548 systemd[1]: Mounting dev-mqueue.mount... Jul 11 00:40:31.018559 systemd[1]: Mounting media.mount... Jul 11 00:40:31.018570 systemd[1]: Mounting sys-kernel-debug.mount... Jul 11 00:40:31.018583 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 11 00:40:31.018595 systemd[1]: Mounting tmp.mount... Jul 11 00:40:31.018605 systemd[1]: Starting flatcar-tmpfiles.service... Jul 11 00:40:31.018616 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.018626 systemd[1]: Starting kmod-static-nodes.service... Jul 11 00:40:31.018636 systemd[1]: Starting modprobe@configfs.service... Jul 11 00:40:31.018647 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:40:31.018657 systemd[1]: Starting modprobe@drm.service... Jul 11 00:40:31.018667 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:40:31.018677 systemd[1]: Starting modprobe@fuse.service... Jul 11 00:40:31.018689 systemd[1]: Starting modprobe@loop.service... Jul 11 00:40:31.018700 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:40:31.018711 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:40:31.018721 systemd[1]: Stopped systemd-fsck-root.service. Jul 11 00:40:31.018732 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:40:31.018742 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:40:31.018753 systemd[1]: Stopped systemd-journald.service. Jul 11 00:40:31.018763 systemd[1]: Starting systemd-journald.service... Jul 11 00:40:31.018774 systemd[1]: Starting systemd-modules-load.service... 
Jul 11 00:40:31.018786 kernel: fuse: init (API version 7.34) Jul 11 00:40:31.018798 systemd[1]: Starting systemd-network-generator.service... Jul 11 00:40:31.018812 kernel: loop: module loaded Jul 11 00:40:31.018822 systemd[1]: Starting systemd-remount-fs.service... Jul 11 00:40:31.018833 systemd[1]: Starting systemd-udev-trigger.service... Jul 11 00:40:31.018843 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:40:31.018853 systemd[1]: Stopped verity-setup.service. Jul 11 00:40:31.018864 systemd[1]: Mounted dev-hugepages.mount. Jul 11 00:40:31.018874 systemd[1]: Mounted dev-mqueue.mount. Jul 11 00:40:31.018884 systemd[1]: Mounted media.mount. Jul 11 00:40:31.018894 systemd[1]: Mounted sys-kernel-debug.mount. Jul 11 00:40:31.018905 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 11 00:40:31.018917 systemd[1]: Mounted tmp.mount. Jul 11 00:40:31.018927 systemd[1]: Finished kmod-static-nodes.service. Jul 11 00:40:31.018938 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:40:31.018948 systemd[1]: Finished modprobe@configfs.service. Jul 11 00:40:31.018959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:40:31.018969 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:40:31.018981 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:40:31.018991 systemd[1]: Finished modprobe@drm.service. Jul 11 00:40:31.019002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:40:31.019012 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:40:31.019023 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:40:31.019035 systemd[1]: Finished modprobe@fuse.service. Jul 11 00:40:31.019046 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:40:31.019061 systemd-journald[998]: Journal started Jul 11 00:40:31.019106 systemd-journald[998]: Runtime Journal (/run/log/journal/192ff685ece74aed97d92963e0a78c29) is 6.0M, max 48.7M, 42.6M free. 
Jul 11 00:40:29.037000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:40:29.104000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:40:29.104000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 11 00:40:29.106000 audit: BPF prog-id=10 op=LOAD Jul 11 00:40:29.106000 audit: BPF prog-id=10 op=UNLOAD Jul 11 00:40:29.108000 audit: BPF prog-id=11 op=LOAD Jul 11 00:40:29.108000 audit: BPF prog-id=11 op=UNLOAD Jul 11 00:40:29.149000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 11 00:40:29.149000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:40:29.149000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 11 00:40:29.150000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 11 00:40:29.150000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:40:29.150000 audit: CWD cwd="/" Jul 11 00:40:29.150000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:40:29.150000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 11 00:40:29.150000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 11 00:40:30.864000 audit: BPF prog-id=12 op=LOAD Jul 11 00:40:30.864000 audit: BPF prog-id=3 op=UNLOAD Jul 11 00:40:30.864000 audit: BPF prog-id=13 op=LOAD Jul 11 00:40:30.864000 audit: BPF prog-id=14 op=LOAD Jul 11 00:40:30.864000 audit: BPF prog-id=4 op=UNLOAD Jul 11 00:40:30.864000 audit: BPF prog-id=5 op=UNLOAD Jul 11 00:40:30.865000 audit: BPF prog-id=15 op=LOAD Jul 11 00:40:30.865000 audit: BPF prog-id=12 op=UNLOAD Jul 11 00:40:30.865000 
audit: BPF prog-id=16 op=LOAD Jul 11 00:40:30.865000 audit: BPF prog-id=17 op=LOAD Jul 11 00:40:30.865000 audit: BPF prog-id=13 op=UNLOAD Jul 11 00:40:30.865000 audit: BPF prog-id=14 op=UNLOAD Jul 11 00:40:30.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.881000 audit: BPF prog-id=15 op=UNLOAD Jul 11 00:40:30.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.973000 audit: BPF prog-id=18 op=LOAD Jul 11 00:40:30.973000 audit: BPF prog-id=19 op=LOAD Jul 11 00:40:30.973000 audit: BPF prog-id=20 op=LOAD Jul 11 00:40:30.973000 audit: BPF prog-id=16 op=UNLOAD Jul 11 00:40:30.973000 audit: BPF prog-id=17 op=UNLOAD Jul 11 00:40:30.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.002000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 11 00:40:31.006000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 11 00:40:31.006000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdb251c50 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:40:31.006000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 11 00:40:31.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.020723 systemd[1]: Finished modprobe@loop.service. Jul 11 00:40:31.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:29.147837 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:40:30.863567 systemd[1]: Queued start job for default target multi-user.target. 
Jul 11 00:40:29.148135 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 11 00:40:30.863580 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 11 00:40:29.148154 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 11 00:40:30.867168 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:40:29.148185 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 11 00:40:29.148194 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 11 00:40:29.148224 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 11 00:40:29.148236 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 11 00:40:29.148436 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 11 00:40:29.148491 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 11 00:40:29.148504 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 11 00:40:29.149208 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 11 00:40:29.149244 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 11 00:40:29.149261 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 11 00:40:29.149274 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 11 00:40:29.149292 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 11 00:40:29.149304 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:29Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 11 00:40:30.610991 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug 
msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:40:30.611244 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:40:31.022533 systemd[1]: Started systemd-journald.service. Jul 11 00:40:31.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:30.611341 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:40:30.611537 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 11 00:40:30.611593 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 11 00:40:30.611648 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-07-11T00:40:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 11 00:40:31.023338 systemd[1]: Finished flatcar-tmpfiles.service. Jul 11 00:40:31.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.024512 systemd[1]: Finished systemd-modules-load.service. Jul 11 00:40:31.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.025676 systemd[1]: Finished systemd-network-generator.service. Jul 11 00:40:31.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.026861 systemd[1]: Finished systemd-remount-fs.service. Jul 11 00:40:31.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.028198 systemd[1]: Reached target network-pre.target. Jul 11 00:40:31.030420 systemd[1]: Mounting sys-fs-fuse-connections.mount... 
Jul 11 00:40:31.032430 systemd[1]: Mounting sys-kernel-config.mount... Jul 11 00:40:31.033303 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:40:31.034885 systemd[1]: Starting systemd-hwdb-update.service... Jul 11 00:40:31.036899 systemd[1]: Starting systemd-journal-flush.service... Jul 11 00:40:31.037881 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:40:31.038977 systemd[1]: Starting systemd-random-seed.service... Jul 11 00:40:31.040018 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.041135 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:40:31.043132 systemd[1]: Starting systemd-sysusers.service... Jul 11 00:40:31.044519 systemd-journald[998]: Time spent on flushing to /var/log/journal/192ff685ece74aed97d92963e0a78c29 is 12.049ms for 978 entries. Jul 11 00:40:31.044519 systemd-journald[998]: System Journal (/var/log/journal/192ff685ece74aed97d92963e0a78c29) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:40:31.072682 systemd-journald[998]: Received client request to flush runtime journal. Jul 11 00:40:31.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.046674 systemd[1]: Finished systemd-udev-trigger.service. Jul 11 00:40:31.047892 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 11 00:40:31.073132 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 11 00:40:31.048936 systemd[1]: Mounted sys-kernel-config.mount. Jul 11 00:40:31.050937 systemd[1]: Starting systemd-udev-settle.service... Jul 11 00:40:31.052196 systemd[1]: Finished systemd-random-seed.service. Jul 11 00:40:31.053261 systemd[1]: Reached target first-boot-complete.target. Jul 11 00:40:31.060890 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:40:31.071176 systemd[1]: Finished systemd-sysusers.service. Jul 11 00:40:31.073314 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 11 00:40:31.075812 systemd[1]: Finished systemd-journal-flush.service. Jul 11 00:40:31.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.092249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Jul 11 00:40:31.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.404000 systemd[1]: Finished systemd-hwdb-update.service. Jul 11 00:40:31.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.405000 audit: BPF prog-id=21 op=LOAD Jul 11 00:40:31.405000 audit: BPF prog-id=22 op=LOAD Jul 11 00:40:31.405000 audit: BPF prog-id=7 op=UNLOAD Jul 11 00:40:31.405000 audit: BPF prog-id=8 op=UNLOAD Jul 11 00:40:31.406510 systemd[1]: Starting systemd-udevd.service... Jul 11 00:40:31.440956 systemd-udevd[1037]: Using default interface naming scheme 'v252'. Jul 11 00:40:31.453438 systemd[1]: Started systemd-udevd.service. Jul 11 00:40:31.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.455000 audit: BPF prog-id=23 op=LOAD Jul 11 00:40:31.460192 systemd[1]: Starting systemd-networkd.service... Jul 11 00:40:31.469000 audit: BPF prog-id=24 op=LOAD Jul 11 00:40:31.469000 audit: BPF prog-id=25 op=LOAD Jul 11 00:40:31.469000 audit: BPF prog-id=26 op=LOAD Jul 11 00:40:31.470589 systemd[1]: Starting systemd-userdbd.service... Jul 11 00:40:31.491098 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 11 00:40:31.504331 systemd[1]: Started systemd-userdbd.service. Jul 11 00:40:31.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.516035 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 11 00:40:31.579994 systemd[1]: Finished systemd-udev-settle.service. Jul 11 00:40:31.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.582361 systemd-networkd[1043]: lo: Link UP Jul 11 00:40:31.582376 systemd-networkd[1043]: lo: Gained carrier Jul 11 00:40:31.582423 systemd[1]: Starting lvm2-activation-early.service... Jul 11 00:40:31.582811 systemd-networkd[1043]: Enumeration completed Jul 11 00:40:31.582923 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:40:31.583489 systemd[1]: Started systemd-networkd.service. Jul 11 00:40:31.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.584818 systemd-networkd[1043]: eth0: Link UP Jul 11 00:40:31.584827 systemd-networkd[1043]: eth0: Gained carrier Jul 11 00:40:31.596638 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:40:31.602687 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.127/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:40:31.622306 systemd[1]: Finished lvm2-activation-early.service. 
Jul 11 00:40:31.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.623381 systemd[1]: Reached target cryptsetup.target. Jul 11 00:40:31.625418 systemd[1]: Starting lvm2-activation.service... Jul 11 00:40:31.629041 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:40:31.661368 systemd[1]: Finished lvm2-activation.service. Jul 11 00:40:31.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.662386 systemd[1]: Reached target local-fs-pre.target. Jul 11 00:40:31.663289 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:40:31.663324 systemd[1]: Reached target local-fs.target. Jul 11 00:40:31.664162 systemd[1]: Reached target machines.target. Jul 11 00:40:31.666236 systemd[1]: Starting ldconfig.service... Jul 11 00:40:31.667393 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.667467 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:31.668749 systemd[1]: Starting systemd-boot-update.service... Jul 11 00:40:31.670878 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 11 00:40:31.673237 systemd[1]: Starting systemd-machine-id-commit.service... Jul 11 00:40:31.675963 systemd[1]: Starting systemd-sysext.service... Jul 11 00:40:31.677224 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) Jul 11 00:40:31.678488 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 11 00:40:31.690871 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 11 00:40:31.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.696972 systemd[1]: Unmounting usr-share-oem.mount... Jul 11 00:40:31.703902 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 11 00:40:31.704092 systemd[1]: Unmounted usr-share-oem.mount. Jul 11 00:40:31.759523 kernel: loop0: detected capacity change from 0 to 207008 Jul 11 00:40:31.760931 systemd[1]: Finished systemd-machine-id-commit.service. Jul 11 00:40:31.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.770460 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:40:31.777415 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Jul 11 00:40:31.777415 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Jul 11 00:40:31.780701 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 11 00:40:31.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.785478 kernel: loop1: detected capacity change from 0 to 207008 Jul 11 00:40:31.791210 (sd-sysext)[1085]: Using extensions 'kubernetes'. Jul 11 00:40:31.791704 (sd-sysext)[1085]: Merged extensions into '/usr'. Jul 11 00:40:31.818088 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.820587 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:40:31.823459 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:40:31.825747 systemd[1]: Starting modprobe@loop.service... Jul 11 00:40:31.826633 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.826832 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:31.828114 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:40:31.828296 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:40:31.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.829737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:40:31.829853 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:40:31.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.831260 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:40:31.831374 systemd[1]: Finished modprobe@loop.service. Jul 11 00:40:31.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.832705 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:40:31.832847 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:40:31.863272 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:40:31.866713 systemd[1]: Finished ldconfig.service. 
Jul 11 00:40:31.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:31.992213 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:40:31.994122 systemd[1]: Mounting boot.mount... Jul 11 00:40:31.995980 systemd[1]: Mounting usr-share-oem.mount... Jul 11 00:40:32.002074 systemd[1]: Mounted boot.mount. Jul 11 00:40:32.003087 systemd[1]: Mounted usr-share-oem.mount. Jul 11 00:40:32.005293 systemd[1]: Finished systemd-sysext.service. Jul 11 00:40:32.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.007906 systemd[1]: Starting ensure-sysext.service... Jul 11 00:40:32.010024 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 11 00:40:32.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.013006 systemd[1]: Finished systemd-boot-update.service. Jul 11 00:40:32.015613 systemd[1]: Reloading. Jul 11 00:40:32.019162 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 11 00:40:32.019941 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:40:32.021292 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:40:32.050362 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-11T00:40:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:40:32.050390 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-11T00:40:32Z" level=info msg="torcx already run" Jul 11 00:40:32.111746 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:40:32.111769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:40:32.127537 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 11 00:40:32.169000 audit: BPF prog-id=27 op=LOAD Jul 11 00:40:32.169000 audit: BPF prog-id=23 op=UNLOAD Jul 11 00:40:32.169000 audit: BPF prog-id=28 op=LOAD Jul 11 00:40:32.169000 audit: BPF prog-id=29 op=LOAD Jul 11 00:40:32.169000 audit: BPF prog-id=21 op=UNLOAD Jul 11 00:40:32.169000 audit: BPF prog-id=22 op=UNLOAD Jul 11 00:40:32.169000 audit: BPF prog-id=30 op=LOAD Jul 11 00:40:32.170000 audit: BPF prog-id=24 op=UNLOAD Jul 11 00:40:32.170000 audit: BPF prog-id=31 op=LOAD Jul 11 00:40:32.170000 audit: BPF prog-id=32 op=LOAD Jul 11 00:40:32.170000 audit: BPF prog-id=25 op=UNLOAD Jul 11 00:40:32.170000 audit: BPF prog-id=26 op=UNLOAD Jul 11 00:40:32.171000 audit: BPF prog-id=33 op=LOAD Jul 11 00:40:32.171000 audit: BPF prog-id=18 op=UNLOAD Jul 11 00:40:32.171000 audit: BPF prog-id=34 op=LOAD Jul 11 00:40:32.171000 audit: BPF prog-id=35 op=LOAD Jul 11 00:40:32.171000 audit: BPF prog-id=19 op=UNLOAD Jul 11 00:40:32.171000 audit: BPF prog-id=20 op=UNLOAD Jul 11 00:40:32.174404 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 11 00:40:32.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.178650 systemd[1]: Starting audit-rules.service... Jul 11 00:40:32.180416 systemd[1]: Starting clean-ca-certificates.service... Jul 11 00:40:32.182876 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 11 00:40:32.185000 audit: BPF prog-id=36 op=LOAD Jul 11 00:40:32.190261 systemd[1]: Starting systemd-resolved.service... Jul 11 00:40:32.193000 audit: BPF prog-id=37 op=LOAD Jul 11 00:40:32.194537 systemd[1]: Starting systemd-timesyncd.service... Jul 11 00:40:32.197019 systemd[1]: Starting systemd-update-utmp.service... Jul 11 00:40:32.198751 systemd[1]: Finished clean-ca-certificates.service. Jul 11 00:40:32.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.201000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.206858 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.208198 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:40:32.210324 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:40:32.212339 systemd[1]: Starting modprobe@loop.service... Jul 11 00:40:32.213187 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.213372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:32.213541 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:40:32.214764 systemd[1]: Finished systemd-journal-catalog-update.service. 
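The reload re-attaches systemd's BPF programs, which appears to be what the paired "audit: BPF prog-id=N op=LOAD/UNLOAD" records above show. A rough sketch (record format assumed from the lines above) that tallies loads and unloads from such audit output:

#!/usr/bin/env python3
# Sketch: count BPF LOAD/UNLOAD audit records like the ones emitted above
# during the systemd reload ("audit: BPF prog-id=27 op=LOAD", ...).
import re
import sys
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def tally(lines):
    ops = Counter()
    for line in lines:
        match = BPF_RE.search(line)
        if match:
            ops[match.group(2)] += 1
    return ops

if __name__ == "__main__":
    counts = tally(sys.stdin)
    print(f"loaded={counts['LOAD']} unloaded={counts['UNLOAD']}")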
Jul 11 00:40:32.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.216213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:40:32.216328 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:40:32.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.217651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:40:32.217766 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:40:32.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.219192 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:40:32.219309 systemd[1]: Finished modprobe@loop.service. Jul 11 00:40:32.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.220000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 11 00:40:32.222311 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.223913 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:40:32.225796 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:40:32.227759 systemd[1]: Starting modprobe@loop.service... Jul 11 00:40:32.228706 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.228820 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:32.233646 systemd[1]: Starting systemd-update-done.service... 
Jul 11 00:40:32.234000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 11 00:40:32.234000 audit[1178]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc494c2c0 a2=420 a3=0 items=0 ppid=1152 pid=1178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 11 00:40:32.234000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 11 00:40:32.234618 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:40:32.235036 augenrules[1178]: No rules Jul 11 00:40:32.235803 systemd[1]: Finished audit-rules.service. Jul 11 00:40:32.237202 systemd[1]: Finished systemd-update-utmp.service. Jul 11 00:40:32.238472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:40:32.238595 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:40:32.239801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:40:32.239921 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:40:32.241107 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:40:32.241216 systemd[1]: Finished modprobe@loop.service. Jul 11 00:40:32.242363 systemd[1]: Finished systemd-update-done.service. Jul 11 00:40:32.246613 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.247823 systemd[1]: Starting modprobe@dm_mod.service... Jul 11 00:40:32.249853 systemd[1]: Starting modprobe@drm.service... Jul 11 00:40:32.251898 systemd[1]: Starting modprobe@efi_pstore.service... Jul 11 00:40:32.253892 systemd[1]: Starting modprobe@loop.service... Jul 11 00:40:32.662199 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:40:32.662230 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.662266 systemd-timesyncd[1162]: Initial clock synchronization to Fri 2025-07-11 00:40:32.662104 UTC. Jul 11 00:40:32.662358 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:32.663483 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 11 00:40:32.664592 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:40:32.665503 systemd[1]: Started systemd-timesyncd.service. Jul 11 00:40:32.667144 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:40:32.667270 systemd[1]: Finished modprobe@dm_mod.service. Jul 11 00:40:32.668165 systemd-resolved[1156]: Positive Trust Anchors: Jul 11 00:40:32.668447 systemd-resolved[1156]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:40:32.668535 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 11 00:40:32.668571 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:40:32.668688 systemd[1]: Finished modprobe@drm.service. Jul 11 00:40:32.670081 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:40:32.670201 systemd[1]: Finished modprobe@efi_pstore.service. Jul 11 00:40:32.671610 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:40:32.671737 systemd[1]: Finished modprobe@loop.service. Jul 11 00:40:32.673351 systemd[1]: Reached target time-set.target. Jul 11 00:40:32.674301 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:40:32.674344 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.674701 systemd[1]: Finished ensure-sysext.service. Jul 11 00:40:32.682226 systemd-resolved[1156]: Defaulting to hostname 'linux'. Jul 11 00:40:32.687244 systemd[1]: Started systemd-resolved.service. Jul 11 00:40:32.688122 systemd[1]: Reached target network.target. Jul 11 00:40:32.688894 systemd[1]: Reached target nss-lookup.target. Jul 11 00:40:32.689671 systemd[1]: Reached target sysinit.target. Jul 11 00:40:32.690589 systemd[1]: Started motdgen.path. Jul 11 00:40:32.691331 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 11 00:40:32.692576 systemd[1]: Started logrotate.timer. Jul 11 00:40:32.693448 systemd[1]: Started mdadm.timer. Jul 11 00:40:32.694198 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 11 00:40:32.695062 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:40:32.695100 systemd[1]: Reached target paths.target. Jul 11 00:40:32.695818 systemd[1]: Reached target timers.target. Jul 11 00:40:32.696953 systemd[1]: Listening on dbus.socket. Jul 11 00:40:32.698740 systemd[1]: Starting docker.socket... Jul 11 00:40:32.701941 systemd[1]: Listening on sshd.socket. Jul 11 00:40:32.702759 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:32.703213 systemd[1]: Listening on docker.socket. Jul 11 00:40:32.704044 systemd[1]: Reached target sockets.target. Jul 11 00:40:32.704771 systemd[1]: Reached target basic.target. Jul 11 00:40:32.705608 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.705642 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 11 00:40:32.706639 systemd[1]: Starting containerd.service... Jul 11 00:40:32.708347 systemd[1]: Starting dbus.service... Jul 11 00:40:32.710165 systemd[1]: Starting enable-oem-cloudinit.service... Jul 11 00:40:32.712189 systemd[1]: Starting extend-filesystems.service... 
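The audit PROCTITLE record a few lines up (for auditctl applying /etc/audit/audit.rules) encodes the command line as hex with NUL-separated arguments. A short sketch that decodes such a value; the example string is copied from that record:

#!/usr/bin/env python3
# Sketch: decode an audit PROCTITLE value (hex-encoded, NUL-separated argv),
# like the one logged above for auditctl.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode()

if __name__ == "__main__":
    value = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    print(decode_proctitle(value))  # -> /sbin/auditctl -R /etc/audit/audit.rules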
Jul 11 00:40:32.713088 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 11 00:40:32.714436 systemd[1]: Starting motdgen.service... Jul 11 00:40:32.720029 jq[1195]: false Jul 11 00:40:32.717810 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 11 00:40:32.721427 systemd[1]: Starting sshd-keygen.service... Jul 11 00:40:32.724173 systemd[1]: Starting systemd-logind.service... Jul 11 00:40:32.724975 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 11 00:40:32.725050 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:40:32.725858 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:40:32.726514 systemd[1]: Starting update-engine.service... Jul 11 00:40:32.728310 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 11 00:40:32.730631 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:40:32.730826 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 11 00:40:32.731560 jq[1209]: true Jul 11 00:40:32.732639 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:40:32.732818 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 11 00:40:32.740128 extend-filesystems[1196]: Found loop1 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda1 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda2 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda3 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found usr Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda4 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda6 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda7 Jul 11 00:40:32.740128 extend-filesystems[1196]: Found vda9 Jul 11 00:40:32.740128 extend-filesystems[1196]: Checking size of /dev/vda9 Jul 11 00:40:32.750557 jq[1214]: true Jul 11 00:40:32.740588 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:40:32.740740 systemd[1]: Finished motdgen.service. Jul 11 00:40:32.764281 extend-filesystems[1196]: Resized partition /dev/vda9 Jul 11 00:40:32.764606 dbus-daemon[1194]: [system] SELinux support is enabled Jul 11 00:40:32.765221 systemd[1]: Started dbus.service. Jul 11 00:40:32.769469 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:40:32.769497 systemd[1]: Reached target system-config.target. Jul 11 00:40:32.770514 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:40:32.770534 systemd[1]: Reached target user-config.target. Jul 11 00:40:32.778071 extend-filesystems[1240]: resize2fs 1.46.5 (30-Dec-2021) Jul 11 00:40:32.798860 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:40:32.825568 systemd-logind[1204]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:40:32.826221 systemd-logind[1204]: New seat seat0. 
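The kernel line above reports the root filesystem on /dev/vda9 being resized online from 553472 to 1864699 blocks (4 KiB blocks, as the resize2fs output that follows confirms). A trivial check of what that means in bytes:

#!/usr/bin/env python3
# Sketch: convert the ext4 block counts from the resize messages above into
# sizes, assuming the 4 KiB block size reported by resize2fs.
BLOCK_SIZE = 4096
OLD_BLOCKS = 553_472
NEW_BLOCKS = 1_864_699

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(OLD_BLOCKS):.2f} GiB, after: {gib(NEW_BLOCKS):.2f} GiB")
# roughly 2.11 GiB -> 7.11 GiB after extend-filesystems grows /dev/vda9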
Jul 11 00:40:32.828480 systemd[1]: Started systemd-logind.service. Jul 11 00:40:32.831928 update_engine[1207]: I0711 00:40:32.831579 1207 main.cc:92] Flatcar Update Engine starting Jul 11 00:40:32.834907 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:40:32.835881 systemd[1]: Started update-engine.service. Jul 11 00:40:32.852977 update_engine[1207]: I0711 00:40:32.835933 1207 update_check_scheduler.cc:74] Next update check in 8m14s Jul 11 00:40:32.838775 systemd[1]: Started locksmithd.service. Jul 11 00:40:32.853486 extend-filesystems[1240]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:40:32.853486 extend-filesystems[1240]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:40:32.853486 extend-filesystems[1240]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:40:32.857708 bash[1241]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:40:32.857897 env[1215]: time="2025-07-11T00:40:32.856378468Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 11 00:40:32.858323 extend-filesystems[1196]: Resized filesystem in /dev/vda9 Jul 11 00:40:32.859413 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 11 00:40:32.859601 systemd[1]: Finished extend-filesystems.service. Jul 11 00:40:32.861663 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 11 00:40:32.875084 env[1215]: time="2025-07-11T00:40:32.875041068Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:40:32.875324 env[1215]: time="2025-07-11T00:40:32.875209468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876654 env[1215]: time="2025-07-11T00:40:32.876558268Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876654 env[1215]: time="2025-07-11T00:40:32.876593788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876872 env[1215]: time="2025-07-11T00:40:32.876828948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876872 env[1215]: time="2025-07-11T00:40:32.876867588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876947 env[1215]: time="2025-07-11T00:40:32.876882548Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 11 00:40:32.876947 env[1215]: time="2025-07-11T00:40:32.876892188Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:40:32.876985 env[1215]: time="2025-07-11T00:40:32.876972348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:40:32.877315 env[1215]: time="2025-07-11T00:40:32.877292428Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:40:32.877583 env[1215]: time="2025-07-11T00:40:32.877472348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:40:32.877583 env[1215]: time="2025-07-11T00:40:32.877495868Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:40:32.877583 env[1215]: time="2025-07-11T00:40:32.877557748Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 11 00:40:32.877583 env[1215]: time="2025-07-11T00:40:32.877569068Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:40:32.882950 env[1215]: time="2025-07-11T00:40:32.882914388Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:40:32.882950 env[1215]: time="2025-07-11T00:40:32.882952508Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:40:32.883035 env[1215]: time="2025-07-11T00:40:32.882968068Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:40:32.883035 env[1215]: time="2025-07-11T00:40:32.883007588Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883035 env[1215]: time="2025-07-11T00:40:32.883022828Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883116 env[1215]: time="2025-07-11T00:40:32.883038228Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883116 env[1215]: time="2025-07-11T00:40:32.883051548Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883456 env[1215]: time="2025-07-11T00:40:32.883435148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883502 env[1215]: time="2025-07-11T00:40:32.883462188Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883502 env[1215]: time="2025-07-11T00:40:32.883477348Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883616 env[1215]: time="2025-07-11T00:40:32.883526188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:40:32.883616 env[1215]: time="2025-07-11T00:40:32.883555268Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 11 00:40:32.883802 env[1215]: time="2025-07-11T00:40:32.883690468Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:40:32.883802 env[1215]: time="2025-07-11T00:40:32.883780908Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:40:32.884483 env[1215]: time="2025-07-11T00:40:32.884401388Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 11 00:40:32.884483 env[1215]: time="2025-07-11T00:40:32.884443188Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.884483 env[1215]: time="2025-07-11T00:40:32.884460828Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884636108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884656828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884669708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884684108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884696308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884723828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884736028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884749908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884763868Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884926508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884943348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884956468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884984028Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:40:32.890204 env[1215]: time="2025-07-11T00:40:32.884998508Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 11 00:40:32.889173 systemd[1]: Started containerd.service. Jul 11 00:40:32.890602 env[1215]: time="2025-07-11T00:40:32.885009428Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:40:32.890602 env[1215]: time="2025-07-11T00:40:32.885026148Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 11 00:40:32.890602 env[1215]: time="2025-07-11T00:40:32.885059148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.885251788Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.885306268Z" level=info msg="Connect containerd service" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.885336388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886129828Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886517548Z" level=info msg="Start subscribing containerd event" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886558628Z" level=info msg="Start recovering state" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886616388Z" level=info msg="Start event monitor" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886634908Z" level=info msg="Start snapshots syncer" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886645268Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.886652548Z" level=info msg="Start streaming server" Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.887226188Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.887260948Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:40:32.890658 env[1215]: time="2025-07-11T00:40:32.887300588Z" level=info msg="containerd successfully booted in 0.041905s" Jul 11 00:40:32.913465 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:40:33.828016 systemd-networkd[1043]: eth0: Gained IPv6LL Jul 11 00:40:33.829878 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 11 00:40:33.831086 systemd[1]: Reached target network-online.target. Jul 11 00:40:33.833391 systemd[1]: Starting kubelet.service... Jul 11 00:40:34.410416 systemd[1]: Started kubelet.service. Jul 11 00:40:34.821753 kubelet[1258]: E0711 00:40:34.821493 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:40:34.823735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:40:34.823888 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:40:35.467307 sshd_keygen[1216]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:40:35.484378 systemd[1]: Finished sshd-keygen.service. Jul 11 00:40:35.486669 systemd[1]: Starting issuegen.service... Jul 11 00:40:35.491210 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:40:35.491370 systemd[1]: Finished issuegen.service. Jul 11 00:40:35.493555 systemd[1]: Starting systemd-user-sessions.service... Jul 11 00:40:35.499349 systemd[1]: Finished systemd-user-sessions.service. Jul 11 00:40:35.501691 systemd[1]: Started getty@tty1.service. Jul 11 00:40:35.503739 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 11 00:40:35.504854 systemd[1]: Reached target getty.target. Jul 11 00:40:35.505656 systemd[1]: Reached target multi-user.target. Jul 11 00:40:35.507659 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 11 00:40:35.514386 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 11 00:40:35.514554 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 11 00:40:35.515652 systemd[1]: Startup finished in 605ms (kernel) + 4.411s (initrd) + 6.109s (userspace) = 11.126s. Jul 11 00:40:37.845093 systemd[1]: Created slice system-sshd.slice. Jul 11 00:40:37.846246 systemd[1]: Started sshd@0-10.0.0.127:22-10.0.0.1:40322.service. Jul 11 00:40:37.892039 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:40:37.894260 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:37.903001 systemd-logind[1204]: New session 1 of user core. Jul 11 00:40:37.903863 systemd[1]: Created slice user-500.slice. Jul 11 00:40:37.904948 systemd[1]: Starting user-runtime-dir@500.service... Jul 11 00:40:37.912503 systemd[1]: Finished user-runtime-dir@500.service. Jul 11 00:40:37.913691 systemd[1]: Starting user@500.service... Jul 11 00:40:37.916205 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:37.972321 systemd[1284]: Queued start job for default target default.target. 
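systemd reports "Startup finished in 605ms (kernel) + 4.411s (initrd) + 6.109s (userspace) = 11.126s" above. Each displayed phase is rounded to the millisecond, which is most likely why summing the printed parts gives 11.125 s rather than the 11.126 s total taken from the raw timestamps. A quick check:

#!/usr/bin/env python3
# Sketch: sum the rounded phase durations from the "Startup finished" line
# above; the 1 ms gap to the printed total is plausibly per-phase rounding.
phases_s = {"kernel": 0.605, "initrd": 4.411, "userspace": 6.109}
total = sum(phases_s.values())
print(f"sum of rounded phases: {total:.3f} s (journal reports 11.126 s)")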
Jul 11 00:40:37.972743 systemd[1284]: Reached target paths.target. Jul 11 00:40:37.972785 systemd[1284]: Reached target sockets.target. Jul 11 00:40:37.972797 systemd[1284]: Reached target timers.target. Jul 11 00:40:37.972807 systemd[1284]: Reached target basic.target. Jul 11 00:40:37.972870 systemd[1284]: Reached target default.target. Jul 11 00:40:37.972897 systemd[1284]: Startup finished in 51ms. Jul 11 00:40:37.972927 systemd[1]: Started user@500.service. Jul 11 00:40:37.973758 systemd[1]: Started session-1.scope. Jul 11 00:40:38.023950 systemd[1]: Started sshd@1-10.0.0.127:22-10.0.0.1:40326.service. Jul 11 00:40:38.058403 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:40:38.059591 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:38.063864 systemd-logind[1204]: New session 2 of user core. Jul 11 00:40:38.064604 systemd[1]: Started session-2.scope. Jul 11 00:40:38.119752 sshd[1293]: pam_unix(sshd:session): session closed for user core Jul 11 00:40:38.123506 systemd[1]: sshd@1-10.0.0.127:22-10.0.0.1:40326.service: Deactivated successfully. Jul 11 00:40:38.124128 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:40:38.124617 systemd-logind[1204]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:40:38.125718 systemd[1]: Started sshd@2-10.0.0.127:22-10.0.0.1:40330.service. Jul 11 00:40:38.126439 systemd-logind[1204]: Removed session 2. Jul 11 00:40:38.160244 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 40330 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:40:38.161524 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:38.165565 systemd[1]: Started session-3.scope. Jul 11 00:40:38.166152 systemd-logind[1204]: New session 3 of user core. Jul 11 00:40:38.215534 sshd[1299]: pam_unix(sshd:session): session closed for user core Jul 11 00:40:38.218624 systemd[1]: sshd@2-10.0.0.127:22-10.0.0.1:40330.service: Deactivated successfully. Jul 11 00:40:38.219229 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:40:38.219719 systemd-logind[1204]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:40:38.220823 systemd[1]: Started sshd@3-10.0.0.127:22-10.0.0.1:40344.service. Jul 11 00:40:38.221530 systemd-logind[1204]: Removed session 3. Jul 11 00:40:38.255767 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 40344 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:40:38.257130 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:38.260564 systemd-logind[1204]: New session 4 of user core. Jul 11 00:40:38.261373 systemd[1]: Started session-4.scope. Jul 11 00:40:38.315081 sshd[1305]: pam_unix(sshd:session): session closed for user core Jul 11 00:40:38.318359 systemd[1]: Started sshd@4-10.0.0.127:22-10.0.0.1:40360.service. Jul 11 00:40:38.318930 systemd[1]: sshd@3-10.0.0.127:22-10.0.0.1:40344.service: Deactivated successfully. Jul 11 00:40:38.319642 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:40:38.320212 systemd-logind[1204]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:40:38.320932 systemd-logind[1204]: Removed session 4. 
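The lines above show a series of short SSH sessions for user core from 10.0.0.1, each logged as "Accepted publickey ... port NNNN ssh2: RSA SHA256:...". A small sketch (regex matched to that exact message format) for pulling out who connected from where:

#!/usr/bin/env python3
# Sketch: extract user, source address/port and key fingerprint from sshd
# "Accepted publickey" lines in the format shown above.
import re
import sys

ACCEPT_RE = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

for line in sys.stdin:
    match = ACCEPT_RE.search(line)
    if match:
        info = match.groupdict()
        print(f"{info['user']}@{info['addr']}:{info['port']} key={info['fingerprint']}")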
Jul 11 00:40:38.354281 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 40360 ssh2: RSA SHA256:kAw98lsrYCxXKwzslBlKMy3//X0GU8J77htUo5WbMYE Jul 11 00:40:38.355612 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:40:38.358996 systemd-logind[1204]: New session 5 of user core. Jul 11 00:40:38.359791 systemd[1]: Started session-5.scope. Jul 11 00:40:38.419930 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:40:38.420151 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 11 00:40:38.432546 systemd[1]: Starting coreos-metadata.service... Jul 11 00:40:38.438959 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:40:38.439221 systemd[1]: Finished coreos-metadata.service. Jul 11 00:40:38.920623 systemd[1]: Stopped kubelet.service. Jul 11 00:40:38.923806 systemd[1]: Starting kubelet.service... Jul 11 00:40:38.945047 systemd[1]: Reloading. Jul 11 00:40:38.992985 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2025-07-11T00:40:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 11 00:40:38.993015 /usr/lib/systemd/system-generators/torcx-generator[1374]: time="2025-07-11T00:40:38Z" level=info msg="torcx already run" Jul 11 00:40:39.098743 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 11 00:40:39.098771 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 11 00:40:39.113969 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:40:39.176086 systemd[1]: Started kubelet.service. Jul 11 00:40:39.177289 systemd[1]: Stopping kubelet.service... Jul 11 00:40:39.177519 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:40:39.177674 systemd[1]: Stopped kubelet.service. Jul 11 00:40:39.179170 systemd[1]: Starting kubelet.service... Jul 11 00:40:39.273157 systemd[1]: Started kubelet.service. Jul 11 00:40:39.309404 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:40:39.309692 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 00:40:39.309736 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
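The kubelet deprecation warnings above say that --container-runtime-endpoint and --volume-plugin-dir should move into the file passed via --config, the same /var/lib/kubelet/config.yaml whose absence made the first kubelet start fail earlier. As a rough, hypothetical illustration only: the field names below are taken from the upstream KubeletConfiguration type, and the values are placeholders that happen to match paths seen elsewhere in this log (the containerd socket and the Flexvolume plugin directory), not this node's actual configuration.

#!/usr/bin/env python3
# Sketch: print a minimal, illustrative KubeletConfiguration covering the two
# deprecated flags called out above. Placeholder values, not the node's real
# config file.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces --volume-plugin-dir
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

print(MINIMAL_KUBELET_CONFIG, end="")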
Jul 11 00:40:39.309925 kubelet[1419]: I0711 00:40:39.309890 1419 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:40:40.316589 kubelet[1419]: I0711 00:40:40.316541 1419 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 00:40:40.316589 kubelet[1419]: I0711 00:40:40.316578 1419 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:40:40.316919 kubelet[1419]: I0711 00:40:40.316830 1419 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 00:40:40.382326 kubelet[1419]: I0711 00:40:40.382298 1419 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:40:40.390003 kubelet[1419]: E0711 00:40:40.389973 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:40:40.390156 kubelet[1419]: I0711 00:40:40.390141 1419 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:40:40.392874 kubelet[1419]: I0711 00:40:40.392852 1419 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:40:40.393635 kubelet[1419]: I0711 00:40:40.393595 1419 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:40:40.393908 kubelet[1419]: I0711 00:40:40.393708 1419 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.127","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:40:40.394089 kubelet[1419]: I0711 00:40:40.394076 1419 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:40:40.394160 kubelet[1419]: I0711 00:40:40.394149 1419 container_manager_linux.go:304] "Creating device plugin manager" Jul 
11 00:40:40.394380 kubelet[1419]: I0711 00:40:40.394366 1419 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:40:40.396953 kubelet[1419]: I0711 00:40:40.396928 1419 kubelet.go:446] "Attempting to sync node with API server" Jul 11 00:40:40.397047 kubelet[1419]: I0711 00:40:40.397035 1419 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:40:40.397109 kubelet[1419]: I0711 00:40:40.397100 1419 kubelet.go:352] "Adding apiserver pod source" Jul 11 00:40:40.397176 kubelet[1419]: I0711 00:40:40.397165 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:40:40.397251 kubelet[1419]: E0711 00:40:40.397200 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:40.397285 kubelet[1419]: E0711 00:40:40.397247 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:40.402879 kubelet[1419]: I0711 00:40:40.402855 1419 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 11 00:40:40.403503 kubelet[1419]: I0711 00:40:40.403472 1419 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:40:40.403626 kubelet[1419]: W0711 00:40:40.403608 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:40:40.404525 kubelet[1419]: I0711 00:40:40.404503 1419 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 00:40:40.404567 kubelet[1419]: I0711 00:40:40.404537 1419 server.go:1287] "Started kubelet" Jul 11 00:40:40.419278 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
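The container-manager dump above lists the hard eviction thresholds in effect (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A much-simplified sketch of how such signal/threshold pairs can be evaluated; this is an illustration, not the kubelet's eviction manager:

#!/usr/bin/env python3
# Sketch: evaluate hard eviction thresholds like the ones dumped above.
# Percentages apply to capacity; "100Mi" is an absolute quantity. Illustration
# only -- the real logic lives in the kubelet's eviction manager.
THRESHOLDS = {
    "memory.available": 100 * 1024 * 1024,  # 100Mi in bytes
    "nodefs.available": 0.10,               # fraction of capacity
    "nodefs.inodesFree": 0.05,
    "imagefs.available": 0.15,
    "imagefs.inodesFree": 0.05,
}

def breached(signal: str, available: float, capacity: float) -> bool:
    threshold = THRESHOLDS[signal]
    limit = threshold if threshold > 1 else threshold * capacity
    return available < limit

# Example: an 8 GiB node with 300 MiB of free memory is fine; 64 MiB is not.
cap = 8 * 2**30
print(breached("memory.available", 300 * 2**20, cap))  # False
print(breached("memory.available", 64 * 2**20, cap))   # True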
Jul 11 00:40:40.419415 kubelet[1419]: I0711 00:40:40.419396 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:40:40.425314 kubelet[1419]: I0711 00:40:40.425282 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:40:40.426363 kubelet[1419]: I0711 00:40:40.426336 1419 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 00:40:40.426604 kubelet[1419]: E0711 00:40:40.426583 1419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jul 11 00:40:40.426944 kubelet[1419]: I0711 00:40:40.426912 1419 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:40:40.431114 kubelet[1419]: I0711 00:40:40.431089 1419 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 00:40:40.432571 kubelet[1419]: I0711 00:40:40.431135 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:40:40.432631 kubelet[1419]: I0711 00:40:40.432569 1419 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:40:40.432687 kubelet[1419]: I0711 00:40:40.431300 1419 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:40:40.433719 kubelet[1419]: E0711 00:40:40.433607 1419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.127\" not found" node="10.0.0.127" Jul 11 00:40:40.434170 kubelet[1419]: I0711 00:40:40.433892 1419 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:40:40.434170 kubelet[1419]: I0711 00:40:40.433993 1419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:40:40.434785 kubelet[1419]: E0711 00:40:40.434759 1419 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:40:40.435457 kubelet[1419]: I0711 00:40:40.435247 1419 server.go:479] "Adding debug handlers to kubelet server" Jul 11 00:40:40.438316 kubelet[1419]: I0711 00:40:40.438292 1419 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:40:40.446864 kubelet[1419]: I0711 00:40:40.446834 1419 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 00:40:40.446864 kubelet[1419]: I0711 00:40:40.446857 1419 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 00:40:40.446968 kubelet[1419]: I0711 00:40:40.446876 1419 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:40:40.517746 kubelet[1419]: I0711 00:40:40.517709 1419 policy_none.go:49] "None policy: Start" Jul 11 00:40:40.517746 kubelet[1419]: I0711 00:40:40.517736 1419 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 00:40:40.517746 kubelet[1419]: I0711 00:40:40.517756 1419 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:40:40.522999 systemd[1]: Created slice kubepods.slice. Jul 11 00:40:40.527139 kubelet[1419]: E0711 00:40:40.527097 1419 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.127\" not found" Jul 11 00:40:40.527444 systemd[1]: Created slice kubepods-burstable.slice. 
Jul 11 00:40:40.529793 systemd[1]: Created slice kubepods-besteffort.slice. Jul 11 00:40:40.539457 kubelet[1419]: I0711 00:40:40.539439 1419 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:40:40.539668 kubelet[1419]: I0711 00:40:40.539650 1419 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:40:40.539776 kubelet[1419]: I0711 00:40:40.539727 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:40:40.540389 kubelet[1419]: I0711 00:40:40.540369 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:40:40.541459 kubelet[1419]: E0711 00:40:40.541423 1419 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 11 00:40:40.541459 kubelet[1419]: E0711 00:40:40.541458 1419 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.127\" not found" Jul 11 00:40:40.552872 kubelet[1419]: I0711 00:40:40.552846 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:40:40.553659 kubelet[1419]: I0711 00:40:40.553642 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 00:40:40.553720 kubelet[1419]: I0711 00:40:40.553669 1419 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 00:40:40.553720 kubelet[1419]: I0711 00:40:40.553687 1419 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 11 00:40:40.553720 kubelet[1419]: I0711 00:40:40.553693 1419 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 00:40:40.553796 kubelet[1419]: E0711 00:40:40.553733 1419 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 11 00:40:40.640899 kubelet[1419]: I0711 00:40:40.640856 1419 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.127" Jul 11 00:40:40.644543 kubelet[1419]: I0711 00:40:40.644510 1419 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.127" Jul 11 00:40:40.652059 kubelet[1419]: I0711 00:40:40.652036 1419 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 11 00:40:40.652444 env[1215]: time="2025-07-11T00:40:40.652343668Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:40:40.652695 kubelet[1419]: I0711 00:40:40.652558 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 11 00:40:40.805072 sudo[1315]: pam_unix(sudo:session): session closed for user root Jul 11 00:40:40.807032 sshd[1310]: pam_unix(sshd:session): session closed for user core Jul 11 00:40:40.809535 systemd[1]: sshd@4-10.0.0.127:22-10.0.0.1:40360.service: Deactivated successfully. Jul 11 00:40:40.810224 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:40:40.810700 systemd-logind[1204]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:40:40.811388 systemd-logind[1204]: Removed session 5. 
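After the node registers, the kubelet pushes the pod CIDR 192.168.1.0/24 to the runtime ("Updating Pod CIDR ... newPodCIDR=192.168.1.0/24" above). A tiny sketch of what that range provides:

#!/usr/bin/env python3
# Sketch: inspect the pod CIDR that the kubelet reports pushing to the
# runtime above (192.168.1.0/24).
import ipaddress

cidr = ipaddress.ip_network("192.168.1.0/24")
print(cidr.num_addresses)    # 256 addresses in the range
print(cidr.network_address)  # 192.168.1.0
print(cidr.broadcast_address)  # 192.168.1.255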
Jul 11 00:40:41.319124 kubelet[1419]: I0711 00:40:41.319091 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 11 00:40:41.319386 kubelet[1419]: W0711 00:40:41.319237 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 11 00:40:41.319386 kubelet[1419]: W0711 00:40:41.319269 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 11 00:40:41.319386 kubelet[1419]: W0711 00:40:41.319292 1419 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 11 00:40:41.397726 kubelet[1419]: I0711 00:40:41.397681 1419 apiserver.go:52] "Watching apiserver" Jul 11 00:40:41.397811 kubelet[1419]: E0711 00:40:41.397694 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:41.408221 systemd[1]: Created slice kubepods-burstable-podf1c80a0b_b4d3_4bc0_9af5_cba8e8a6be00.slice. Jul 11 00:40:41.427269 systemd[1]: Created slice kubepods-besteffort-pod9403e514_23c7_4504_83b8_606c258daeac.slice. Jul 11 00:40:41.434053 kubelet[1419]: I0711 00:40:41.434024 1419 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 00:40:41.437549 kubelet[1419]: I0711 00:40:41.437514 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-bpf-maps\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437549 kubelet[1419]: I0711 00:40:41.437548 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hostproc\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437566 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-cgroup\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437581 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-clustermesh-secrets\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437597 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-config-path\") pod \"cilium-b68t5\" (UID: 
\"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437611 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-run\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437625 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-etc-cni-netd\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437657 kubelet[1419]: I0711 00:40:41.437639 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-xtables-lock\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437824 kubelet[1419]: I0711 00:40:41.437653 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-net\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437824 kubelet[1419]: I0711 00:40:41.437667 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzvnz\" (UniqueName: \"kubernetes.io/projected/9403e514-23c7-4504-83b8-606c258daeac-kube-api-access-lzvnz\") pod \"kube-proxy-qjkgl\" (UID: \"9403e514-23c7-4504-83b8-606c258daeac\") " pod="kube-system/kube-proxy-qjkgl" Jul 11 00:40:41.437824 kubelet[1419]: I0711 00:40:41.437684 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cni-path\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437824 kubelet[1419]: I0711 00:40:41.437699 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4s9j\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-kube-api-access-l4s9j\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437824 kubelet[1419]: I0711 00:40:41.437714 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9403e514-23c7-4504-83b8-606c258daeac-xtables-lock\") pod \"kube-proxy-qjkgl\" (UID: \"9403e514-23c7-4504-83b8-606c258daeac\") " pod="kube-system/kube-proxy-qjkgl" Jul 11 00:40:41.437943 kubelet[1419]: I0711 00:40:41.437727 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9403e514-23c7-4504-83b8-606c258daeac-lib-modules\") pod \"kube-proxy-qjkgl\" (UID: \"9403e514-23c7-4504-83b8-606c258daeac\") " pod="kube-system/kube-proxy-qjkgl" Jul 11 00:40:41.437943 kubelet[1419]: I0711 00:40:41.437749 1419 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-lib-modules\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437943 kubelet[1419]: I0711 00:40:41.437766 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-kernel\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437943 kubelet[1419]: I0711 00:40:41.437780 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hubble-tls\") pod \"cilium-b68t5\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " pod="kube-system/cilium-b68t5" Jul 11 00:40:41.437943 kubelet[1419]: I0711 00:40:41.437796 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9403e514-23c7-4504-83b8-606c258daeac-kube-proxy\") pod \"kube-proxy-qjkgl\" (UID: \"9403e514-23c7-4504-83b8-606c258daeac\") " pod="kube-system/kube-proxy-qjkgl" Jul 11 00:40:41.539204 kubelet[1419]: I0711 00:40:41.539169 1419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 11 00:40:41.725749 kubelet[1419]: E0711 00:40:41.725707 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:41.726389 env[1215]: time="2025-07-11T00:40:41.726334948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68t5,Uid:f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00,Namespace:kube-system,Attempt:0,}" Jul 11 00:40:41.740994 kubelet[1419]: E0711 00:40:41.740966 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:41.741360 env[1215]: time="2025-07-11T00:40:41.741324868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjkgl,Uid:9403e514-23c7-4504-83b8-606c258daeac,Namespace:kube-system,Attempt:0,}" Jul 11 00:40:42.280360 env[1215]: time="2025-07-11T00:40:42.280318068Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.281580 env[1215]: time="2025-07-11T00:40:42.281547628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.282509 env[1215]: time="2025-07-11T00:40:42.282486548Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.284423 env[1215]: time="2025-07-11T00:40:42.284398308Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.285135 env[1215]: time="2025-07-11T00:40:42.285116908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.286971 env[1215]: time="2025-07-11T00:40:42.286938588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.288672 env[1215]: time="2025-07-11T00:40:42.288639308Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.292370 env[1215]: time="2025-07-11T00:40:42.292340508Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:42.316718 env[1215]: time="2025-07-11T00:40:42.316617308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:40:42.316823 env[1215]: time="2025-07-11T00:40:42.316718308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:40:42.316823 env[1215]: time="2025-07-11T00:40:42.316756068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:40:42.317036 env[1215]: time="2025-07-11T00:40:42.316999548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87dd1b968b1907bdcd1866ffe1ae7d3e94aa9c4375355939d003389a3e495cbc pid=1485 runtime=io.containerd.runc.v2 Jul 11 00:40:42.317400 env[1215]: time="2025-07-11T00:40:42.317343508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:40:42.317400 env[1215]: time="2025-07-11T00:40:42.317377268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:40:42.317400 env[1215]: time="2025-07-11T00:40:42.317387228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:40:42.317604 env[1215]: time="2025-07-11T00:40:42.317567828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2 pid=1484 runtime=io.containerd.runc.v2 Jul 11 00:40:42.339334 systemd[1]: Started cri-containerd-87dd1b968b1907bdcd1866ffe1ae7d3e94aa9c4375355939d003389a3e495cbc.scope. Jul 11 00:40:42.345300 systemd[1]: Started cri-containerd-fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2.scope. 
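[Editor's note, not part of the captured log] The kubelet entries above all carry klog's single-line header: a severity letter (I/W/E), MMDD, wall-clock time with microseconds, the process ID, and the emitting source file:line, followed by the structured message. A minimal Go sketch that splits that header out of one of the kubelet lines quoted above, assuming the standard klog text format:

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the prefix klog writes before each message:
// severity (I/W/E/F), MMDD, HH:MM:SS.micros, PID, source file:line, "]".
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	// Taken from the kubelet output above.
	line := `E0711 00:40:41.397694 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"`

	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog-formatted line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmessage=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```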
Jul 11 00:40:42.376715 env[1215]: time="2025-07-11T00:40:42.376665508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjkgl,Uid:9403e514-23c7-4504-83b8-606c258daeac,Namespace:kube-system,Attempt:0,} returns sandbox id \"87dd1b968b1907bdcd1866ffe1ae7d3e94aa9c4375355939d003389a3e495cbc\"" Jul 11 00:40:42.377535 kubelet[1419]: E0711 00:40:42.377502 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:42.378855 env[1215]: time="2025-07-11T00:40:42.378809308Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 00:40:42.383570 env[1215]: time="2025-07-11T00:40:42.383541908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b68t5,Uid:f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\"" Jul 11 00:40:42.384276 kubelet[1419]: E0711 00:40:42.384246 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:42.398688 kubelet[1419]: E0711 00:40:42.398660 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:42.545184 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883579713.mount: Deactivated successfully. Jul 11 00:40:43.392516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount226581742.mount: Deactivated successfully. Jul 11 00:40:43.398844 kubelet[1419]: E0711 00:40:43.398775 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:43.852346 env[1215]: time="2025-07-11T00:40:43.852053988Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:43.853361 env[1215]: time="2025-07-11T00:40:43.853332028Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:43.854654 env[1215]: time="2025-07-11T00:40:43.854619828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:43.855872 env[1215]: time="2025-07-11T00:40:43.855848468Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:43.856208 env[1215]: time="2025-07-11T00:40:43.856184588Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 11 00:40:43.857671 env[1215]: time="2025-07-11T00:40:43.857526588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 11 00:40:43.858792 env[1215]: time="2025-07-11T00:40:43.858758708Z" level=info msg="CreateContainer within sandbox \"87dd1b968b1907bdcd1866ffe1ae7d3e94aa9c4375355939d003389a3e495cbc\" for 
container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:40:43.871715 env[1215]: time="2025-07-11T00:40:43.871666468Z" level=info msg="CreateContainer within sandbox \"87dd1b968b1907bdcd1866ffe1ae7d3e94aa9c4375355939d003389a3e495cbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ff8b39cd26bfcc7fa55bfe8168b78667a623d661f25a062dfeeabb9fc2d0f593\"" Jul 11 00:40:43.872228 env[1215]: time="2025-07-11T00:40:43.872197308Z" level=info msg="StartContainer for \"ff8b39cd26bfcc7fa55bfe8168b78667a623d661f25a062dfeeabb9fc2d0f593\"" Jul 11 00:40:43.891954 systemd[1]: Started cri-containerd-ff8b39cd26bfcc7fa55bfe8168b78667a623d661f25a062dfeeabb9fc2d0f593.scope. Jul 11 00:40:43.927142 env[1215]: time="2025-07-11T00:40:43.927106068Z" level=info msg="StartContainer for \"ff8b39cd26bfcc7fa55bfe8168b78667a623d661f25a062dfeeabb9fc2d0f593\" returns successfully" Jul 11 00:40:44.399121 kubelet[1419]: E0711 00:40:44.399064 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:44.560747 kubelet[1419]: E0711 00:40:44.560704 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:45.399800 kubelet[1419]: E0711 00:40:45.399758 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:45.561797 kubelet[1419]: E0711 00:40:45.561760 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:46.400376 kubelet[1419]: E0711 00:40:46.400330 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:47.285758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004812292.mount: Deactivated successfully. 
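[Editor's note, not part of the captured log] The recurring dns.go:153 warning above reports that the node's resolver configuration lists more nameservers than the kubelet will pass through, and that only `1.1.1.1 1.0.0.1 8.8.8.8` were applied. A small Go sketch of the same kind of check; the resolv.conf path and the limit of three nameservers are assumptions about what this warning refers to, not something the log states:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // assumed limit; the warning above keeps three entries

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
		return
	}
	fmt.Printf("nameservers within limit: %v\n", servers)
}
```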
Jul 11 00:40:47.400906 kubelet[1419]: E0711 00:40:47.400875 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:48.401286 kubelet[1419]: E0711 00:40:48.401231 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:49.402103 kubelet[1419]: E0711 00:40:49.402076 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:49.477899 env[1215]: time="2025-07-11T00:40:49.477856028Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:49.479089 env[1215]: time="2025-07-11T00:40:49.479064908Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:49.481323 env[1215]: time="2025-07-11T00:40:49.481294348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:40:49.481691 env[1215]: time="2025-07-11T00:40:49.481664668Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 11 00:40:49.484331 env[1215]: time="2025-07-11T00:40:49.484304828Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 11 00:40:49.492620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996483505.mount: Deactivated successfully. Jul 11 00:40:49.496246 env[1215]: time="2025-07-11T00:40:49.496209348Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\"" Jul 11 00:40:49.496750 env[1215]: time="2025-07-11T00:40:49.496713308Z" level=info msg="StartContainer for \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\"" Jul 11 00:40:49.512531 systemd[1]: Started cri-containerd-f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e.scope. 
Jul 11 00:40:49.547965 env[1215]: time="2025-07-11T00:40:49.547910108Z" level=info msg="StartContainer for \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\" returns successfully" Jul 11 00:40:49.568365 kubelet[1419]: E0711 00:40:49.568329 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:49.586820 kubelet[1419]: I0711 00:40:49.586763 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjkgl" podStartSLOduration=8.107876668 podStartE2EDuration="9.586745588s" podCreationTimestamp="2025-07-11 00:40:40 +0000 UTC" firstStartedPulling="2025-07-11 00:40:42.378421828 +0000 UTC m=+3.102005321" lastFinishedPulling="2025-07-11 00:40:43.857290748 +0000 UTC m=+4.580874241" observedRunningTime="2025-07-11 00:40:44.570449708 +0000 UTC m=+5.294033201" watchObservedRunningTime="2025-07-11 00:40:49.586745588 +0000 UTC m=+10.310329041" Jul 11 00:40:49.612059 systemd[1]: cri-containerd-f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e.scope: Deactivated successfully. Jul 11 00:40:49.850448 env[1215]: time="2025-07-11T00:40:49.849891028Z" level=info msg="shim disconnected" id=f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e Jul 11 00:40:49.850752 env[1215]: time="2025-07-11T00:40:49.850729748Z" level=warning msg="cleaning up after shim disconnected" id=f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e namespace=k8s.io Jul 11 00:40:49.850885 env[1215]: time="2025-07-11T00:40:49.850869988Z" level=info msg="cleaning up dead shim" Jul 11 00:40:49.857368 env[1215]: time="2025-07-11T00:40:49.857334468Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:40:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1769 runtime=io.containerd.runc.v2\n" Jul 11 00:40:50.402964 kubelet[1419]: E0711 00:40:50.402928 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:50.491032 systemd[1]: run-containerd-runc-k8s.io-f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e-runc.ZPOwuK.mount: Deactivated successfully. Jul 11 00:40:50.491117 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e-rootfs.mount: Deactivated successfully. Jul 11 00:40:50.571069 kubelet[1419]: E0711 00:40:50.571033 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:50.572512 env[1215]: time="2025-07-11T00:40:50.572479548Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 11 00:40:50.588794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938749745.mount: Deactivated successfully. 
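[Editor's note, not part of the captured log] The pod_startup_latency_tracker entry above for kube-proxy-qjkgl carries enough timestamps to reproduce its two durations: the numbers are consistent with podStartE2EDuration being watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration being that value minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short Go check of that arithmetic using the timestamps quoted above; the formula is inferred from the values, not taken from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parse drops the monotonic-clock suffix (" m=+...") that the kubelet appends
// and parses the remaining wall-clock timestamp.
func parse(s string) time.Time {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-07-11 00:40:40 +0000 UTC")
	firstPull := parse("2025-07-11 00:40:42.378421828 +0000 UTC m=+3.102005321")
	lastPull := parse("2025-07-11 00:40:43.857290748 +0000 UTC m=+4.580874241")
	observed := parse("2025-07-11 00:40:49.586745588 +0000 UTC m=+10.310329041")

	e2e := observed.Sub(created)         // 9.586745588s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 8.107876668s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
```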
Jul 11 00:40:50.596970 env[1215]: time="2025-07-11T00:40:50.596923668Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\"" Jul 11 00:40:50.597375 env[1215]: time="2025-07-11T00:40:50.597336028Z" level=info msg="StartContainer for \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\"" Jul 11 00:40:50.610630 systemd[1]: Started cri-containerd-f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3.scope. Jul 11 00:40:50.643009 env[1215]: time="2025-07-11T00:40:50.642958108Z" level=info msg="StartContainer for \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\" returns successfully" Jul 11 00:40:50.657583 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:40:50.658017 systemd[1]: Stopped systemd-sysctl.service. Jul 11 00:40:50.658748 systemd[1]: Stopping systemd-sysctl.service... Jul 11 00:40:50.660347 systemd[1]: Starting systemd-sysctl.service... Jul 11 00:40:50.662574 systemd[1]: cri-containerd-f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3.scope: Deactivated successfully. Jul 11 00:40:50.666817 systemd[1]: Finished systemd-sysctl.service. Jul 11 00:40:50.681056 env[1215]: time="2025-07-11T00:40:50.681016308Z" level=info msg="shim disconnected" id=f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3 Jul 11 00:40:50.681273 env[1215]: time="2025-07-11T00:40:50.681253548Z" level=warning msg="cleaning up after shim disconnected" id=f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3 namespace=k8s.io Jul 11 00:40:50.681335 env[1215]: time="2025-07-11T00:40:50.681322388Z" level=info msg="cleaning up dead shim" Jul 11 00:40:50.687060 env[1215]: time="2025-07-11T00:40:50.687030268Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:40:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1834 runtime=io.containerd.runc.v2\n" Jul 11 00:40:51.403115 kubelet[1419]: E0711 00:40:51.403063 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:51.490847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3-rootfs.mount: Deactivated successfully. 
Jul 11 00:40:51.573399 kubelet[1419]: E0711 00:40:51.573371 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:51.575007 env[1215]: time="2025-07-11T00:40:51.574969588Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 11 00:40:51.587751 env[1215]: time="2025-07-11T00:40:51.587682348Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\"" Jul 11 00:40:51.588314 env[1215]: time="2025-07-11T00:40:51.588287308Z" level=info msg="StartContainer for \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\"" Jul 11 00:40:51.607132 systemd[1]: Started cri-containerd-68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1.scope. Jul 11 00:40:51.642033 env[1215]: time="2025-07-11T00:40:51.641985068Z" level=info msg="StartContainer for \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\" returns successfully" Jul 11 00:40:51.654203 systemd[1]: cri-containerd-68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1.scope: Deactivated successfully. Jul 11 00:40:51.673022 env[1215]: time="2025-07-11T00:40:51.672979788Z" level=info msg="shim disconnected" id=68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1 Jul 11 00:40:51.673204 env[1215]: time="2025-07-11T00:40:51.673186188Z" level=warning msg="cleaning up after shim disconnected" id=68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1 namespace=k8s.io Jul 11 00:40:51.673279 env[1215]: time="2025-07-11T00:40:51.673265748Z" level=info msg="cleaning up dead shim" Jul 11 00:40:51.679789 env[1215]: time="2025-07-11T00:40:51.679757268Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:40:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1893 runtime=io.containerd.runc.v2\n" Jul 11 00:40:52.403583 kubelet[1419]: E0711 00:40:52.403534 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:52.490955 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1-rootfs.mount: Deactivated successfully. Jul 11 00:40:52.576573 kubelet[1419]: E0711 00:40:52.576539 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:52.578557 env[1215]: time="2025-07-11T00:40:52.578514828Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 11 00:40:52.593200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352001467.mount: Deactivated successfully. 
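[Editor's note, not part of the captured log] The init container named mount-bpf-fs above is the step of Cilium's startup that conventionally ensures a BPF filesystem is mounted at /sys/fs/bpf; that mount point and filesystem type are assumptions about what the container does, since the log only shows the container being created and started. A minimal Linux-only Go sketch of such a mount:

```go
//go:build linux

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	const target = "/sys/fs/bpf" // assumed mount point

	if err := os.MkdirAll(target, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to: mount -t bpf bpffs /sys/fs/bpf (requires CAP_SYS_ADMIN).
	if err := syscall.Mount("bpffs", target, "bpf", 0, ""); err != nil {
		fmt.Fprintln(os.Stderr, "mount bpffs:", err)
		os.Exit(1)
	}
	fmt.Println("bpffs mounted at", target)
}
```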
Jul 11 00:40:52.596549 env[1215]: time="2025-07-11T00:40:52.596510468Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\"" Jul 11 00:40:52.597177 env[1215]: time="2025-07-11T00:40:52.597150508Z" level=info msg="StartContainer for \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\"" Jul 11 00:40:52.617915 systemd[1]: Started cri-containerd-c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203.scope. Jul 11 00:40:52.644449 systemd[1]: cri-containerd-c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203.scope: Deactivated successfully. Jul 11 00:40:52.645639 env[1215]: time="2025-07-11T00:40:52.645547028Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c80a0b_b4d3_4bc0_9af5_cba8e8a6be00.slice/cri-containerd-c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203.scope/memory.events\": no such file or directory" Jul 11 00:40:52.647047 env[1215]: time="2025-07-11T00:40:52.647015988Z" level=info msg="StartContainer for \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\" returns successfully" Jul 11 00:40:52.664890 env[1215]: time="2025-07-11T00:40:52.664758268Z" level=info msg="shim disconnected" id=c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203 Jul 11 00:40:52.664890 env[1215]: time="2025-07-11T00:40:52.664794068Z" level=warning msg="cleaning up after shim disconnected" id=c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203 namespace=k8s.io Jul 11 00:40:52.664890 env[1215]: time="2025-07-11T00:40:52.664802868Z" level=info msg="cleaning up dead shim" Jul 11 00:40:52.671689 env[1215]: time="2025-07-11T00:40:52.671653388Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:40:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1948 runtime=io.containerd.runc.v2\n" Jul 11 00:40:53.403688 kubelet[1419]: E0711 00:40:53.403655 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:53.490911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203-rootfs.mount: Deactivated successfully. 
Jul 11 00:40:53.579916 kubelet[1419]: E0711 00:40:53.579885 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:53.581992 env[1215]: time="2025-07-11T00:40:53.581954388Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 11 00:40:53.601518 env[1215]: time="2025-07-11T00:40:53.601477228Z" level=info msg="CreateContainer within sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\"" Jul 11 00:40:53.602054 env[1215]: time="2025-07-11T00:40:53.602022708Z" level=info msg="StartContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\"" Jul 11 00:40:53.620916 systemd[1]: Started cri-containerd-878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d.scope. Jul 11 00:40:53.655069 env[1215]: time="2025-07-11T00:40:53.654418188Z" level=info msg="StartContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" returns successfully" Jul 11 00:40:53.761994 kubelet[1419]: I0711 00:40:53.761870 1419 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 00:40:53.953890 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 11 00:40:54.202890 kernel: Initializing XFRM netlink socket Jul 11 00:40:54.205861 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 11 00:40:54.404027 kubelet[1419]: E0711 00:40:54.403938 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:54.582914 kubelet[1419]: E0711 00:40:54.582792 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:54.600724 kubelet[1419]: I0711 00:40:54.600633 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b68t5" podStartSLOduration=7.502317428 podStartE2EDuration="14.600615108s" podCreationTimestamp="2025-07-11 00:40:40 +0000 UTC" firstStartedPulling="2025-07-11 00:40:42.384887668 +0000 UTC m=+3.108471161" lastFinishedPulling="2025-07-11 00:40:49.483185348 +0000 UTC m=+10.206768841" observedRunningTime="2025-07-11 00:40:54.599723468 +0000 UTC m=+15.323306961" watchObservedRunningTime="2025-07-11 00:40:54.600615108 +0000 UTC m=+15.324198601" Jul 11 00:40:55.405061 kubelet[1419]: E0711 00:40:55.405017 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:55.584089 kubelet[1419]: E0711 00:40:55.584042 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:55.812719 systemd-networkd[1043]: cilium_host: Link UP Jul 11 00:40:55.813714 systemd-networkd[1043]: cilium_net: Link UP Jul 11 00:40:55.815131 systemd-networkd[1043]: cilium_net: Gained carrier Jul 11 00:40:55.815903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 11 00:40:55.815973 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 11 00:40:55.816154 systemd-networkd[1043]: cilium_host: Gained carrier Jul 11 00:40:55.816292 systemd-networkd[1043]: cilium_host: Gained IPv6LL Jul 11 00:40:55.890456 systemd-networkd[1043]: cilium_vxlan: Link UP Jul 11 00:40:55.890462 systemd-networkd[1043]: cilium_vxlan: Gained carrier Jul 11 00:40:55.947949 systemd-networkd[1043]: cilium_net: Gained IPv6LL Jul 11 00:40:56.176889 kernel: NET: Registered PF_ALG protocol family Jul 11 00:40:56.405472 kubelet[1419]: E0711 00:40:56.405431 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:56.585363 kubelet[1419]: E0711 00:40:56.585262 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:56.705323 systemd[1]: Created slice kubepods-besteffort-pod70a8bebc_f3c6_4405_af13_9520b0a5af15.slice. Jul 11 00:40:56.723074 kubelet[1419]: I0711 00:40:56.723023 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmfd\" (UniqueName: \"kubernetes.io/projected/70a8bebc-f3c6-4405-af13-9520b0a5af15-kube-api-access-nvmfd\") pod \"nginx-deployment-7fcdb87857-rwxpj\" (UID: \"70a8bebc-f3c6-4405-af13-9520b0a5af15\") " pod="default/nginx-deployment-7fcdb87857-rwxpj" Jul 11 00:40:56.760213 systemd-networkd[1043]: lxc_health: Link UP Jul 11 00:40:56.768866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 11 00:40:56.771326 systemd-networkd[1043]: lxc_health: Gained carrier Jul 11 00:40:57.007976 env[1215]: time="2025-07-11T00:40:57.007869348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rwxpj,Uid:70a8bebc-f3c6-4405-af13-9520b0a5af15,Namespace:default,Attempt:0,}" Jul 11 00:40:57.044024 systemd-networkd[1043]: lxc6d93ff102ed2: Link UP Jul 11 00:40:57.055886 kernel: eth0: renamed from tmp745f6 Jul 11 00:40:57.063446 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:40:57.063522 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6d93ff102ed2: link becomes ready Jul 11 00:40:57.063616 systemd-networkd[1043]: lxc6d93ff102ed2: Gained carrier Jul 11 00:40:57.406351 kubelet[1419]: E0711 00:40:57.406314 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:57.587236 kubelet[1419]: E0711 00:40:57.587185 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:57.635959 systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL Jul 11 00:40:58.084027 systemd-networkd[1043]: lxc_health: Gained IPv6LL Jul 11 00:40:58.404031 systemd-networkd[1043]: lxc6d93ff102ed2: Gained IPv6LL Jul 11 00:40:58.406766 kubelet[1419]: E0711 00:40:58.406713 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:59.407146 kubelet[1419]: E0711 00:40:59.407100 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:40:59.480666 kubelet[1419]: I0711 00:40:59.480631 1419 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:40:59.481064 kubelet[1419]: E0711 00:40:59.481035 1419 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:40:59.589314 kubelet[1419]: E0711 00:40:59.589271 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:41:00.398102 kubelet[1419]: E0711 00:41:00.398062 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:00.407562 kubelet[1419]: E0711 00:41:00.407528 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:00.489275 env[1215]: time="2025-07-11T00:41:00.488581708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:41:00.489275 env[1215]: time="2025-07-11T00:41:00.488628428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:41:00.489275 env[1215]: time="2025-07-11T00:41:00.488638148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:41:00.489862 env[1215]: time="2025-07-11T00:41:00.489786668Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/745f64167353fa7d5850eb7c3e7eb724219145b6719aa98a7e30418c8385fc3e pid=2494 runtime=io.containerd.runc.v2 Jul 11 00:41:00.504286 systemd[1]: Started cri-containerd-745f64167353fa7d5850eb7c3e7eb724219145b6719aa98a7e30418c8385fc3e.scope. Jul 11 00:41:00.576052 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:41:00.599379 env[1215]: time="2025-07-11T00:41:00.599334228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rwxpj,Uid:70a8bebc-f3c6-4405-af13-9520b0a5af15,Namespace:default,Attempt:0,} returns sandbox id \"745f64167353fa7d5850eb7c3e7eb724219145b6719aa98a7e30418c8385fc3e\"" Jul 11 00:41:00.602827 env[1215]: time="2025-07-11T00:41:00.602786988Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 11 00:41:01.408372 kubelet[1419]: E0711 00:41:01.408327 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:02.409266 kubelet[1419]: E0711 00:41:02.409226 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:03.409528 kubelet[1419]: E0711 00:41:03.409486 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:04.191108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625750375.mount: Deactivated successfully. 
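[Editor's note, not part of the captured log] The systemd-networkd messages above show the Cilium datapath devices (cilium_host, cilium_net, cilium_vxlan, lxc_health) and the per-pod lxc* interface gaining carrier and IPv6LL. A small Go sketch that lists those interfaces and whether they are up, roughly the check one would run on the node to confirm the links:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		// Only report the devices this log is about: cilium_* and per-pod lxc*.
		if !strings.HasPrefix(ifc.Name, "cilium_") && !strings.HasPrefix(ifc.Name, "lxc") {
			continue
		}
		state := "down"
		if ifc.Flags&net.FlagUp != 0 {
			state = "up"
		}
		fmt.Printf("%-20s %s\n", ifc.Name, state)
	}
}
```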
Jul 11 00:41:04.409887 kubelet[1419]: E0711 00:41:04.409828 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:05.410800 kubelet[1419]: E0711 00:41:05.410735 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:05.413755 env[1215]: time="2025-07-11T00:41:05.413706862Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:05.414902 env[1215]: time="2025-07-11T00:41:05.414875571Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:05.416552 env[1215]: time="2025-07-11T00:41:05.416523234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:05.418184 env[1215]: time="2025-07-11T00:41:05.418156258Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:05.418986 env[1215]: time="2025-07-11T00:41:05.418959489Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 11 00:41:05.421472 env[1215]: time="2025-07-11T00:41:05.421437704Z" level=info msg="CreateContainer within sandbox \"745f64167353fa7d5850eb7c3e7eb724219145b6719aa98a7e30418c8385fc3e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 11 00:41:05.430467 env[1215]: time="2025-07-11T00:41:05.430411574Z" level=info msg="CreateContainer within sandbox \"745f64167353fa7d5850eb7c3e7eb724219145b6719aa98a7e30418c8385fc3e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7e293a79edffc8c824ae356c8f4e92a65696c497348494ee40d0072ede71c22a\"" Jul 11 00:41:05.430980 env[1215]: time="2025-07-11T00:41:05.430950208Z" level=info msg="StartContainer for \"7e293a79edffc8c824ae356c8f4e92a65696c497348494ee40d0072ede71c22a\"" Jul 11 00:41:05.448037 systemd[1]: Started cri-containerd-7e293a79edffc8c824ae356c8f4e92a65696c497348494ee40d0072ede71c22a.scope. 
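[Editor's note, not part of the captured log] The image events above spell the same nginx image three ways: a tag reference (ghcr.io/flatcar/nginx:latest), a repo digest reference (ghcr.io/flatcar/nginx@sha256:30bb...), and the config-digest image ID (sha256:cd8b...) that PullImage returns. A short Go sketch that separates name, tag and digest in such references; it is a simplified parser, not the reference grammar containerd uses:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates an image reference into name, tag and digest, covering
// the two spellings that appear in the image events above.
func splitRef(ref string) (name, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	slash := strings.LastIndex(ref, "/")
	if colon := strings.LastIndex(ref, ":"); colon > slash {
		ref, tag = ref[:colon], ref[colon+1:]
	}
	return ref, tag, digest
}

func main() {
	for _, ref := range []string{
		"ghcr.io/flatcar/nginx:latest",
		"ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd",
	} {
		name, tag, digest := splitRef(ref)
		fmt.Printf("name=%s tag=%q digest=%q\n", name, tag, digest)
	}
}
```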
Jul 11 00:41:05.481380 env[1215]: time="2025-07-11T00:41:05.481323140Z" level=info msg="StartContainer for \"7e293a79edffc8c824ae356c8f4e92a65696c497348494ee40d0072ede71c22a\" returns successfully" Jul 11 00:41:05.608643 kubelet[1419]: I0711 00:41:05.608582 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-rwxpj" podStartSLOduration=4.7907072490000004 podStartE2EDuration="9.608568617s" podCreationTimestamp="2025-07-11 00:40:56 +0000 UTC" firstStartedPulling="2025-07-11 00:41:00.602384628 +0000 UTC m=+21.325968081" lastFinishedPulling="2025-07-11 00:41:05.420245956 +0000 UTC m=+26.143829449" observedRunningTime="2025-07-11 00:41:05.608163101 +0000 UTC m=+26.331746594" watchObservedRunningTime="2025-07-11 00:41:05.608568617 +0000 UTC m=+26.332152070" Jul 11 00:41:06.411660 kubelet[1419]: E0711 00:41:06.411610 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:07.412553 kubelet[1419]: E0711 00:41:07.412513 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:08.352207 systemd[1]: Created slice kubepods-besteffort-pod95783ff0_3a81_4e99_853a_c82719a54694.slice. Jul 11 00:41:08.388700 kubelet[1419]: I0711 00:41:08.388609 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/95783ff0-3a81-4e99-853a-c82719a54694-data\") pod \"nfs-server-provisioner-0\" (UID: \"95783ff0-3a81-4e99-853a-c82719a54694\") " pod="default/nfs-server-provisioner-0" Jul 11 00:41:08.388700 kubelet[1419]: I0711 00:41:08.388659 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlsct\" (UniqueName: \"kubernetes.io/projected/95783ff0-3a81-4e99-853a-c82719a54694-kube-api-access-xlsct\") pod \"nfs-server-provisioner-0\" (UID: \"95783ff0-3a81-4e99-853a-c82719a54694\") " pod="default/nfs-server-provisioner-0" Jul 11 00:41:08.412994 kubelet[1419]: E0711 00:41:08.412963 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:08.655318 env[1215]: time="2025-07-11T00:41:08.655260802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:95783ff0-3a81-4e99-853a-c82719a54694,Namespace:default,Attempt:0,}" Jul 11 00:41:08.682325 systemd-networkd[1043]: lxc3ded75330abc: Link UP Jul 11 00:41:08.689874 kernel: eth0: renamed from tmp5bb30 Jul 11 00:41:08.698123 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:41:08.698201 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc3ded75330abc: link becomes ready Jul 11 00:41:08.698523 systemd-networkd[1043]: lxc3ded75330abc: Gained carrier Jul 11 00:41:08.832323 env[1215]: time="2025-07-11T00:41:08.832256091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:41:08.832489 env[1215]: time="2025-07-11T00:41:08.832294171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:41:08.832489 env[1215]: time="2025-07-11T00:41:08.832304251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:41:08.832587 env[1215]: time="2025-07-11T00:41:08.832507369Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91 pid=2624 runtime=io.containerd.runc.v2 Jul 11 00:41:08.845319 systemd[1]: Started cri-containerd-5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91.scope. Jul 11 00:41:08.868080 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:41:08.882892 env[1215]: time="2025-07-11T00:41:08.882851631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:95783ff0-3a81-4e99-853a-c82719a54694,Namespace:default,Attempt:0,} returns sandbox id \"5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91\"" Jul 11 00:41:08.884335 env[1215]: time="2025-07-11T00:41:08.884292659Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 11 00:41:09.414871 kubelet[1419]: E0711 00:41:09.414810 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:09.500425 systemd[1]: run-containerd-runc-k8s.io-5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91-runc.jcH1uO.mount: Deactivated successfully. Jul 11 00:41:10.414975 kubelet[1419]: E0711 00:41:10.414929 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:10.436676 systemd-networkd[1043]: lxc3ded75330abc: Gained IPv6LL Jul 11 00:41:11.006929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138070982.mount: Deactivated successfully. 
Jul 11 00:41:11.415688 kubelet[1419]: E0711 00:41:11.415642 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:12.416391 kubelet[1419]: E0711 00:41:12.416343 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:12.743113 env[1215]: time="2025-07-11T00:41:12.743005057Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:12.745252 env[1215]: time="2025-07-11T00:41:12.745222762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:12.746795 env[1215]: time="2025-07-11T00:41:12.746765473Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:12.748539 env[1215]: time="2025-07-11T00:41:12.748493781Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:12.749386 env[1215]: time="2025-07-11T00:41:12.749357736Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 11 00:41:12.752167 env[1215]: time="2025-07-11T00:41:12.752130958Z" level=info msg="CreateContainer within sandbox \"5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 11 00:41:12.760894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3578247627.mount: Deactivated successfully. Jul 11 00:41:12.764646 env[1215]: time="2025-07-11T00:41:12.764610198Z" level=info msg="CreateContainer within sandbox \"5bb307cd22d19a094b3923fc43acbc7ca453dbf790a66ac8d5ba3a88d2667c91\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"cd0f541aa6354c60e20dd9797c8ed03a1da8ea241523ca355254df00f1086652\"" Jul 11 00:41:12.765191 env[1215]: time="2025-07-11T00:41:12.765136755Z" level=info msg="StartContainer for \"cd0f541aa6354c60e20dd9797c8ed03a1da8ea241523ca355254df00f1086652\"" Jul 11 00:41:12.779239 systemd[1]: Started cri-containerd-cd0f541aa6354c60e20dd9797c8ed03a1da8ea241523ca355254df00f1086652.scope. 
Jul 11 00:41:12.859363 env[1215]: time="2025-07-11T00:41:12.859317710Z" level=info msg="StartContainer for \"cd0f541aa6354c60e20dd9797c8ed03a1da8ea241523ca355254df00f1086652\" returns successfully" Jul 11 00:41:13.420117 kubelet[1419]: E0711 00:41:13.416439 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:13.624689 kubelet[1419]: I0711 00:41:13.624538 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.7579774590000001 podStartE2EDuration="5.624509087s" podCreationTimestamp="2025-07-11 00:41:08 +0000 UTC" firstStartedPulling="2025-07-11 00:41:08.88407146 +0000 UTC m=+29.607654953" lastFinishedPulling="2025-07-11 00:41:12.750603088 +0000 UTC m=+33.474186581" observedRunningTime="2025-07-11 00:41:13.624425687 +0000 UTC m=+34.348009180" watchObservedRunningTime="2025-07-11 00:41:13.624509087 +0000 UTC m=+34.348092580" Jul 11 00:41:14.417376 kubelet[1419]: E0711 00:41:14.417328 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:15.417721 kubelet[1419]: E0711 00:41:15.417676 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:16.417947 kubelet[1419]: E0711 00:41:16.417858 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:17.418429 kubelet[1419]: E0711 00:41:17.418387 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:18.394589 update_engine[1207]: I0711 00:41:18.394529 1207 update_attempter.cc:509] Updating boot flags... Jul 11 00:41:18.418958 kubelet[1419]: E0711 00:41:18.418914 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:19.419636 kubelet[1419]: E0711 00:41:19.419584 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:20.398205 kubelet[1419]: E0711 00:41:20.398163 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:20.420602 kubelet[1419]: E0711 00:41:20.420579 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:21.421127 kubelet[1419]: E0711 00:41:21.421092 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:22.358946 systemd[1]: Created slice kubepods-besteffort-pod2b2503a3_b87f_4e64_ac31_8ca340bc862f.slice. 
Jul 11 00:41:22.367283 kubelet[1419]: I0711 00:41:22.367239 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grw6d\" (UniqueName: \"kubernetes.io/projected/2b2503a3-b87f-4e64-ac31-8ca340bc862f-kube-api-access-grw6d\") pod \"test-pod-1\" (UID: \"2b2503a3-b87f-4e64-ac31-8ca340bc862f\") " pod="default/test-pod-1" Jul 11 00:41:22.367283 kubelet[1419]: I0711 00:41:22.367281 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9c6489b1-7201-4f92-ac30-84e2aec9d0a9\" (UniqueName: \"kubernetes.io/nfs/2b2503a3-b87f-4e64-ac31-8ca340bc862f-pvc-9c6489b1-7201-4f92-ac30-84e2aec9d0a9\") pod \"test-pod-1\" (UID: \"2b2503a3-b87f-4e64-ac31-8ca340bc862f\") " pod="default/test-pod-1" Jul 11 00:41:22.421599 kubelet[1419]: E0711 00:41:22.421563 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:22.496901 kernel: FS-Cache: Loaded Jul 11 00:41:22.526204 kernel: RPC: Registered named UNIX socket transport module. Jul 11 00:41:22.526328 kernel: RPC: Registered udp transport module. Jul 11 00:41:22.526920 kernel: RPC: Registered tcp transport module. Jul 11 00:41:22.527875 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 11 00:41:22.569865 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 11 00:41:22.698244 kernel: NFS: Registering the id_resolver key type Jul 11 00:41:22.698372 kernel: Key type id_resolver registered Jul 11 00:41:22.698396 kernel: Key type id_legacy registered Jul 11 00:41:22.730145 nfsidmap[2759]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 11 00:41:22.733691 nfsidmap[2762]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 11 00:41:22.961738 env[1215]: time="2025-07-11T00:41:22.961355979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b2503a3-b87f-4e64-ac31-8ca340bc862f,Namespace:default,Attempt:0,}" Jul 11 00:41:22.991648 systemd-networkd[1043]: lxc87f94149fc80: Link UP Jul 11 00:41:22.998875 kernel: eth0: renamed from tmp2ac22 Jul 11 00:41:23.007867 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 11 00:41:23.007952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc87f94149fc80: link becomes ready Jul 11 00:41:23.008513 systemd-networkd[1043]: lxc87f94149fc80: Gained carrier Jul 11 00:41:23.193926 env[1215]: time="2025-07-11T00:41:23.193822557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:41:23.193926 env[1215]: time="2025-07-11T00:41:23.193887236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:41:23.193926 env[1215]: time="2025-07-11T00:41:23.193901836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:41:23.194167 env[1215]: time="2025-07-11T00:41:23.194054396Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ac22f38380d40732e369aba8aea1224f522d270e7417375c82faf41296997b0 pid=2796 runtime=io.containerd.runc.v2 Jul 11 00:41:23.206087 systemd[1]: Started cri-containerd-2ac22f38380d40732e369aba8aea1224f522d270e7417375c82faf41296997b0.scope. 
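[Editor's note, not part of the captured log] The nfsidmap warnings above fire because the NFSv4 owner string "root@nfs-server-provisioner.default.svc.cluster.local" carries a domain that does not match the node's idmapping domain ('localdomain'). A tiny Go sketch of that check; the fall-back to "nobody" is an assumption about the usual idmapper behaviour, not something this log states:

```go
package main

import (
	"fmt"
	"strings"
)

// mapOwner mimics the check behind the nfsidmap warning above: an NFSv4
// owner string "user@domain" only maps to a local account when its domain
// matches the local idmapping domain; otherwise it falls back to "nobody".
func mapOwner(owner, localDomain string) string {
	at := strings.LastIndex(owner, "@")
	if at < 0 {
		return owner
	}
	user, domain := owner[:at], owner[at+1:]
	if !strings.EqualFold(domain, localDomain) {
		fmt.Printf("name '%s' does not map into domain '%s'\n", owner, localDomain)
		return "nobody"
	}
	return user
}

func main() {
	fmt.Println(mapOwner("root@nfs-server-provisioner.default.svc.cluster.local", "localdomain"))
}
```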
Jul 11 00:41:23.243699 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:41:23.258466 env[1215]: time="2025-07-11T00:41:23.258416233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2b2503a3-b87f-4e64-ac31-8ca340bc862f,Namespace:default,Attempt:0,} returns sandbox id \"2ac22f38380d40732e369aba8aea1224f522d270e7417375c82faf41296997b0\"" Jul 11 00:41:23.259486 env[1215]: time="2025-07-11T00:41:23.259452989Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 11 00:41:23.422741 kubelet[1419]: E0711 00:41:23.422705 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:23.489048 env[1215]: time="2025-07-11T00:41:23.489004465Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:23.492335 env[1215]: time="2025-07-11T00:41:23.492304934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:23.494565 env[1215]: time="2025-07-11T00:41:23.494301448Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:23.496481 env[1215]: time="2025-07-11T00:41:23.496446681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 11 00:41:23.497186 env[1215]: time="2025-07-11T00:41:23.497156359Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 11 00:41:23.500777 env[1215]: time="2025-07-11T00:41:23.500744868Z" level=info msg="CreateContainer within sandbox \"2ac22f38380d40732e369aba8aea1224f522d270e7417375c82faf41296997b0\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 11 00:41:23.511114 env[1215]: time="2025-07-11T00:41:23.511065555Z" level=info msg="CreateContainer within sandbox \"2ac22f38380d40732e369aba8aea1224f522d270e7417375c82faf41296997b0\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"08e3055dcea9f363af1d8d3d830ce06698e0c74972a50149b3f10dda616b97d9\"" Jul 11 00:41:23.511578 env[1215]: time="2025-07-11T00:41:23.511536754Z" level=info msg="StartContainer for \"08e3055dcea9f363af1d8d3d830ce06698e0c74972a50149b3f10dda616b97d9\"" Jul 11 00:41:23.527048 systemd[1]: Started cri-containerd-08e3055dcea9f363af1d8d3d830ce06698e0c74972a50149b3f10dda616b97d9.scope. 
Jul 11 00:41:23.558638 env[1215]: time="2025-07-11T00:41:23.558593165Z" level=info msg="StartContainer for \"08e3055dcea9f363af1d8d3d830ce06698e0c74972a50149b3f10dda616b97d9\" returns successfully" Jul 11 00:41:24.423753 kubelet[1419]: E0711 00:41:24.423703 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:24.836024 systemd-networkd[1043]: lxc87f94149fc80: Gained IPv6LL Jul 11 00:41:25.424425 kubelet[1419]: E0711 00:41:25.424373 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:26.425595 kubelet[1419]: E0711 00:41:26.425466 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:26.445337 kubelet[1419]: I0711 00:41:26.445061 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.205602112 podStartE2EDuration="18.445041436s" podCreationTimestamp="2025-07-11 00:41:08 +0000 UTC" firstStartedPulling="2025-07-11 00:41:23.259008231 +0000 UTC m=+43.982591684" lastFinishedPulling="2025-07-11 00:41:23.498447515 +0000 UTC m=+44.222031008" observedRunningTime="2025-07-11 00:41:23.642141101 +0000 UTC m=+44.365724594" watchObservedRunningTime="2025-07-11 00:41:26.445041436 +0000 UTC m=+47.168624929" Jul 11 00:41:26.505355 env[1215]: time="2025-07-11T00:41:26.505289759Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:41:26.510635 env[1215]: time="2025-07-11T00:41:26.510604745Z" level=info msg="StopContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" with timeout 2 (s)" Jul 11 00:41:26.511053 env[1215]: time="2025-07-11T00:41:26.511031464Z" level=info msg="Stop container \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" with signal terminated" Jul 11 00:41:26.518605 systemd-networkd[1043]: lxc_health: Link DOWN Jul 11 00:41:26.518613 systemd-networkd[1043]: lxc_health: Lost carrier Jul 11 00:41:26.555224 systemd[1]: cri-containerd-878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d.scope: Deactivated successfully. Jul 11 00:41:26.555605 systemd[1]: cri-containerd-878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d.scope: Consumed 6.374s CPU time. Jul 11 00:41:26.573638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d-rootfs.mount: Deactivated successfully. 
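[Editor's note, not part of the captured log] The teardown that follows stops the cilium-agent container with "StopContainer ... with timeout 2 (s)": the runtime delivers SIGTERM ("with signal terminated") and escalates only if the process is still alive when the timeout expires. A generic Go sketch of that terminate-then-kill pattern; the workload command and the 2-second timeout are placeholders, and this is not containerd's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // placeholder workload
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	// Ask nicely first, mirroring "Stop container ... with signal terminated".
	_ = cmd.Process.Signal(syscall.SIGTERM)

	select {
	case err := <-done:
		fmt.Println("exited after SIGTERM:", err)
	case <-time.After(2 * time.Second): // the "timeout 2 (s)" from the log
		_ = cmd.Process.Kill() // escalate to SIGKILL
		fmt.Println("killed after timeout:", <-done)
	}
}
```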
Jul 11 00:41:26.606522 env[1215]: time="2025-07-11T00:41:26.606474696Z" level=info msg="shim disconnected" id=878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d Jul 11 00:41:26.606522 env[1215]: time="2025-07-11T00:41:26.606517896Z" level=warning msg="cleaning up after shim disconnected" id=878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d namespace=k8s.io Jul 11 00:41:26.606522 env[1215]: time="2025-07-11T00:41:26.606527616Z" level=info msg="cleaning up dead shim" Jul 11 00:41:26.613522 env[1215]: time="2025-07-11T00:41:26.613478518Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2926 runtime=io.containerd.runc.v2\n" Jul 11 00:41:26.616224 env[1215]: time="2025-07-11T00:41:26.616179190Z" level=info msg="StopContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" returns successfully" Jul 11 00:41:26.616917 env[1215]: time="2025-07-11T00:41:26.616891229Z" level=info msg="StopPodSandbox for \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\"" Jul 11 00:41:26.616966 env[1215]: time="2025-07-11T00:41:26.616948908Z" level=info msg="Container to stop \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:41:26.616993 env[1215]: time="2025-07-11T00:41:26.616962508Z" level=info msg="Container to stop \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:41:26.616993 env[1215]: time="2025-07-11T00:41:26.616974668Z" level=info msg="Container to stop \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:41:26.616993 env[1215]: time="2025-07-11T00:41:26.616985388Z" level=info msg="Container to stop \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:41:26.617070 env[1215]: time="2025-07-11T00:41:26.616995628Z" level=info msg="Container to stop \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:41:26.619924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2-shm.mount: Deactivated successfully. Jul 11 00:41:26.625318 systemd[1]: cri-containerd-fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2.scope: Deactivated successfully. Jul 11 00:41:26.646822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2-rootfs.mount: Deactivated successfully. 
Jul 11 00:41:26.655585 env[1215]: time="2025-07-11T00:41:26.655542448Z" level=info msg="shim disconnected" id=fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2 Jul 11 00:41:26.656338 env[1215]: time="2025-07-11T00:41:26.656314486Z" level=warning msg="cleaning up after shim disconnected" id=fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2 namespace=k8s.io Jul 11 00:41:26.656446 env[1215]: time="2025-07-11T00:41:26.656431406Z" level=info msg="cleaning up dead shim" Jul 11 00:41:26.664480 env[1215]: time="2025-07-11T00:41:26.664445585Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\n" Jul 11 00:41:26.664918 env[1215]: time="2025-07-11T00:41:26.664891144Z" level=info msg="TearDown network for sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" successfully" Jul 11 00:41:26.665039 env[1215]: time="2025-07-11T00:41:26.665018223Z" level=info msg="StopPodSandbox for \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" returns successfully" Jul 11 00:41:26.795624 kubelet[1419]: I0711 00:41:26.795513 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-config-path\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795624 kubelet[1419]: I0711 00:41:26.795560 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-bpf-maps\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795624 kubelet[1419]: I0711 00:41:26.795578 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cni-path\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795624 kubelet[1419]: I0711 00:41:26.795596 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-lib-modules\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795624 kubelet[1419]: I0711 00:41:26.795614 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hubble-tls\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795633 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-etc-cni-netd\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795647 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-xtables-lock\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " 
Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795662 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hostproc\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795679 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-clustermesh-secrets\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795695 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-cgroup\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.795913 kubelet[1419]: I0711 00:41:26.795711 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4s9j\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-kube-api-access-l4s9j\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.796062 kubelet[1419]: I0711 00:41:26.795725 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-kernel\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.796062 kubelet[1419]: I0711 00:41:26.795740 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-net\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.796062 kubelet[1419]: I0711 00:41:26.795755 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-run\") pod \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\" (UID: \"f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00\") " Jul 11 00:41:26.796062 kubelet[1419]: I0711 00:41:26.795826 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.796870 kubelet[1419]: I0711 00:41:26.796556 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.796870 kubelet[1419]: I0711 00:41:26.796603 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.796870 kubelet[1419]: I0711 00:41:26.796620 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.796870 kubelet[1419]: I0711 00:41:26.796634 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.797716 kubelet[1419]: I0711 00:41:26.797688 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 00:41:26.797789 kubelet[1419]: I0711 00:41:26.797772 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.799199 kubelet[1419]: I0711 00:41:26.799103 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.799199 kubelet[1419]: I0711 00:41:26.799139 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.799199 kubelet[1419]: I0711 00:41:26.799161 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.799199 kubelet[1419]: I0711 00:41:26.799180 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:26.803545 kubelet[1419]: I0711 00:41:26.803516 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:41:26.803735 kubelet[1419]: I0711 00:41:26.803716 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:41:26.804011 systemd[1]: var-lib-kubelet-pods-f1c80a0b\x2db4d3\x2d4bc0\x2d9af5\x2dcba8e8a6be00-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl4s9j.mount: Deactivated successfully. Jul 11 00:41:26.804110 systemd[1]: var-lib-kubelet-pods-f1c80a0b\x2db4d3\x2d4bc0\x2d9af5\x2dcba8e8a6be00-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 11 00:41:26.804166 systemd[1]: var-lib-kubelet-pods-f1c80a0b\x2db4d3\x2d4bc0\x2d9af5\x2dcba8e8a6be00-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:41:26.807376 kubelet[1419]: I0711 00:41:26.807331 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-kube-api-access-l4s9j" (OuterVolumeSpecName: "kube-api-access-l4s9j") pod "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" (UID: "f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00"). InnerVolumeSpecName "kube-api-access-l4s9j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:41:26.896767 kubelet[1419]: I0711 00:41:26.896719 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-etc-cni-netd\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896767 kubelet[1419]: I0711 00:41:26.896758 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-xtables-lock\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896767 kubelet[1419]: I0711 00:41:26.896769 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hostproc\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896779 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-clustermesh-secrets\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896790 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-cgroup\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896799 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l4s9j\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-kube-api-access-l4s9j\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896808 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-kernel\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896817 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-host-proc-sys-net\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896825 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-run\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896847 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-bpf-maps\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.896971 kubelet[1419]: I0711 00:41:26.896857 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cni-path\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.897157 kubelet[1419]: I0711 00:41:26.896865 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-lib-modules\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:26.897157 kubelet[1419]: I0711 00:41:26.896872 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-hubble-tls\") on node \"10.0.0.127\" 
DevicePath \"\"" Jul 11 00:41:26.897157 kubelet[1419]: I0711 00:41:26.896880 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00-cilium-config-path\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:27.426007 kubelet[1419]: E0711 00:41:27.425967 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:27.649903 kubelet[1419]: I0711 00:41:27.649865 1419 scope.go:117] "RemoveContainer" containerID="878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d" Jul 11 00:41:27.653432 systemd[1]: Removed slice kubepods-burstable-podf1c80a0b_b4d3_4bc0_9af5_cba8e8a6be00.slice. Jul 11 00:41:27.653516 systemd[1]: kubepods-burstable-podf1c80a0b_b4d3_4bc0_9af5_cba8e8a6be00.slice: Consumed 6.587s CPU time. Jul 11 00:41:27.659656 env[1215]: time="2025-07-11T00:41:27.654274556Z" level=info msg="RemoveContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\"" Jul 11 00:41:27.669755 env[1215]: time="2025-07-11T00:41:27.668588521Z" level=info msg="RemoveContainer for \"878d0e2e4f6cc309de95d39e79d7d7e18e709748d110c9cd09431bb58f2b408d\" returns successfully" Jul 11 00:41:27.669871 kubelet[1419]: I0711 00:41:27.668880 1419 scope.go:117] "RemoveContainer" containerID="c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203" Jul 11 00:41:27.670516 env[1215]: time="2025-07-11T00:41:27.670454876Z" level=info msg="RemoveContainer for \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\"" Jul 11 00:41:27.672912 env[1215]: time="2025-07-11T00:41:27.672864190Z" level=info msg="RemoveContainer for \"c6fbff7ee6d70c1b84b25220a6f4a803478f576b35925a8741be90ed85444203\" returns successfully" Jul 11 00:41:27.673088 kubelet[1419]: I0711 00:41:27.673045 1419 scope.go:117] "RemoveContainer" containerID="68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1" Jul 11 00:41:27.674361 env[1215]: time="2025-07-11T00:41:27.674306347Z" level=info msg="RemoveContainer for \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\"" Jul 11 00:41:27.676505 env[1215]: time="2025-07-11T00:41:27.676385742Z" level=info msg="RemoveContainer for \"68dfc1a0db1e79ad90b02661c1ca48d80e78208a88aca1cb4de8edb288715cd1\" returns successfully" Jul 11 00:41:27.677070 kubelet[1419]: I0711 00:41:27.676808 1419 scope.go:117] "RemoveContainer" containerID="f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3" Jul 11 00:41:27.677971 env[1215]: time="2025-07-11T00:41:27.677939418Z" level=info msg="RemoveContainer for \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\"" Jul 11 00:41:27.680137 env[1215]: time="2025-07-11T00:41:27.680081773Z" level=info msg="RemoveContainer for \"f101b4be2960a92d701f1d32d4f2ff30d766cc3e6900f29cacd1f062fb13f1a3\" returns successfully" Jul 11 00:41:27.680262 kubelet[1419]: I0711 00:41:27.680241 1419 scope.go:117] "RemoveContainer" containerID="f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e" Jul 11 00:41:27.681277 env[1215]: time="2025-07-11T00:41:27.681244970Z" level=info msg="RemoveContainer for \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\"" Jul 11 00:41:27.683524 env[1215]: time="2025-07-11T00:41:27.683475765Z" level=info msg="RemoveContainer for \"f23e97a2ed6fd873514a86ef2157727331e8279966cf71ba9d82625f6c7b6a4e\" returns successfully" Jul 11 00:41:28.427031 kubelet[1419]: E0711 00:41:28.426963 1419 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:28.556956 kubelet[1419]: I0711 00:41:28.556920 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" path="/var/lib/kubelet/pods/f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00/volumes" Jul 11 00:41:29.172421 kubelet[1419]: I0711 00:41:29.172358 1419 memory_manager.go:355] "RemoveStaleState removing state" podUID="f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00" containerName="cilium-agent" Jul 11 00:41:29.174580 kubelet[1419]: I0711 00:41:29.174546 1419 status_manager.go:890] "Failed to get status for pod" podUID="4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d" pod="kube-system/cilium-operator-6c4d7847fc-5v25n" err="pods \"cilium-operator-6c4d7847fc-5v25n\" is forbidden: User \"system:node:10.0.0.127\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.127' and this object" Jul 11 00:41:29.174674 kubelet[1419]: W0711 00:41:29.174622 1419 reflector.go:569] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.127" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.127' and this object Jul 11 00:41:29.174701 kubelet[1419]: E0711 00:41:29.174654 1419 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cilium-config\" is forbidden: User \"system:node:10.0.0.127\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.127' and this object" logger="UnhandledError" Jul 11 00:41:29.177269 systemd[1]: Created slice kubepods-besteffort-pod4f5d1c18_d5a9_4b7a_bc6d_56934db6e69d.slice. Jul 11 00:41:29.184016 systemd[1]: Created slice kubepods-burstable-pod73189c4a_2880_4bb0_98e2_1b956e6bd78d.slice. 
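The reconciler entries above for pod f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00 walk each mounted volume through UnmountVolume.TearDown, record it as detached, and finally allow the orphaned volumes directory to be removed. The sketch below compresses that loop into a few lines; the volume type and plugin handling are hypothetical simplifications of the kubelet's operation executor.

```go
package main

import "fmt"

// volume is a hypothetical stand-in for a mounted pod volume; the real kubelet
// reconciler and operation executor are considerably more involved.
type volume struct {
	name   string
	plugin string
}

// tearDownPodVolumes unmounts every volume of a deleted pod and reports which
// ones are detached, mirroring the UnmountVolume / "Volume detached" entries.
func tearDownPodVolumes(podUID string, mounted []volume) []string {
	detached := make([]string, 0, len(mounted))
	for _, v := range mounted {
		// In the real kubelet this step is plugin-specific (host-path is a
		// no-op, secret/projected volumes remove a tmpfs mount, and so on).
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q (pod %s, plugin %s)\n",
			v.name, podUID, v.plugin)
		detached = append(detached, v.name)
	}
	return detached
}

func main() {
	vols := []volume{
		{"cilium-run", "kubernetes.io/host-path"},
		{"cilium-config-path", "kubernetes.io/configmap"},
		{"clustermesh-secrets", "kubernetes.io/secret"},
		{"hubble-tls", "kubernetes.io/projected"},
	}
	for _, name := range tearDownPodVolumes("f1c80a0b-b4d3-4bc0-9af5-cba8e8a6be00", vols) {
		fmt.Printf("Volume detached for volume %q\n", name)
	}
	// Only once every volume is detached can the pod's volumes dir be removed,
	// matching the "Cleaned up orphaned pod volumes dir" entry above.
}
```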
Jul 11 00:41:29.309884 kubelet[1419]: I0711 00:41:29.309834 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-bpf-maps\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.309884 kubelet[1419]: I0711 00:41:29.309887 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-cgroup\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310062 kubelet[1419]: I0711 00:41:29.309909 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cni-path\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310062 kubelet[1419]: I0711 00:41:29.309926 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-ipsec-secrets\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310062 kubelet[1419]: I0711 00:41:29.309944 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zh4c\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-kube-api-access-7zh4c\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310062 kubelet[1419]: I0711 00:41:29.310000 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-run\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310062 kubelet[1419]: I0711 00:41:29.310039 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-net\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310178 kubelet[1419]: I0711 00:41:29.310067 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-5v25n\" (UID: \"4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d\") " pod="kube-system/cilium-operator-6c4d7847fc-5v25n" Jul 11 00:41:29.310178 kubelet[1419]: I0711 00:41:29.310084 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdd9g\" (UniqueName: \"kubernetes.io/projected/4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d-kube-api-access-zdd9g\") pod \"cilium-operator-6c4d7847fc-5v25n\" (UID: \"4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d\") " pod="kube-system/cilium-operator-6c4d7847fc-5v25n" Jul 11 00:41:29.310178 kubelet[1419]: I0711 00:41:29.310107 1419 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-xtables-lock\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310178 kubelet[1419]: I0711 00:41:29.310126 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-clustermesh-secrets\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310178 kubelet[1419]: I0711 00:41:29.310141 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hostproc\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310280 kubelet[1419]: I0711 00:41:29.310155 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-etc-cni-netd\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310280 kubelet[1419]: I0711 00:41:29.310173 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-lib-modules\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310280 kubelet[1419]: I0711 00:41:29.310187 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hubble-tls\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310280 kubelet[1419]: I0711 00:41:29.310210 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-config-path\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.310280 kubelet[1419]: I0711 00:41:29.310227 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-kernel\") pod \"cilium-gbxlq\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " pod="kube-system/cilium-gbxlq" Jul 11 00:41:29.342716 kubelet[1419]: E0711 00:41:29.342665 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-7zh4c lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-gbxlq" podUID="73189c4a-2880-4bb0-98e2-1b956e6bd78d" Jul 11 00:41:29.427731 kubelet[1419]: E0711 00:41:29.427636 1419 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:29.812591 kubelet[1419]: I0711 00:41:29.812468 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cni-path\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812591 kubelet[1419]: I0711 00:41:29.812525 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-lib-modules\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812591 kubelet[1419]: I0711 00:41:29.812564 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hubble-tls\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812591 kubelet[1419]: I0711 00:41:29.812592 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-cgroup\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812620 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-kernel\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812649 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-xtables-lock\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812665 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hostproc\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812683 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-run\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812697 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-etc-cni-netd\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.812782 kubelet[1419]: I0711 00:41:29.812715 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-ipsec-secrets\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 
00:41:29.813095 kubelet[1419]: I0711 00:41:29.812733 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zh4c\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-kube-api-access-7zh4c\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.813095 kubelet[1419]: I0711 00:41:29.812747 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-net\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.813095 kubelet[1419]: I0711 00:41:29.812762 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-clustermesh-secrets\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.813095 kubelet[1419]: I0711 00:41:29.812785 1419 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-bpf-maps\") pod \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\" (UID: \"73189c4a-2880-4bb0-98e2-1b956e6bd78d\") " Jul 11 00:41:29.813095 kubelet[1419]: I0711 00:41:29.812854 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813220 kubelet[1419]: I0711 00:41:29.812817 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813220 kubelet[1419]: I0711 00:41:29.812881 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hostproc" (OuterVolumeSpecName: "hostproc") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813220 kubelet[1419]: I0711 00:41:29.812899 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813220 kubelet[1419]: I0711 00:41:29.812899 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cni-path" (OuterVolumeSpecName: "cni-path") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813220 kubelet[1419]: I0711 00:41:29.812913 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813329 kubelet[1419]: I0711 00:41:29.812919 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813329 kubelet[1419]: I0711 00:41:29.813241 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813329 kubelet[1419]: I0711 00:41:29.813264 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.813329 kubelet[1419]: I0711 00:41:29.813268 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 11 00:41:29.816710 systemd[1]: var-lib-kubelet-pods-73189c4a\x2d2880\x2d4bb0\x2d98e2\x2d1b956e6bd78d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7zh4c.mount: Deactivated successfully. Jul 11 00:41:29.816803 systemd[1]: var-lib-kubelet-pods-73189c4a\x2d2880\x2d4bb0\x2d98e2\x2d1b956e6bd78d-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 11 00:41:29.816954 kubelet[1419]: I0711 00:41:29.816893 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:41:29.817453 kubelet[1419]: I0711 00:41:29.817330 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-kube-api-access-7zh4c" (OuterVolumeSpecName: "kube-api-access-7zh4c") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "kube-api-access-7zh4c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:41:29.817743 kubelet[1419]: I0711 00:41:29.817651 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 00:41:29.818206 kubelet[1419]: I0711 00:41:29.818169 1419 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "73189c4a-2880-4bb0-98e2-1b956e6bd78d" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913561 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-cgroup\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913596 1419 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hubble-tls\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913606 1419 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-xtables-lock\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913615 1419 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-hostproc\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913623 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-run\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913632 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-kernel\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913642 1419 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-etc-cni-netd\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913632 kubelet[1419]: I0711 00:41:29.913650 1419 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7zh4c\" (UniqueName: \"kubernetes.io/projected/73189c4a-2880-4bb0-98e2-1b956e6bd78d-kube-api-access-7zh4c\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913658 1419 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-host-proc-sys-net\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913667 1419 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-clustermesh-secrets\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913675 1419 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-bpf-maps\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913682 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-ipsec-secrets\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913689 1419 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cni-path\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:29.913968 kubelet[1419]: I0711 00:41:29.913696 1419 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73189c4a-2880-4bb0-98e2-1b956e6bd78d-lib-modules\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:30.411519 kubelet[1419]: E0711 00:41:30.411458 1419 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:41:30.411677 kubelet[1419]: E0711 00:41:30.411550 1419 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-config-path podName:73189c4a-2880-4bb0-98e2-1b956e6bd78d nodeName:}" failed. No retries permitted until 2025-07-11 00:41:30.911528254 +0000 UTC m=+51.635111747 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-config-path") pod "cilium-gbxlq" (UID: "73189c4a-2880-4bb0-98e2-1b956e6bd78d") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:41:30.411677 kubelet[1419]: E0711 00:41:30.411467 1419 configmap.go:193] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:41:30.411677 kubelet[1419]: E0711 00:41:30.411653 1419 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d-cilium-config-path podName:4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d nodeName:}" failed. No retries permitted until 2025-07-11 00:41:30.911626374 +0000 UTC m=+51.635209867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d-cilium-config-path") pod "cilium-operator-6c4d7847fc-5v25n" (UID: "4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:41:30.415121 systemd[1]: var-lib-kubelet-pods-73189c4a\x2d2880\x2d4bb0\x2d98e2\x2d1b956e6bd78d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 11 00:41:30.415214 systemd[1]: var-lib-kubelet-pods-73189c4a\x2d2880\x2d4bb0\x2d98e2\x2d1b956e6bd78d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 11 00:41:30.428177 kubelet[1419]: E0711 00:41:30.428133 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 11 00:41:30.550391 kubelet[1419]: E0711 00:41:30.550341 1419 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 11 00:41:30.559964 systemd[1]: Removed slice kubepods-burstable-pod73189c4a_2880_4bb0_98e2_1b956e6bd78d.slice. Jul 11 00:41:30.693228 systemd[1]: Created slice kubepods-burstable-podcf0cedf9_67b2_42ec_9e74_e17807a580c9.slice. Jul 11 00:41:30.819042 kubelet[1419]: I0711 00:41:30.818995 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-xtables-lock\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819236 kubelet[1419]: I0711 00:41:30.819217 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cf0cedf9-67b2-42ec-9e74-e17807a580c9-cilium-config-path\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819385 kubelet[1419]: I0711 00:41:30.819356 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-etc-cni-netd\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819493 kubelet[1419]: I0711 00:41:30.819478 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sk2x\" (UniqueName: \"kubernetes.io/projected/cf0cedf9-67b2-42ec-9e74-e17807a580c9-kube-api-access-2sk2x\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819594 kubelet[1419]: I0711 00:41:30.819580 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-host-proc-sys-kernel\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819686 kubelet[1419]: I0711 00:41:30.819673 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cf0cedf9-67b2-42ec-9e74-e17807a580c9-hubble-tls\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819778 kubelet[1419]: I0711 00:41:30.819765 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/cf0cedf9-67b2-42ec-9e74-e17807a580c9-cilium-ipsec-secrets\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.819895 kubelet[1419]: I0711 00:41:30.819882 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-bpf-maps\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820001 kubelet[1419]: I0711 00:41:30.819984 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-hostproc\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820099 kubelet[1419]: I0711 00:41:30.820084 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-lib-modules\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820186 kubelet[1419]: I0711 00:41:30.820173 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cf0cedf9-67b2-42ec-9e74-e17807a580c9-clustermesh-secrets\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820270 kubelet[1419]: I0711 00:41:30.820257 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-cilium-cgroup\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820353 kubelet[1419]: I0711 00:41:30.820340 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-cni-path\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820449 kubelet[1419]: I0711 00:41:30.820436 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-host-proc-sys-net\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820538 kubelet[1419]: I0711 00:41:30.820524 1419 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cf0cedf9-67b2-42ec-9e74-e17807a580c9-cilium-run\") pod \"cilium-mfjts\" (UID: \"cf0cedf9-67b2-42ec-9e74-e17807a580c9\") " pod="kube-system/cilium-mfjts" Jul 11 00:41:30.820660 kubelet[1419]: I0711 00:41:30.820635 1419 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/73189c4a-2880-4bb0-98e2-1b956e6bd78d-cilium-config-path\") on node \"10.0.0.127\" DevicePath \"\"" Jul 11 00:41:30.979611 kubelet[1419]: E0711 00:41:30.979514 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:41:30.980853 env[1215]: time="2025-07-11T00:41:30.980746470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5v25n,Uid:4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d,Namespace:kube-system,Attempt:0,}" Jul 11 00:41:30.992544 
env[1215]: time="2025-07-11T00:41:30.992481647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:41:30.992632 env[1215]: time="2025-07-11T00:41:30.992558167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:41:30.992632 env[1215]: time="2025-07-11T00:41:30.992585927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:41:30.992762 env[1215]: time="2025-07-11T00:41:30.992733326Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22c45626a146bacade43757a6188d794787581d4e1211c8fbdbccb6f6243d900 pid=2994 runtime=io.containerd.runc.v2 Jul 11 00:41:31.002321 systemd[1]: Started cri-containerd-22c45626a146bacade43757a6188d794787581d4e1211c8fbdbccb6f6243d900.scope. Jul 11 00:41:31.005357 kubelet[1419]: E0711 00:41:31.005324 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:41:31.006147 env[1215]: time="2025-07-11T00:41:31.006109060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfjts,Uid:cf0cedf9-67b2-42ec-9e74-e17807a580c9,Namespace:kube-system,Attempt:0,}" Jul 11 00:41:31.026094 env[1215]: time="2025-07-11T00:41:31.025137304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:41:31.026094 env[1215]: time="2025-07-11T00:41:31.025224624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:41:31.026094 env[1215]: time="2025-07-11T00:41:31.025253184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:41:31.026094 env[1215]: time="2025-07-11T00:41:31.025455183Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d pid=3028 runtime=io.containerd.runc.v2 Jul 11 00:41:31.035117 systemd[1]: Started cri-containerd-2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d.scope. 
Jul 11 00:41:31.067033 env[1215]: time="2025-07-11T00:41:31.065788187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mfjts,Uid:cf0cedf9-67b2-42ec-9e74-e17807a580c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\""
Jul 11 00:41:31.068306 kubelet[1419]: E0711 00:41:31.068286 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:31.069508 env[1215]: time="2025-07-11T00:41:31.069474460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-5v25n,Uid:4f5d1c18-d5a9-4b7a-bc6d-56934db6e69d,Namespace:kube-system,Attempt:0,} returns sandbox id \"22c45626a146bacade43757a6188d794787581d4e1211c8fbdbccb6f6243d900\""
Jul 11 00:41:31.073407 kubelet[1419]: E0711 00:41:31.070027 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:31.074079 env[1215]: time="2025-07-11T00:41:31.074032372Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 11 00:41:31.074146 env[1215]: time="2025-07-11T00:41:31.074111172Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 11 00:41:31.083884 env[1215]: time="2025-07-11T00:41:31.083829633Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d\""
Jul 11 00:41:31.084326 env[1215]: time="2025-07-11T00:41:31.084295473Z" level=info msg="StartContainer for \"20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d\""
Jul 11 00:41:31.096919 systemd[1]: Started cri-containerd-20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d.scope.
Jul 11 00:41:31.126110 env[1215]: time="2025-07-11T00:41:31.126050394Z" level=info msg="StartContainer for \"20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d\" returns successfully"
Jul 11 00:41:31.139206 systemd[1]: cri-containerd-20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d.scope: Deactivated successfully.
Jul 11 00:41:31.163812 env[1215]: time="2025-07-11T00:41:31.163764283Z" level=info msg="shim disconnected" id=20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d
Jul 11 00:41:31.163812 env[1215]: time="2025-07-11T00:41:31.163809163Z" level=warning msg="cleaning up after shim disconnected" id=20ff94d519ca8603c28be2a7f5bad844fe39fcde15266c92269ee5d814b5222d namespace=k8s.io
Jul 11 00:41:31.163812 env[1215]: time="2025-07-11T00:41:31.163818643Z" level=info msg="cleaning up dead shim"
Jul 11 00:41:31.170020 env[1215]: time="2025-07-11T00:41:31.169975951Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3115 runtime=io.containerd.runc.v2\n"
Jul 11 00:41:31.428299 kubelet[1419]: E0711 00:41:31.428260 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:31.658204 kubelet[1419]: E0711 00:41:31.657647 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:31.659778 env[1215]: time="2025-07-11T00:41:31.659738068Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 11 00:41:31.670794 env[1215]: time="2025-07-11T00:41:31.670750728Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598\""
Jul 11 00:41:31.671494 env[1215]: time="2025-07-11T00:41:31.671471166Z" level=info msg="StartContainer for \"215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598\""
Jul 11 00:41:31.687389 systemd[1]: Started cri-containerd-215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598.scope.
Jul 11 00:41:31.711257 kubelet[1419]: I0711 00:41:31.711210 1419 setters.go:602] "Node became not ready" node="10.0.0.127" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-11T00:41:31Z","lastTransitionTime":"2025-07-11T00:41:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 11 00:41:31.728652 env[1215]: time="2025-07-11T00:41:31.728610699Z" level=info msg="StartContainer for \"215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598\" returns successfully"
Jul 11 00:41:31.742227 systemd[1]: cri-containerd-215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598.scope: Deactivated successfully.
Jul 11 00:41:31.759930 env[1215]: time="2025-07-11T00:41:31.759884760Z" level=info msg="shim disconnected" id=215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598
Jul 11 00:41:31.759930 env[1215]: time="2025-07-11T00:41:31.759929400Z" level=warning msg="cleaning up after shim disconnected" id=215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598 namespace=k8s.io
Jul 11 00:41:31.760121 env[1215]: time="2025-07-11T00:41:31.759940680Z" level=info msg="cleaning up dead shim"
Jul 11 00:41:31.766157 env[1215]: time="2025-07-11T00:41:31.766122788Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3177 runtime=io.containerd.runc.v2\n"
Jul 11 00:41:32.415314 systemd[1]: run-containerd-runc-k8s.io-215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598-runc.P5OfvN.mount: Deactivated successfully.
Jul 11 00:41:32.415427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-215b8004eff4348824df3d95b7534bc9f0b64ad6381c4235d5651f3bc0066598-rootfs.mount: Deactivated successfully.
Jul 11 00:41:32.428615 kubelet[1419]: E0711 00:41:32.428570 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:32.557206 kubelet[1419]: I0711 00:41:32.557157 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73189c4a-2880-4bb0-98e2-1b956e6bd78d" path="/var/lib/kubelet/pods/73189c4a-2880-4bb0-98e2-1b956e6bd78d/volumes"
Jul 11 00:41:32.662654 kubelet[1419]: E0711 00:41:32.662615 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:32.664417 env[1215]: time="2025-07-11T00:41:32.664376254Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 11 00:41:32.681305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232293145.mount: Deactivated successfully.
Jul 11 00:41:32.686022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2836372017.mount: Deactivated successfully.
Jul 11 00:41:32.690742 env[1215]: time="2025-07-11T00:41:32.690682487Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6\""
Jul 11 00:41:32.691300 env[1215]: time="2025-07-11T00:41:32.691261206Z" level=info msg="StartContainer for \"8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6\""
Jul 11 00:41:32.705145 systemd[1]: Started cri-containerd-8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6.scope.
Jul 11 00:41:32.739515 env[1215]: time="2025-07-11T00:41:32.739472321Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:41:32.740974 env[1215]: time="2025-07-11T00:41:32.740936199Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:41:32.743261 env[1215]: time="2025-07-11T00:41:32.743233675Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 11 00:41:32.743567 env[1215]: time="2025-07-11T00:41:32.743536114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 11 00:41:32.743871 systemd[1]: cri-containerd-8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6.scope: Deactivated successfully.
Jul 11 00:41:32.747198 env[1215]: time="2025-07-11T00:41:32.747164748Z" level=info msg="CreateContainer within sandbox \"22c45626a146bacade43757a6188d794787581d4e1211c8fbdbccb6f6243d900\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 11 00:41:32.747474 env[1215]: time="2025-07-11T00:41:32.747426147Z" level=info msg="StartContainer for \"8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6\" returns successfully"
Jul 11 00:41:32.819531 env[1215]: time="2025-07-11T00:41:32.819454940Z" level=info msg="CreateContainer within sandbox \"22c45626a146bacade43757a6188d794787581d4e1211c8fbdbccb6f6243d900\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"704acefbd8d0b6a906a548f97c134ea14e81abcfad06500c5b12638c59865b3a\""
Jul 11 00:41:32.820178 env[1215]: time="2025-07-11T00:41:32.820108979Z" level=info msg="StartContainer for \"704acefbd8d0b6a906a548f97c134ea14e81abcfad06500c5b12638c59865b3a\""
Jul 11 00:41:32.821528 env[1215]: time="2025-07-11T00:41:32.821494856Z" level=info msg="shim disconnected" id=8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6
Jul 11 00:41:32.821528 env[1215]: time="2025-07-11T00:41:32.821532496Z" level=warning msg="cleaning up after shim disconnected" id=8f0747b7d1bfb517249873c3594cc7bc99bbcc1978cb7b47c5f4fa0ee89132f6 namespace=k8s.io
Jul 11 00:41:32.821652 env[1215]: time="2025-07-11T00:41:32.821542936Z" level=info msg="cleaning up dead shim"
Jul 11 00:41:32.829311 env[1215]: time="2025-07-11T00:41:32.829274483Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3236 runtime=io.containerd.runc.v2\n"
Jul 11 00:41:32.835455 systemd[1]: Started cri-containerd-704acefbd8d0b6a906a548f97c134ea14e81abcfad06500c5b12638c59865b3a.scope.
Jul 11 00:41:32.876897 env[1215]: time="2025-07-11T00:41:32.876831399Z" level=info msg="StartContainer for \"704acefbd8d0b6a906a548f97c134ea14e81abcfad06500c5b12638c59865b3a\" returns successfully"
Jul 11 00:41:33.428989 kubelet[1419]: E0711 00:41:33.428942 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:33.666968 kubelet[1419]: E0711 00:41:33.666939 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:33.668530 kubelet[1419]: E0711 00:41:33.668499 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:33.669274 env[1215]: time="2025-07-11T00:41:33.669236872Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 11 00:41:33.679861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162743793.mount: Deactivated successfully.
Jul 11 00:41:33.681125 env[1215]: time="2025-07-11T00:41:33.681088213Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3\""
Jul 11 00:41:33.681547 env[1215]: time="2025-07-11T00:41:33.681500852Z" level=info msg="StartContainer for \"a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3\""
Jul 11 00:41:33.700600 systemd[1]: Started cri-containerd-a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3.scope.
Jul 11 00:41:33.739983 systemd[1]: cri-containerd-a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3.scope: Deactivated successfully.
Jul 11 00:41:33.744000 env[1215]: time="2025-07-11T00:41:33.743955149Z" level=info msg="StartContainer for \"a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3\" returns successfully"
Jul 11 00:41:33.760963 env[1215]: time="2025-07-11T00:41:33.760917481Z" level=info msg="shim disconnected" id=a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3
Jul 11 00:41:33.760963 env[1215]: time="2025-07-11T00:41:33.760961481Z" level=warning msg="cleaning up after shim disconnected" id=a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3 namespace=k8s.io
Jul 11 00:41:33.761151 env[1215]: time="2025-07-11T00:41:33.760970721Z" level=info msg="cleaning up dead shim"
Jul 11 00:41:33.767235 env[1215]: time="2025-07-11T00:41:33.767201750Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:41:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n"
Jul 11 00:41:34.415657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a57b93473e4f4df669e4531cebb64e34d4bf8f36bede527a598df9481e285fc3-rootfs.mount: Deactivated successfully.
Jul 11 00:41:34.429803 kubelet[1419]: E0711 00:41:34.429751 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:34.672104 kubelet[1419]: E0711 00:41:34.671997 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:34.672629 kubelet[1419]: E0711 00:41:34.672588 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:34.674304 env[1215]: time="2025-07-11T00:41:34.674267556Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 11 00:41:34.688057 kubelet[1419]: I0711 00:41:34.687908 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-5v25n" podStartSLOduration=4.017015515 podStartE2EDuration="5.687893175s" podCreationTimestamp="2025-07-11 00:41:29 +0000 UTC" firstStartedPulling="2025-07-11 00:41:31.073814612 +0000 UTC m=+51.797398065" lastFinishedPulling="2025-07-11 00:41:32.744692232 +0000 UTC m=+53.468275725" observedRunningTime="2025-07-11 00:41:33.691692315 +0000 UTC m=+54.415275808" watchObservedRunningTime="2025-07-11 00:41:34.687893175 +0000 UTC m=+55.411476668"
Jul 11 00:41:34.691422 env[1215]: time="2025-07-11T00:41:34.691309689Z" level=info msg="CreateContainer within sandbox \"2a0fa8e50740ba61e371f73fa0fb2a580a51e52b9cce3628e8e9cd5ba2a9575d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900\""
Jul 11 00:41:34.692147 env[1215]: time="2025-07-11T00:41:34.692095328Z" level=info msg="StartContainer for \"187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900\""
Jul 11 00:41:34.709585 systemd[1]: Started cri-containerd-187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900.scope.
Jul 11 00:41:34.747781 env[1215]: time="2025-07-11T00:41:34.747735682Z" level=info msg="StartContainer for \"187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900\" returns successfully"
Jul 11 00:41:34.980867 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Jul 11 00:41:35.415637 systemd[1]: run-containerd-runc-k8s.io-187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900-runc.jjam23.mount: Deactivated successfully.
Jul 11 00:41:35.430957 kubelet[1419]: E0711 00:41:35.430909 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:35.676076 kubelet[1419]: E0711 00:41:35.675983 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:36.431692 kubelet[1419]: E0711 00:41:36.431652 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:37.006414 kubelet[1419]: E0711 00:41:37.006383 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:37.432425 kubelet[1419]: E0711 00:41:37.432387 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:37.664041 systemd[1]: run-containerd-runc-k8s.io-187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900-runc.mQNg40.mount: Deactivated successfully.
Jul 11 00:41:37.727288 systemd-networkd[1043]: lxc_health: Link UP
Jul 11 00:41:37.735300 systemd-networkd[1043]: lxc_health: Gained carrier
Jul 11 00:41:37.735870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 11 00:41:38.433413 kubelet[1419]: E0711 00:41:38.433359 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:39.007279 kubelet[1419]: E0711 00:41:39.007227 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:39.035769 kubelet[1419]: I0711 00:41:39.035695 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mfjts" podStartSLOduration=9.035677709 podStartE2EDuration="9.035677709s" podCreationTimestamp="2025-07-11 00:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:41:35.694593799 +0000 UTC m=+56.418177292" watchObservedRunningTime="2025-07-11 00:41:39.035677709 +0000 UTC m=+59.759261202"
Jul 11 00:41:39.434523 kubelet[1419]: E0711 00:41:39.434464 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:39.696003 kubelet[1419]: E0711 00:41:39.695830 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:39.747946 systemd-networkd[1043]: lxc_health: Gained IPv6LL
Jul 11 00:41:39.827131 systemd[1]: run-containerd-runc-k8s.io-187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900-runc.yvQhkG.mount: Deactivated successfully.
Jul 11 00:41:40.397594 kubelet[1419]: E0711 00:41:40.397543 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:40.435467 kubelet[1419]: E0711 00:41:40.435421 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:40.439715 env[1215]: time="2025-07-11T00:41:40.439679641Z" level=info msg="StopPodSandbox for \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\""
Jul 11 00:41:40.440118 env[1215]: time="2025-07-11T00:41:40.440068721Z" level=info msg="TearDown network for sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" successfully"
Jul 11 00:41:40.440195 env[1215]: time="2025-07-11T00:41:40.440178761Z" level=info msg="StopPodSandbox for \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" returns successfully"
Jul 11 00:41:40.440619 env[1215]: time="2025-07-11T00:41:40.440586480Z" level=info msg="RemovePodSandbox for \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\""
Jul 11 00:41:40.440698 env[1215]: time="2025-07-11T00:41:40.440621960Z" level=info msg="Forcibly stopping sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\""
Jul 11 00:41:40.440734 env[1215]: time="2025-07-11T00:41:40.440697240Z" level=info msg="TearDown network for sandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" successfully"
Jul 11 00:41:40.444511 env[1215]: time="2025-07-11T00:41:40.444470556Z" level=info msg="RemovePodSandbox \"fe0b770426019540f16ef6543620c2d3a5d1f7cf984aa08603dd1203444ca1e2\" returns successfully"
Jul 11 00:41:40.697284 kubelet[1419]: E0711 00:41:40.697167 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:41:41.436171 kubelet[1419]: E0711 00:41:41.436129 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:42.436880 kubelet[1419]: E0711 00:41:42.436831 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:43.437652 kubelet[1419]: E0711 00:41:43.437610 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:44.051929 systemd[1]: run-containerd-runc-k8s.io-187ab37d5aef49ae9ef8f59715d3ea182cf7184e780d4094127fe6055a21a900-runc.NcKHXz.mount: Deactivated successfully.
Jul 11 00:41:44.438198 kubelet[1419]: E0711 00:41:44.438152 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 11 00:41:45.438543 kubelet[1419]: E0711 00:41:45.438494 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"