Jul 10 00:40:21.723481 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 10 00:40:21.723502 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed Jul 9 23:19:15 -00 2025 Jul 10 00:40:21.723510 kernel: efi: EFI v2.70 by EDK II Jul 10 00:40:21.723516 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Jul 10 00:40:21.723522 kernel: random: crng init done Jul 10 00:40:21.723527 kernel: ACPI: Early table checksum verification disabled Jul 10 00:40:21.723534 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Jul 10 00:40:21.723541 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 10 00:40:21.723547 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723553 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723559 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723564 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723570 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723576 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723584 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723590 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723596 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 10 00:40:21.723602 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 10 00:40:21.723608 kernel: NUMA: Failed to initialise from firmware Jul 10 00:40:21.723615 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:40:21.723621 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] Jul 10 00:40:21.723627 kernel: Zone ranges: Jul 10 00:40:21.723632 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:40:21.723640 kernel: DMA32 empty Jul 10 00:40:21.723646 kernel: Normal empty Jul 10 00:40:21.723652 kernel: Movable zone start for each node Jul 10 00:40:21.723657 kernel: Early memory node ranges Jul 10 00:40:21.723663 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Jul 10 00:40:21.723669 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Jul 10 00:40:21.723675 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Jul 10 00:40:21.723681 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Jul 10 00:40:21.723687 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Jul 10 00:40:21.723693 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Jul 10 00:40:21.723699 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Jul 10 00:40:21.723705 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 10 00:40:21.723712 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 10 00:40:21.723718 kernel: psci: probing for conduit method from ACPI. Jul 10 00:40:21.723724 kernel: psci: PSCIv1.1 detected in firmware. 
Jul 10 00:40:21.723730 kernel: psci: Using standard PSCI v0.2 function IDs Jul 10 00:40:21.723736 kernel: psci: Trusted OS migration not required Jul 10 00:40:21.723745 kernel: psci: SMC Calling Convention v1.1 Jul 10 00:40:21.723752 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 10 00:40:21.723760 kernel: ACPI: SRAT not present Jul 10 00:40:21.723766 kernel: percpu: Embedded 30 pages/cpu s82968 r8192 d31720 u122880 Jul 10 00:40:21.723773 kernel: pcpu-alloc: s82968 r8192 d31720 u122880 alloc=30*4096 Jul 10 00:40:21.723779 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 10 00:40:21.723786 kernel: Detected PIPT I-cache on CPU0 Jul 10 00:40:21.723792 kernel: CPU features: detected: GIC system register CPU interface Jul 10 00:40:21.723798 kernel: CPU features: detected: Hardware dirty bit management Jul 10 00:40:21.723805 kernel: CPU features: detected: Spectre-v4 Jul 10 00:40:21.723811 kernel: CPU features: detected: Spectre-BHB Jul 10 00:40:21.723819 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 10 00:40:21.723825 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 10 00:40:21.723831 kernel: CPU features: detected: ARM erratum 1418040 Jul 10 00:40:21.723838 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 10 00:40:21.723844 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Jul 10 00:40:21.723850 kernel: Policy zone: DMA Jul 10 00:40:21.723858 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:40:21.723865 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 10 00:40:21.723872 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 10 00:40:21.723878 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 00:40:21.723885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 00:40:21.723893 kernel: Memory: 2457344K/2572288K available (9792K kernel code, 2094K rwdata, 7588K rodata, 36416K init, 777K bss, 114944K reserved, 0K cma-reserved) Jul 10 00:40:21.723899 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 10 00:40:21.723906 kernel: trace event string verifier disabled Jul 10 00:40:21.723912 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 10 00:40:21.723919 kernel: rcu: RCU event tracing is enabled. Jul 10 00:40:21.723926 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 10 00:40:21.723932 kernel: Trampoline variant of Tasks RCU enabled. Jul 10 00:40:21.723939 kernel: Tracing variant of Tasks RCU enabled. Jul 10 00:40:21.723945 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jul 10 00:40:21.723952 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 10 00:40:21.723958 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 10 00:40:21.723966 kernel: GICv3: 256 SPIs implemented Jul 10 00:40:21.723973 kernel: GICv3: 0 Extended SPIs implemented Jul 10 00:40:21.723987 kernel: GICv3: Distributor has no Range Selector support Jul 10 00:40:21.723994 kernel: Root IRQ handler: gic_handle_irq Jul 10 00:40:21.724001 kernel: GICv3: 16 PPIs implemented Jul 10 00:40:21.724007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 10 00:40:21.724013 kernel: ACPI: SRAT not present Jul 10 00:40:21.724020 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 10 00:40:21.724026 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Jul 10 00:40:21.724033 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Jul 10 00:40:21.724039 kernel: GICv3: using LPI property table @0x00000000400d0000 Jul 10 00:40:21.724046 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Jul 10 00:40:21.724054 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:40:21.724060 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 10 00:40:21.724067 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 10 00:40:21.724074 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 10 00:40:21.724080 kernel: arm-pv: using stolen time PV Jul 10 00:40:21.724087 kernel: Console: colour dummy device 80x25 Jul 10 00:40:21.724093 kernel: ACPI: Core revision 20210730 Jul 10 00:40:21.724100 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 10 00:40:21.724107 kernel: pid_max: default: 32768 minimum: 301 Jul 10 00:40:21.724114 kernel: LSM: Security Framework initializing Jul 10 00:40:21.724121 kernel: SELinux: Initializing. Jul 10 00:40:21.724128 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:40:21.724135 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 10 00:40:21.724141 kernel: rcu: Hierarchical SRCU implementation. Jul 10 00:40:21.724148 kernel: Platform MSI: ITS@0x8080000 domain created Jul 10 00:40:21.724154 kernel: PCI/MSI: ITS@0x8080000 domain created Jul 10 00:40:21.724161 kernel: Remapping and enabling EFI services. Jul 10 00:40:21.724167 kernel: smp: Bringing up secondary CPUs ... 
Jul 10 00:40:21.724174 kernel: Detected PIPT I-cache on CPU1 Jul 10 00:40:21.724182 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 10 00:40:21.724188 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Jul 10 00:40:21.724195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:40:21.724221 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 10 00:40:21.724228 kernel: Detected PIPT I-cache on CPU2 Jul 10 00:40:21.724235 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 10 00:40:21.724242 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Jul 10 00:40:21.724249 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:40:21.724255 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 10 00:40:21.724262 kernel: Detected PIPT I-cache on CPU3 Jul 10 00:40:21.724271 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 10 00:40:21.724277 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Jul 10 00:40:21.724284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 10 00:40:21.724290 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 10 00:40:21.724302 kernel: smp: Brought up 1 node, 4 CPUs Jul 10 00:40:21.724310 kernel: SMP: Total of 4 processors activated. Jul 10 00:40:21.724317 kernel: CPU features: detected: 32-bit EL0 Support Jul 10 00:40:21.724324 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 10 00:40:21.724331 kernel: CPU features: detected: Common not Private translations Jul 10 00:40:21.724338 kernel: CPU features: detected: CRC32 instructions Jul 10 00:40:21.724345 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 10 00:40:21.724352 kernel: CPU features: detected: LSE atomic instructions Jul 10 00:40:21.724360 kernel: CPU features: detected: Privileged Access Never Jul 10 00:40:21.724367 kernel: CPU features: detected: RAS Extension Support Jul 10 00:40:21.724374 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 10 00:40:21.724381 kernel: CPU: All CPU(s) started at EL1 Jul 10 00:40:21.724388 kernel: alternatives: patching kernel code Jul 10 00:40:21.724396 kernel: devtmpfs: initialized Jul 10 00:40:21.724403 kernel: KASLR enabled Jul 10 00:40:21.724410 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 00:40:21.724417 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 10 00:40:21.724424 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 00:40:21.724431 kernel: SMBIOS 3.0.0 present. 
Jul 10 00:40:21.724438 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jul 10 00:40:21.724445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 00:40:21.724452 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 10 00:40:21.724461 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 10 00:40:21.724468 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 10 00:40:21.724475 kernel: audit: initializing netlink subsys (disabled) Jul 10 00:40:21.724482 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 Jul 10 00:40:21.724489 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 00:40:21.724496 kernel: cpuidle: using governor menu Jul 10 00:40:21.724503 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 10 00:40:21.724510 kernel: ASID allocator initialised with 32768 entries Jul 10 00:40:21.724517 kernel: ACPI: bus type PCI registered Jul 10 00:40:21.724525 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 00:40:21.724532 kernel: Serial: AMBA PL011 UART driver Jul 10 00:40:21.724539 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 00:40:21.724546 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Jul 10 00:40:21.724553 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 00:40:21.724560 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Jul 10 00:40:21.724567 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 00:40:21.724574 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 10 00:40:21.724582 kernel: ACPI: Added _OSI(Module Device) Jul 10 00:40:21.724590 kernel: ACPI: Added _OSI(Processor Device) Jul 10 00:40:21.724597 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 00:40:21.724604 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 00:40:21.724611 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 00:40:21.724618 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 00:40:21.724625 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 00:40:21.724632 kernel: ACPI: Interpreter enabled Jul 10 00:40:21.724639 kernel: ACPI: Using GIC for interrupt routing Jul 10 00:40:21.724646 kernel: ACPI: MCFG table detected, 1 entries Jul 10 00:40:21.724654 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 10 00:40:21.724661 kernel: printk: console [ttyAMA0] enabled Jul 10 00:40:21.724668 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 10 00:40:21.724803 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 00:40:21.724898 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 10 00:40:21.724966 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 10 00:40:21.725037 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 10 00:40:21.725141 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 10 00:40:21.725151 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 10 00:40:21.725158 kernel: PCI host bridge to bus 0000:00 Jul 10 00:40:21.725235 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 10 00:40:21.725290 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 10 
00:40:21.725343 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 10 00:40:21.725394 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 10 00:40:21.725467 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Jul 10 00:40:21.725547 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Jul 10 00:40:21.725609 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Jul 10 00:40:21.725670 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Jul 10 00:40:21.725730 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:40:21.725790 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Jul 10 00:40:21.725850 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Jul 10 00:40:21.725912 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Jul 10 00:40:21.725966 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 10 00:40:21.726027 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 10 00:40:21.726081 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 10 00:40:21.726090 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 10 00:40:21.726097 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 10 00:40:21.726104 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 10 00:40:21.726111 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 10 00:40:21.726119 kernel: iommu: Default domain type: Translated Jul 10 00:40:21.726126 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 10 00:40:21.726133 kernel: vgaarb: loaded Jul 10 00:40:21.726140 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 00:40:21.726147 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 00:40:21.726154 kernel: PTP clock support registered Jul 10 00:40:21.726161 kernel: Registered efivars operations Jul 10 00:40:21.726168 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 10 00:40:21.726174 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 00:40:21.726183 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 00:40:21.726190 kernel: pnp: PnP ACPI init Jul 10 00:40:21.726271 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 10 00:40:21.726283 kernel: pnp: PnP ACPI: found 1 devices Jul 10 00:40:21.726290 kernel: NET: Registered PF_INET protocol family Jul 10 00:40:21.726297 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 10 00:40:21.726304 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 10 00:40:21.726311 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 00:40:21.726320 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 00:40:21.726327 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Jul 10 00:40:21.726334 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 10 00:40:21.726341 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:40:21.726348 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 10 00:40:21.726355 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 00:40:21.726362 kernel: PCI: CLS 0 bytes, default 64 Jul 10 00:40:21.726369 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Jul 10 00:40:21.726376 kernel: kvm [1]: HYP mode not available Jul 10 00:40:21.726385 kernel: Initialise system trusted keyrings Jul 10 00:40:21.726392 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 10 00:40:21.726398 kernel: Key type asymmetric registered Jul 10 00:40:21.726405 kernel: Asymmetric key parser 'x509' registered Jul 10 00:40:21.726412 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 00:40:21.726419 kernel: io scheduler mq-deadline registered Jul 10 00:40:21.726426 kernel: io scheduler kyber registered Jul 10 00:40:21.726433 kernel: io scheduler bfq registered Jul 10 00:40:21.726440 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 10 00:40:21.726448 kernel: ACPI: button: Power Button [PWRB] Jul 10 00:40:21.726456 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 10 00:40:21.726516 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 10 00:40:21.726526 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 00:40:21.726533 kernel: thunder_xcv, ver 1.0 Jul 10 00:40:21.726539 kernel: thunder_bgx, ver 1.0 Jul 10 00:40:21.726546 kernel: nicpf, ver 1.0 Jul 10 00:40:21.726553 kernel: nicvf, ver 1.0 Jul 10 00:40:21.726625 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 10 00:40:21.726684 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:40:21 UTC (1752108021) Jul 10 00:40:21.726694 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 10 00:40:21.726701 kernel: NET: Registered PF_INET6 protocol family Jul 10 00:40:21.726708 kernel: Segment Routing with IPv6 Jul 10 00:40:21.726715 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 00:40:21.726722 kernel: NET: Registered PF_PACKET protocol family Jul 10 00:40:21.726729 kernel: Key type 
dns_resolver registered Jul 10 00:40:21.726736 kernel: registered taskstats version 1 Jul 10 00:40:21.726744 kernel: Loading compiled-in X.509 certificates Jul 10 00:40:21.726751 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.186-flatcar: 9e274a0dc4fc3d34232d90d226b034c4fe0e3e22' Jul 10 00:40:21.726759 kernel: Key type .fscrypt registered Jul 10 00:40:21.726765 kernel: Key type fscrypt-provisioning registered Jul 10 00:40:21.726772 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 00:40:21.726779 kernel: ima: Allocated hash algorithm: sha1 Jul 10 00:40:21.726786 kernel: ima: No architecture policies found Jul 10 00:40:21.726793 kernel: clk: Disabling unused clocks Jul 10 00:40:21.726800 kernel: Freeing unused kernel memory: 36416K Jul 10 00:40:21.726808 kernel: Run /init as init process Jul 10 00:40:21.726815 kernel: with arguments: Jul 10 00:40:21.726822 kernel: /init Jul 10 00:40:21.726828 kernel: with environment: Jul 10 00:40:21.726835 kernel: HOME=/ Jul 10 00:40:21.726842 kernel: TERM=linux Jul 10 00:40:21.726849 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 00:40:21.726858 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:40:21.726868 systemd[1]: Detected virtualization kvm. Jul 10 00:40:21.726876 systemd[1]: Detected architecture arm64. Jul 10 00:40:21.726883 systemd[1]: Running in initrd. Jul 10 00:40:21.726890 systemd[1]: No hostname configured, using default hostname. Jul 10 00:40:21.726898 systemd[1]: Hostname set to . Jul 10 00:40:21.726906 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:40:21.726913 systemd[1]: Queued start job for default target initrd.target. Jul 10 00:40:21.726921 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:40:21.726929 systemd[1]: Reached target cryptsetup.target. Jul 10 00:40:21.726937 systemd[1]: Reached target paths.target. Jul 10 00:40:21.726944 systemd[1]: Reached target slices.target. Jul 10 00:40:21.726951 systemd[1]: Reached target swap.target. Jul 10 00:40:21.726958 systemd[1]: Reached target timers.target. Jul 10 00:40:21.726966 systemd[1]: Listening on iscsid.socket. Jul 10 00:40:21.726973 systemd[1]: Listening on iscsiuio.socket. Jul 10 00:40:21.726989 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 00:40:21.726997 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 00:40:21.727004 systemd[1]: Listening on systemd-journald.socket. Jul 10 00:40:21.727011 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:40:21.727019 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:40:21.727026 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:40:21.727033 systemd[1]: Reached target sockets.target. Jul 10 00:40:21.727040 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:40:21.727048 systemd[1]: Finished network-cleanup.service. Jul 10 00:40:21.727056 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 00:40:21.727064 systemd[1]: Starting systemd-journald.service... Jul 10 00:40:21.727072 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:40:21.727079 systemd[1]: Starting systemd-resolved.service... Jul 10 00:40:21.727087 systemd[1]: Starting systemd-vconsole-setup.service... 
Jul 10 00:40:21.727094 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:40:21.727101 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 00:40:21.727109 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 00:40:21.727116 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 00:40:21.727124 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 00:40:21.727132 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 00:40:21.727140 kernel: audit: type=1130 audit(1752108021.726:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.727152 systemd-journald[290]: Journal started Jul 10 00:40:21.727197 systemd-journald[290]: Runtime Journal (/run/log/journal/4c13d0629e0e4fb18c8e20fa1d047c66) is 6.0M, max 48.7M, 42.6M free. Jul 10 00:40:21.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.716548 systemd-modules-load[291]: Inserted module 'overlay' Jul 10 00:40:21.729391 systemd[1]: Started systemd-journald.service. Jul 10 00:40:21.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.732238 kernel: audit: type=1130 audit(1752108021.729:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.735359 systemd-resolved[292]: Positive Trust Anchors: Jul 10 00:40:21.735374 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:40:21.735402 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:40:21.739667 systemd-resolved[292]: Defaulting to hostname 'linux'. Jul 10 00:40:21.747966 kernel: audit: type=1130 audit(1752108021.743:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.748005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 00:40:21.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.740797 systemd[1]: Started systemd-resolved.service. Jul 10 00:40:21.743573 systemd[1]: Reached target nss-lookup.target. Jul 10 00:40:21.751318 systemd[1]: Finished dracut-cmdline-ask.service. 
Jul 10 00:40:21.756025 kernel: Bridge firewalling registered Jul 10 00:40:21.756048 kernel: audit: type=1130 audit(1752108021.752:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.752476 systemd-modules-load[291]: Inserted module 'br_netfilter' Jul 10 00:40:21.753422 systemd[1]: Starting dracut-cmdline.service... Jul 10 00:40:21.762868 dracut-cmdline[309]: dracut-dracut-053 Jul 10 00:40:21.765228 kernel: SCSI subsystem initialized Jul 10 00:40:21.765304 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=97626bbec4e8c603c151f40dbbae5fabba3cda417023e06335ea30183b36a27f Jul 10 00:40:21.772857 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 00:40:21.772893 kernel: device-mapper: uevent: version 1.0.3 Jul 10 00:40:21.772910 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 00:40:21.775198 systemd-modules-load[291]: Inserted module 'dm_multipath' Jul 10 00:40:21.780286 kernel: audit: type=1130 audit(1752108021.776:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.776021 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:40:21.777578 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:40:21.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.785966 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:40:21.789223 kernel: audit: type=1130 audit(1752108021.786:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.827220 kernel: Loading iSCSI transport class v2.0-870. Jul 10 00:40:21.839221 kernel: iscsi: registered transport (tcp) Jul 10 00:40:21.855218 kernel: iscsi: registered transport (qla4xxx) Jul 10 00:40:21.855230 kernel: QLogic iSCSI HBA Driver Jul 10 00:40:21.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:21.893108 systemd[1]: Finished dracut-cmdline.service. Jul 10 00:40:21.896654 kernel: audit: type=1130 audit(1752108021.893:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:21.894659 systemd[1]: Starting dracut-pre-udev.service... Jul 10 00:40:21.943228 kernel: raid6: neonx8 gen() 13738 MB/s Jul 10 00:40:21.960213 kernel: raid6: neonx8 xor() 10764 MB/s Jul 10 00:40:21.977216 kernel: raid6: neonx4 gen() 13478 MB/s Jul 10 00:40:21.994226 kernel: raid6: neonx4 xor() 11251 MB/s Jul 10 00:40:22.011214 kernel: raid6: neonx2 gen() 12946 MB/s Jul 10 00:40:22.028214 kernel: raid6: neonx2 xor() 10245 MB/s Jul 10 00:40:22.045214 kernel: raid6: neonx1 gen() 10552 MB/s Jul 10 00:40:22.062245 kernel: raid6: neonx1 xor() 8687 MB/s Jul 10 00:40:22.079221 kernel: raid6: int64x8 gen() 6259 MB/s Jul 10 00:40:22.096213 kernel: raid6: int64x8 xor() 3540 MB/s Jul 10 00:40:22.113213 kernel: raid6: int64x4 gen() 7211 MB/s Jul 10 00:40:22.130213 kernel: raid6: int64x4 xor() 3856 MB/s Jul 10 00:40:22.147214 kernel: raid6: int64x2 gen() 6150 MB/s Jul 10 00:40:22.164214 kernel: raid6: int64x2 xor() 3320 MB/s Jul 10 00:40:22.181215 kernel: raid6: int64x1 gen() 5040 MB/s Jul 10 00:40:22.198421 kernel: raid6: int64x1 xor() 2643 MB/s Jul 10 00:40:22.198451 kernel: raid6: using algorithm neonx8 gen() 13738 MB/s Jul 10 00:40:22.198460 kernel: raid6: .... xor() 10764 MB/s, rmw enabled Jul 10 00:40:22.198469 kernel: raid6: using neon recovery algorithm Jul 10 00:40:22.211278 kernel: xor: measuring software checksum speed Jul 10 00:40:22.211306 kernel: 8regs : 17220 MB/sec Jul 10 00:40:22.211317 kernel: 32regs : 20723 MB/sec Jul 10 00:40:22.212218 kernel: arm64_neon : 27729 MB/sec Jul 10 00:40:22.212229 kernel: xor: using function: arm64_neon (27729 MB/sec) Jul 10 00:40:22.266225 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jul 10 00:40:22.276067 systemd[1]: Finished dracut-pre-udev.service. Jul 10 00:40:22.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:22.277623 systemd[1]: Starting systemd-udevd.service... Jul 10 00:40:22.280460 kernel: audit: type=1130 audit(1752108022.276:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:22.280481 kernel: audit: type=1334 audit(1752108022.276:10): prog-id=7 op=LOAD Jul 10 00:40:22.276000 audit: BPF prog-id=7 op=LOAD Jul 10 00:40:22.277000 audit: BPF prog-id=8 op=LOAD Jul 10 00:40:22.291530 systemd-udevd[492]: Using default interface naming scheme 'v252'. Jul 10 00:40:22.294759 systemd[1]: Started systemd-udevd.service. Jul 10 00:40:22.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:22.296588 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 00:40:22.309337 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation Jul 10 00:40:22.336126 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 00:40:22.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:22.337545 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:40:22.374240 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 10 00:40:22.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:22.402049 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 10 00:40:22.405875 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 10 00:40:22.405889 kernel: GPT:9289727 != 19775487 Jul 10 00:40:22.405898 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 10 00:40:22.405912 kernel: GPT:9289727 != 19775487 Jul 10 00:40:22.405921 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 10 00:40:22.405933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:40:22.419230 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (541) Jul 10 00:40:22.424551 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Jul 10 00:40:22.425661 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 00:40:22.429885 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 00:40:22.433336 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 00:40:22.436695 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:40:22.439116 systemd[1]: Starting disk-uuid.service... Jul 10 00:40:22.445137 disk-uuid[563]: Primary Header is updated. Jul 10 00:40:22.445137 disk-uuid[563]: Secondary Entries is updated. Jul 10 00:40:22.445137 disk-uuid[563]: Secondary Header is updated. Jul 10 00:40:22.447726 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:40:23.457994 disk-uuid[564]: The operation has completed successfully. Jul 10 00:40:23.459066 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 10 00:40:23.481468 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 00:40:23.482371 systemd[1]: Finished disk-uuid.service. Jul 10 00:40:23.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.484229 systemd[1]: Starting verity-setup.service... Jul 10 00:40:23.500219 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 10 00:40:23.520043 systemd[1]: Found device dev-mapper-usr.device. Jul 10 00:40:23.522248 systemd[1]: Mounting sysusr-usr.mount... Jul 10 00:40:23.523866 systemd[1]: Finished verity-setup.service. Jul 10 00:40:23.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.571236 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 00:40:23.571643 systemd[1]: Mounted sysusr-usr.mount. Jul 10 00:40:23.572312 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 10 00:40:23.573032 systemd[1]: Starting ignition-setup.service... Jul 10 00:40:23.575247 systemd[1]: Starting parse-ip-for-networkd.service... 
Jul 10 00:40:23.581370 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:40:23.581410 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:40:23.581420 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:40:23.589686 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 00:40:23.595871 systemd[1]: Finished ignition-setup.service. Jul 10 00:40:23.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.597371 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 00:40:23.652034 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 00:40:23.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.653000 audit: BPF prog-id=9 op=LOAD Jul 10 00:40:23.653995 systemd[1]: Starting systemd-networkd.service... Jul 10 00:40:23.671723 ignition[652]: Ignition 2.14.0 Jul 10 00:40:23.671734 ignition[652]: Stage: fetch-offline Jul 10 00:40:23.671771 ignition[652]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:23.671780 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:23.671914 ignition[652]: parsed url from cmdline: "" Jul 10 00:40:23.671918 ignition[652]: no config URL provided Jul 10 00:40:23.671922 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 00:40:23.671929 ignition[652]: no config at "/usr/lib/ignition/user.ign" Jul 10 00:40:23.671949 ignition[652]: op(1): [started] loading QEMU firmware config module Jul 10 00:40:23.671954 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 10 00:40:23.675048 ignition[652]: op(1): [finished] loading QEMU firmware config module Jul 10 00:40:23.682538 ignition[652]: parsing config with SHA512: e699eed7f59c9318410d97623cb96699af5d3586a4a8c2902cf4cfa9b1f42545da1af1fdee039bd2d8ffdddbb95d30fc29fce72bd6e4ab0d7c49bfff56d2214b Jul 10 00:40:23.683041 systemd-networkd[741]: lo: Link UP Jul 10 00:40:23.683053 systemd-networkd[741]: lo: Gained carrier Jul 10 00:40:23.683691 systemd-networkd[741]: Enumeration completed Jul 10 00:40:23.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.683996 systemd[1]: Started systemd-networkd.service. Jul 10 00:40:23.684957 systemd[1]: Reached target network.target. Jul 10 00:40:23.686992 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:40:23.688298 systemd[1]: Starting iscsiuio.service... Jul 10 00:40:23.689895 systemd-networkd[741]: eth0: Link UP Jul 10 00:40:23.689915 systemd-networkd[741]: eth0: Gained carrier Jul 10 00:40:23.690560 unknown[652]: fetched base config from "system" Jul 10 00:40:23.690982 ignition[652]: fetch-offline: fetch-offline passed Jul 10 00:40:23.690572 unknown[652]: fetched user config from "qemu" Jul 10 00:40:23.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:23.691082 ignition[652]: Ignition finished successfully Jul 10 00:40:23.692477 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 00:40:23.693683 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 00:40:23.694528 systemd[1]: Starting ignition-kargs.service... Jul 10 00:40:23.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.696720 systemd[1]: Started iscsiuio.service. Jul 10 00:40:23.698255 systemd[1]: Starting iscsid.service... Jul 10 00:40:23.701841 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:40:23.701841 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 00:40:23.701841 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 00:40:23.701841 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 00:40:23.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.703246 ignition[747]: Ignition 2.14.0 Jul 10 00:40:23.710541 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 00:40:23.710541 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 00:40:23.705379 systemd[1]: Started iscsid.service. Jul 10 00:40:23.703253 ignition[747]: Stage: kargs Jul 10 00:40:23.708354 systemd[1]: Finished ignition-kargs.service. Jul 10 00:40:23.703353 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:23.710089 systemd[1]: Starting dracut-initqueue.service... Jul 10 00:40:23.703363 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:23.711337 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:40:23.704218 ignition[747]: kargs: kargs passed Jul 10 00:40:23.712806 systemd[1]: Starting ignition-disks.service... Jul 10 00:40:23.704266 ignition[747]: Ignition finished successfully Jul 10 00:40:23.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.720581 systemd[1]: Finished dracut-initqueue.service. Jul 10 00:40:23.724995 ignition[756]: Ignition 2.14.0 Jul 10 00:40:23.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.721550 systemd[1]: Reached target remote-fs-pre.target. 
Jul 10 00:40:23.725003 ignition[756]: Stage: disks Jul 10 00:40:23.722820 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:40:23.725122 ignition[756]: no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:23.724420 systemd[1]: Reached target remote-fs.target. Jul 10 00:40:23.725131 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:23.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.726421 systemd[1]: Starting dracut-pre-mount.service... Jul 10 00:40:23.725846 ignition[756]: disks: disks passed Jul 10 00:40:23.727506 systemd[1]: Finished ignition-disks.service. Jul 10 00:40:23.725888 ignition[756]: Ignition finished successfully Jul 10 00:40:23.728620 systemd[1]: Reached target initrd-root-device.target. Jul 10 00:40:23.729631 systemd[1]: Reached target local-fs-pre.target. Jul 10 00:40:23.730892 systemd[1]: Reached target local-fs.target. Jul 10 00:40:23.731993 systemd[1]: Reached target sysinit.target. Jul 10 00:40:23.733215 systemd[1]: Reached target basic.target. Jul 10 00:40:23.734791 systemd[1]: Finished dracut-pre-mount.service. Jul 10 00:40:23.736697 systemd[1]: Starting systemd-fsck-root.service... Jul 10 00:40:23.747500 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks Jul 10 00:40:23.750913 systemd[1]: Finished systemd-fsck-root.service. Jul 10 00:40:23.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.752925 systemd[1]: Mounting sysroot.mount... Jul 10 00:40:23.758226 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 00:40:23.758604 systemd[1]: Mounted sysroot.mount. Jul 10 00:40:23.759391 systemd[1]: Reached target initrd-root-fs.target. Jul 10 00:40:23.761585 systemd[1]: Mounting sysroot-usr.mount... Jul 10 00:40:23.762492 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 00:40:23.762549 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 00:40:23.762571 systemd[1]: Reached target ignition-diskful.target. Jul 10 00:40:23.764572 systemd[1]: Mounted sysroot-usr.mount. Jul 10 00:40:23.767068 systemd[1]: Starting initrd-setup-root.service... Jul 10 00:40:23.771581 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 00:40:23.775075 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Jul 10 00:40:23.778379 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 00:40:23.781769 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 00:40:23.810497 systemd[1]: Finished initrd-setup-root.service. Jul 10 00:40:23.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.811946 systemd[1]: Starting ignition-mount.service... Jul 10 00:40:23.813147 systemd[1]: Starting sysroot-boot.service... Jul 10 00:40:23.817450 bash[827]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 10 00:40:23.825748 ignition[829]: INFO : Ignition 2.14.0 Jul 10 00:40:23.825748 ignition[829]: INFO : Stage: mount Jul 10 00:40:23.827678 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:23.827678 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:23.827678 ignition[829]: INFO : mount: mount passed Jul 10 00:40:23.827678 ignition[829]: INFO : Ignition finished successfully Jul 10 00:40:23.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:23.828624 systemd[1]: Finished ignition-mount.service. Jul 10 00:40:23.835628 systemd[1]: Finished sysroot-boot.service. Jul 10 00:40:23.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:24.531841 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 00:40:24.537223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (838) Jul 10 00:40:24.538622 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 10 00:40:24.538636 kernel: BTRFS info (device vda6): using free space tree Jul 10 00:40:24.538645 kernel: BTRFS info (device vda6): has skinny extents Jul 10 00:40:24.541791 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 00:40:24.543148 systemd[1]: Starting ignition-files.service... Jul 10 00:40:24.556512 ignition[858]: INFO : Ignition 2.14.0 Jul 10 00:40:24.556512 ignition[858]: INFO : Stage: files Jul 10 00:40:24.558137 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:24.558137 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:24.558137 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Jul 10 00:40:24.563643 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 00:40:24.563643 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 00:40:24.566593 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 00:40:24.566593 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 00:40:24.566593 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 00:40:24.566310 unknown[858]: wrote ssh authorized keys file for user: core Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:40:24.571635 ignition[858]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:40:24.571635 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 10 00:40:25.022227 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 10 00:40:25.389364 systemd-networkd[741]: eth0: Gained IPv6LL Jul 10 00:40:25.934249 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 10 00:40:25.934249 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Jul 10 00:40:25.937341 ignition[858]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:40:25.937341 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 00:40:25.937341 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Jul 10 00:40:25.937341 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 00:40:25.937341 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:40:25.987830 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 00:40:25.989045 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 00:40:25.989045 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:40:25.989045 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 00:40:25.989045 ignition[858]: INFO : files: files passed Jul 10 00:40:25.989045 ignition[858]: INFO : Ignition finished successfully Jul 10 00:40:25.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:25.989552 systemd[1]: Finished ignition-files.service. Jul 10 00:40:25.992028 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 00:40:25.997597 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Jul 10 00:40:25.992895 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 00:40:25.993628 systemd[1]: Starting ignition-quench.service... Jul 10 00:40:26.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:26.001370 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 00:40:26.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:25.999899 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 00:40:26.001038 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 00:40:26.001112 systemd[1]: Finished ignition-quench.service. Jul 10 00:40:26.001995 systemd[1]: Reached target ignition-complete.target. Jul 10 00:40:26.004063 systemd[1]: Starting initrd-parse-etc.service... Jul 10 00:40:26.017500 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 00:40:26.017593 systemd[1]: Finished initrd-parse-etc.service. Jul 10 00:40:26.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.018961 systemd[1]: Reached target initrd-fs.target. Jul 10 00:40:26.020115 systemd[1]: Reached target initrd.target. Jul 10 00:40:26.021089 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 00:40:26.021872 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 00:40:26.032636 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 00:40:26.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.034553 systemd[1]: Starting initrd-cleanup.service... Jul 10 00:40:26.044748 systemd[1]: Stopped target nss-lookup.target. Jul 10 00:40:26.045495 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 00:40:26.046582 systemd[1]: Stopped target timers.target. Jul 10 00:40:26.047591 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 10 00:40:26.051400 kernel: kauditd_printk_skb: 29 callbacks suppressed Jul 10 00:40:26.051425 kernel: audit: type=1131 audit(1752108026.048:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.047698 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 00:40:26.048780 systemd[1]: Stopped target initrd.target. Jul 10 00:40:26.052045 systemd[1]: Stopped target basic.target. Jul 10 00:40:26.053041 systemd[1]: Stopped target ignition-complete.target. Jul 10 00:40:26.054065 systemd[1]: Stopped target ignition-diskful.target. Jul 10 00:40:26.055076 systemd[1]: Stopped target initrd-root-device.target. 
Jul 10 00:40:26.056206 systemd[1]: Stopped target remote-fs.target. Jul 10 00:40:26.057236 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 00:40:26.058319 systemd[1]: Stopped target sysinit.target. Jul 10 00:40:26.059265 systemd[1]: Stopped target local-fs.target. Jul 10 00:40:26.060293 systemd[1]: Stopped target local-fs-pre.target. Jul 10 00:40:26.061296 systemd[1]: Stopped target swap.target. Jul 10 00:40:26.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.062195 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:40:26.066569 kernel: audit: type=1131 audit(1752108026.063:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.062319 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 00:40:26.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.063375 systemd[1]: Stopped target cryptsetup.target. Jul 10 00:40:26.072235 kernel: audit: type=1131 audit(1752108026.066:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.072251 kernel: audit: type=1131 audit(1752108026.069:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.065994 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:40:26.066098 systemd[1]: Stopped dracut-initqueue.service. Jul 10 00:40:26.067238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:40:26.067331 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 00:40:26.070085 systemd[1]: Stopped target paths.target. Jul 10 00:40:26.072764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:40:26.076244 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 00:40:26.077281 systemd[1]: Stopped target slices.target. Jul 10 00:40:26.078354 systemd[1]: Stopped target sockets.target. Jul 10 00:40:26.079326 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:40:26.079407 systemd[1]: Closed iscsid.socket. Jul 10 00:40:26.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.080293 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:40:26.086738 kernel: audit: type=1131 audit(1752108026.081:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:26.086758 kernel: audit: type=1131 audit(1752108026.084:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.080393 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 10 00:40:26.081539 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:40:26.081624 systemd[1]: Stopped ignition-files.service. Jul 10 00:40:26.085240 systemd[1]: Stopping ignition-mount.service... Jul 10 00:40:26.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.088063 systemd[1]: Stopping iscsiuio.service... Jul 10 00:40:26.094374 kernel: audit: type=1131 audit(1752108026.090:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.094464 ignition[899]: INFO : Ignition 2.14.0 Jul 10 00:40:26.094464 ignition[899]: INFO : Stage: umount Jul 10 00:40:26.094464 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:40:26.094464 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:40:26.102858 kernel: audit: type=1131 audit(1752108026.094:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.102880 kernel: audit: type=1131 audit(1752108026.098:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.089336 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:40:26.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.105936 ignition[899]: INFO : umount: umount passed Jul 10 00:40:26.105936 ignition[899]: INFO : Ignition finished successfully Jul 10 00:40:26.107747 kernel: audit: type=1131 audit(1752108026.103:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.089502 systemd[1]: Stopped kmod-static-nodes.service. 
Jul 10 00:40:26.091334 systemd[1]: Stopping sysroot-boot.service... Jul 10 00:40:26.093900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:40:26.094095 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 00:40:26.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.095113 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:40:26.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.095260 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 00:40:26.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.101704 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:40:26.102440 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 10 00:40:26.102535 systemd[1]: Stopped iscsiuio.service. Jul 10 00:40:26.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.103905 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:40:26.103994 systemd[1]: Stopped ignition-mount.service. Jul 10 00:40:26.108041 systemd[1]: Stopped target network.target. Jul 10 00:40:26.108935 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:40:26.108975 systemd[1]: Closed iscsiuio.socket. Jul 10 00:40:26.109827 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:40:26.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.109862 systemd[1]: Stopped ignition-disks.service. Jul 10 00:40:26.110988 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:40:26.111019 systemd[1]: Stopped ignition-kargs.service. Jul 10 00:40:26.112041 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:40:26.112074 systemd[1]: Stopped ignition-setup.service. Jul 10 00:40:26.113680 systemd[1]: Stopping systemd-networkd.service... Jul 10 00:40:26.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.114744 systemd[1]: Stopping systemd-resolved.service... Jul 10 00:40:26.127000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.116033 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:40:26.116113 systemd[1]: Finished initrd-cleanup.service. 
Jul 10 00:40:26.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.119272 systemd-networkd[741]: eth0: DHCPv6 lease lost Jul 10 00:40:26.133000 audit: BPF prog-id=9 op=UNLOAD Jul 10 00:40:26.120496 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:40:26.120589 systemd[1]: Stopped systemd-networkd.service. Jul 10 00:40:26.121872 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:40:26.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.121901 systemd[1]: Closed systemd-networkd.socket. Jul 10 00:40:26.124697 systemd[1]: Stopping network-cleanup.service... Jul 10 00:40:26.125943 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:40:26.126003 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 00:40:26.139000 audit: BPF prog-id=6 op=UNLOAD Jul 10 00:40:26.127216 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:40:26.127259 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:40:26.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.128846 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:40:26.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.128887 systemd[1]: Stopped systemd-modules-load.service. Jul 10 00:40:26.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.132103 systemd[1]: Stopping systemd-udevd.service... Jul 10 00:40:26.134857 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 00:40:26.135359 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:40:26.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.135456 systemd[1]: Stopped systemd-resolved.service. Jul 10 00:40:26.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.140248 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:40:26.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.140347 systemd[1]: Stopped network-cleanup.service. Jul 10 00:40:26.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.141328 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jul 10 00:40:26.141454 systemd[1]: Stopped systemd-udevd.service. Jul 10 00:40:26.142405 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:40:26.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.142484 systemd[1]: Stopped sysroot-boot.service. Jul 10 00:40:26.143433 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:40:26.143466 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 00:40:26.144359 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:40:26.144390 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 00:40:26.145471 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:40:26.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.145518 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 00:40:26.146780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:40:26.146814 systemd[1]: Stopped dracut-cmdline.service. Jul 10 00:40:26.147947 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:40:26.147993 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 00:40:26.148965 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:40:26.149000 systemd[1]: Stopped initrd-setup-root.service. Jul 10 00:40:26.150903 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 00:40:26.151862 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:40:26.151916 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 00:40:26.156545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:40:26.156645 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 00:40:26.157840 systemd[1]: Reached target initrd-switch-root.target. Jul 10 00:40:26.159707 systemd[1]: Starting initrd-switch-root.service... Jul 10 00:40:26.166662 systemd[1]: Switching root. Jul 10 00:40:26.186670 iscsid[749]: iscsid shutting down. Jul 10 00:40:26.187222 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). Jul 10 00:40:26.187253 systemd-journald[290]: Journal stopped Jul 10 00:40:28.212980 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 00:40:28.213036 kernel: SELinux: Class anon_inode not defined in policy. 
Jul 10 00:40:28.213048 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 00:40:28.213062 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:40:28.213083 kernel: SELinux: policy capability open_perms=1 Jul 10 00:40:28.213092 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:40:28.213102 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:40:28.213111 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:40:28.213122 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:40:28.213132 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:40:28.213141 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:40:28.213153 systemd[1]: Successfully loaded SELinux policy in 34.978ms. Jul 10 00:40:28.213169 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.160ms. Jul 10 00:40:28.213181 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 00:40:28.213191 systemd[1]: Detected virtualization kvm. Jul 10 00:40:28.213213 systemd[1]: Detected architecture arm64. Jul 10 00:40:28.213225 systemd[1]: Detected first boot. Jul 10 00:40:28.213235 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:40:28.213245 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 00:40:28.213255 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:40:28.213266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:40:28.213276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:40:28.213288 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:40:28.213298 systemd[1]: iscsid.service: Deactivated successfully. Jul 10 00:40:28.213314 systemd[1]: Stopped iscsid.service. Jul 10 00:40:28.213328 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:40:28.213338 systemd[1]: Stopped initrd-switch-root.service. Jul 10 00:40:28.213348 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:40:28.213359 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 00:40:28.213369 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 00:40:28.213379 systemd[1]: Created slice system-getty.slice. Jul 10 00:40:28.213391 systemd[1]: Created slice system-modprobe.slice. Jul 10 00:40:28.213402 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 00:40:28.213413 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 00:40:28.213423 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 00:40:28.213433 systemd[1]: Created slice user.slice. Jul 10 00:40:28.213443 systemd[1]: Started systemd-ask-password-console.path. Jul 10 00:40:28.213453 systemd[1]: Started systemd-ask-password-wall.path. 
Jul 10 00:40:28.213463 systemd[1]: Set up automount boot.automount. Jul 10 00:40:28.213473 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 00:40:28.213484 systemd[1]: Stopped target initrd-switch-root.target. Jul 10 00:40:28.213495 systemd[1]: Stopped target initrd-fs.target. Jul 10 00:40:28.213505 systemd[1]: Stopped target initrd-root-fs.target. Jul 10 00:40:28.213517 systemd[1]: Reached target integritysetup.target. Jul 10 00:40:28.213531 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 00:40:28.213542 systemd[1]: Reached target remote-fs.target. Jul 10 00:40:28.213553 systemd[1]: Reached target slices.target. Jul 10 00:40:28.213563 systemd[1]: Reached target swap.target. Jul 10 00:40:28.213573 systemd[1]: Reached target torcx.target. Jul 10 00:40:28.213583 systemd[1]: Reached target veritysetup.target. Jul 10 00:40:28.213594 systemd[1]: Listening on systemd-coredump.socket. Jul 10 00:40:28.213604 systemd[1]: Listening on systemd-initctl.socket. Jul 10 00:40:28.213615 systemd[1]: Listening on systemd-networkd.socket. Jul 10 00:40:28.213626 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 00:40:28.213637 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 00:40:28.213647 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 00:40:28.213657 systemd[1]: Mounting dev-hugepages.mount... Jul 10 00:40:28.213667 systemd[1]: Mounting dev-mqueue.mount... Jul 10 00:40:28.213677 systemd[1]: Mounting media.mount... Jul 10 00:40:28.213688 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 00:40:28.213698 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 00:40:28.213708 systemd[1]: Mounting tmp.mount... Jul 10 00:40:28.213718 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 00:40:28.213729 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:40:28.213740 systemd[1]: Starting kmod-static-nodes.service... Jul 10 00:40:28.213750 systemd[1]: Starting modprobe@configfs.service... Jul 10 00:40:28.213760 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:40:28.213770 systemd[1]: Starting modprobe@drm.service... Jul 10 00:40:28.213780 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:40:28.213790 systemd[1]: Starting modprobe@fuse.service... Jul 10 00:40:28.213800 systemd[1]: Starting modprobe@loop.service... Jul 10 00:40:28.213810 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:40:28.213821 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:40:28.213832 systemd[1]: Stopped systemd-fsck-root.service. Jul 10 00:40:28.213842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:40:28.213852 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:40:28.213863 systemd[1]: Stopped systemd-journald.service. Jul 10 00:40:28.213873 systemd[1]: Starting systemd-journald.service... Jul 10 00:40:28.213883 systemd[1]: Starting systemd-modules-load.service... Jul 10 00:40:28.213894 systemd[1]: Starting systemd-network-generator.service... Jul 10 00:40:28.213904 systemd[1]: Starting systemd-remount-fs.service... Jul 10 00:40:28.213914 kernel: fuse: init (API version 7.34) Jul 10 00:40:28.213926 kernel: loop: module loaded Jul 10 00:40:28.213937 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 00:40:28.213954 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:40:28.213968 systemd[1]: Stopped verity-setup.service. 
Jul 10 00:40:28.213978 systemd[1]: Mounted dev-hugepages.mount. Jul 10 00:40:28.213988 systemd[1]: Mounted dev-mqueue.mount. Jul 10 00:40:28.213998 systemd[1]: Mounted media.mount. Jul 10 00:40:28.214009 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 00:40:28.214019 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 00:40:28.214029 systemd[1]: Mounted tmp.mount. Jul 10 00:40:28.214040 systemd[1]: Finished kmod-static-nodes.service. Jul 10 00:40:28.214051 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:40:28.214061 systemd[1]: Finished modprobe@configfs.service. Jul 10 00:40:28.214072 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:40:28.214083 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:40:28.214093 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:40:28.214104 systemd[1]: Finished modprobe@drm.service. Jul 10 00:40:28.214115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:40:28.214126 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:40:28.214136 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:40:28.214147 systemd[1]: Finished modprobe@fuse.service. Jul 10 00:40:28.214157 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:40:28.214168 systemd[1]: Finished modprobe@loop.service. Jul 10 00:40:28.214178 systemd[1]: Finished systemd-modules-load.service. Jul 10 00:40:28.214188 systemd[1]: Finished systemd-network-generator.service. Jul 10 00:40:28.214369 systemd[1]: Finished systemd-remount-fs.service. Jul 10 00:40:28.214391 systemd[1]: Reached target network-pre.target. Jul 10 00:40:28.214402 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 00:40:28.214415 systemd-journald[992]: Journal started Jul 10 00:40:28.214462 systemd-journald[992]: Runtime Journal (/run/log/journal/4c13d0629e0e4fb18c8e20fa1d047c66) is 6.0M, max 48.7M, 42.6M free. 
Jul 10 00:40:26.303000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:40:26.378000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:40:26.378000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 00:40:26.378000 audit: BPF prog-id=10 op=LOAD Jul 10 00:40:26.378000 audit: BPF prog-id=10 op=UNLOAD Jul 10 00:40:26.378000 audit: BPF prog-id=11 op=LOAD Jul 10 00:40:26.378000 audit: BPF prog-id=11 op=UNLOAD Jul 10 00:40:26.414000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 10 00:40:26.414000 audit[934]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400018b8cc a1=4000028e40 a2=4000027100 a3=32 items=0 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:40:26.414000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:40:26.415000 audit[934]: AVC avc: denied { associate } for pid=934 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 10 00:40:26.415000 audit[934]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400018b9a5 a2=1ed a3=0 items=2 ppid=917 pid=934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:40:26.415000 audit: CWD cwd="/" Jul 10 00:40:26.415000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:40:26.415000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 00:40:26.415000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 10 00:40:28.072000 audit: BPF prog-id=12 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=3 op=UNLOAD Jul 10 00:40:28.072000 audit: BPF prog-id=13 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=14 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=4 op=UNLOAD Jul 10 00:40:28.072000 audit: BPF prog-id=5 op=UNLOAD Jul 10 00:40:28.072000 audit: BPF prog-id=15 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=12 op=UNLOAD Jul 10 00:40:28.072000 
audit: BPF prog-id=16 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=17 op=LOAD Jul 10 00:40:28.072000 audit: BPF prog-id=13 op=UNLOAD Jul 10 00:40:28.072000 audit: BPF prog-id=14 op=UNLOAD Jul 10 00:40:28.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.082000 audit: BPF prog-id=15 op=UNLOAD Jul 10 00:40:28.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.160000 audit: BPF prog-id=18 op=LOAD Jul 10 00:40:28.163000 audit: BPF prog-id=19 op=LOAD Jul 10 00:40:28.163000 audit: BPF prog-id=20 op=LOAD Jul 10 00:40:28.163000 audit: BPF prog-id=16 op=UNLOAD Jul 10 00:40:28.163000 audit: BPF prog-id=17 op=UNLOAD Jul 10 00:40:28.181000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:28.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.201000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 00:40:28.201000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcd0dc890 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:40:28.201000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 00:40:28.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:28.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:26.412477 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:40:28.070362 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:40:26.412735 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:40:28.070375 systemd[1]: Unnecessary job was removed for dev-vda6.device. Jul 10 00:40:28.217961 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 00:40:28.217984 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:40:26.412752 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:40:28.073672 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:40:26.412781 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 10 00:40:26.412790 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 10 00:40:26.412818 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 10 00:40:26.412829 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 10 00:40:26.413038 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 10 00:40:26.413073 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 10 00:40:26.413084 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 10 00:40:26.413860 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 10 00:40:26.413893 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 10 00:40:26.413911 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=info msg="store skipped" err="open 
/usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Jul 10 00:40:26.413924 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 10 00:40:26.413941 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Jul 10 00:40:26.413964 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 10 00:40:27.828359 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:40:27.828614 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:40:27.828705 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:40:27.828869 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 10 00:40:27.828916 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 10 00:40:27.828983 /usr/lib/systemd/system-generators/torcx-generator[934]: time="2025-07-10T00:40:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 10 00:40:28.223256 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 00:40:28.225258 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:40:28.226584 systemd[1]: Starting systemd-random-seed.service... Jul 10 00:40:28.228217 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:40:28.229246 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:40:28.234465 systemd[1]: Started systemd-journald.service. Jul 10 00:40:28.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 00:40:28.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.233296 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 00:40:28.234041 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 00:40:28.234852 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 00:40:28.235832 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 00:40:28.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.236774 systemd[1]: Finished systemd-random-seed.service. Jul 10 00:40:28.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.237885 systemd[1]: Reached target first-boot-complete.target. Jul 10 00:40:28.239589 systemd[1]: Starting systemd-journal-flush.service... Jul 10 00:40:28.241344 systemd[1]: Starting systemd-sysusers.service... Jul 10 00:40:28.242887 systemd[1]: Starting systemd-udev-settle.service... Jul 10 00:40:28.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.248958 systemd-journald[992]: Time spent on flushing to /var/log/journal/4c13d0629e0e4fb18c8e20fa1d047c66 is 16.673ms for 983 entries. Jul 10 00:40:28.248958 systemd-journald[992]: System Journal (/var/log/journal/4c13d0629e0e4fb18c8e20fa1d047c66) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:40:28.273910 systemd-journald[992]: Received client request to flush runtime journal. Jul 10 00:40:28.260000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.245066 systemd[1]: Finished systemd-sysctl.service. Jul 10 00:40:28.274167 udevadm[1035]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 10 00:40:28.259671 systemd[1]: Finished systemd-sysusers.service. Jul 10 00:40:28.274786 systemd[1]: Finished systemd-journal-flush.service. Jul 10 00:40:28.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.609036 systemd[1]: Finished systemd-hwdb-update.service. Jul 10 00:40:28.610000 audit: BPF prog-id=21 op=LOAD Jul 10 00:40:28.610000 audit: BPF prog-id=22 op=LOAD Jul 10 00:40:28.610000 audit: BPF prog-id=7 op=UNLOAD Jul 10 00:40:28.610000 audit: BPF prog-id=8 op=UNLOAD Jul 10 00:40:28.611505 systemd[1]: Starting systemd-udevd.service... 
Jul 10 00:40:28.627112 systemd-udevd[1037]: Using default interface naming scheme 'v252'. Jul 10 00:40:28.640238 systemd[1]: Started systemd-udevd.service. Jul 10 00:40:28.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.642000 audit: BPF prog-id=23 op=LOAD Jul 10 00:40:28.646384 systemd[1]: Starting systemd-networkd.service... Jul 10 00:40:28.652000 audit: BPF prog-id=24 op=LOAD Jul 10 00:40:28.652000 audit: BPF prog-id=25 op=LOAD Jul 10 00:40:28.652000 audit: BPF prog-id=26 op=LOAD Jul 10 00:40:28.653520 systemd[1]: Starting systemd-userdbd.service... Jul 10 00:40:28.678191 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Jul 10 00:40:28.681025 systemd[1]: Started systemd-userdbd.service. Jul 10 00:40:28.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.719730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 00:40:28.740321 systemd-networkd[1051]: lo: Link UP Jul 10 00:40:28.740334 systemd-networkd[1051]: lo: Gained carrier Jul 10 00:40:28.740706 systemd-networkd[1051]: Enumeration completed Jul 10 00:40:28.740835 systemd[1]: Started systemd-networkd.service. Jul 10 00:40:28.741116 systemd-networkd[1051]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:40:28.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.743072 systemd-networkd[1051]: eth0: Link UP Jul 10 00:40:28.743082 systemd-networkd[1051]: eth0: Gained carrier Jul 10 00:40:28.751625 systemd[1]: Finished systemd-udev-settle.service. Jul 10 00:40:28.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.753605 systemd[1]: Starting lvm2-activation-early.service... Jul 10 00:40:28.762356 systemd-networkd[1051]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:40:28.763216 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:40:28.795142 systemd[1]: Finished lvm2-activation-early.service. Jul 10 00:40:28.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.795988 systemd[1]: Reached target cryptsetup.target. Jul 10 00:40:28.797792 systemd[1]: Starting lvm2-activation.service... Jul 10 00:40:28.801491 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:40:28.833141 systemd[1]: Finished lvm2-activation.service. Jul 10 00:40:28.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.833951 systemd[1]: Reached target local-fs-pre.target. 
Jul 10 00:40:28.834597 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:40:28.834626 systemd[1]: Reached target local-fs.target. Jul 10 00:40:28.835181 systemd[1]: Reached target machines.target. Jul 10 00:40:28.837065 systemd[1]: Starting ldconfig.service... Jul 10 00:40:28.838057 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:40:28.838119 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:28.839307 systemd[1]: Starting systemd-boot-update.service... Jul 10 00:40:28.841323 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 00:40:28.843380 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 00:40:28.846172 systemd[1]: Starting systemd-sysext.service... Jul 10 00:40:28.847257 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1073 (bootctl) Jul 10 00:40:28.848508 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 00:40:28.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.851132 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 00:40:28.861148 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 00:40:28.867068 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 00:40:28.867378 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 00:40:28.881221 kernel: loop0: detected capacity change from 0 to 203944 Jul 10 00:40:28.924611 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:40:28.926217 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 00:40:28.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.931816 systemd-fsck[1081]: fsck.fat 4.2 (2021-01-31) Jul 10 00:40:28.931816 systemd-fsck[1081]: /dev/vda1: 236 files, 117310/258078 clusters Jul 10 00:40:28.933167 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:40:28.935900 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 00:40:28.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.938449 systemd[1]: Mounting boot.mount... Jul 10 00:40:28.945451 systemd[1]: Mounted boot.mount. Jul 10 00:40:28.954637 systemd[1]: Finished systemd-boot-update.service. Jul 10 00:40:28.955381 kernel: loop1: detected capacity change from 0 to 203944 Jul 10 00:40:28.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.961388 (sd-sysext)[1086]: Using extensions 'kubernetes'. Jul 10 00:40:28.961828 (sd-sysext)[1086]: Merged extensions into '/usr'. 
Jul 10 00:40:28.980323 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:40:28.982591 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:40:28.985275 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:40:28.988426 systemd[1]: Starting modprobe@loop.service... Jul 10 00:40:28.989503 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:40:28.989846 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:28.991284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:40:28.991601 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:40:28.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.993430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:40:28.993672 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:40:28.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.995598 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:40:28.995840 systemd[1]: Finished modprobe@loop.service. Jul 10 00:40:28.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:28.997866 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:40:28.998145 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.033506 ldconfig[1072]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:40:29.036876 systemd[1]: Finished ldconfig.service. Jul 10 00:40:29.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.185314 systemd[1]: Mounting usr-share-oem.mount... Jul 10 00:40:29.190411 systemd[1]: Mounted usr-share-oem.mount. Jul 10 00:40:29.192408 systemd[1]: Finished systemd-sysext.service. 
Jul 10 00:40:29.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.194663 systemd[1]: Starting ensure-sysext.service... Jul 10 00:40:29.196619 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 00:40:29.200822 systemd[1]: Reloading. Jul 10 00:40:29.208236 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 00:40:29.210378 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:40:29.213298 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:40:29.248840 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-10T00:40:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:40:29.249385 /usr/lib/systemd/system-generators/torcx-generator[1113]: time="2025-07-10T00:40:29Z" level=info msg="torcx already run" Jul 10 00:40:29.306441 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:40:29.306462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:40:29.323431 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:40:29.370000 audit: BPF prog-id=27 op=LOAD Jul 10 00:40:29.370000 audit: BPF prog-id=18 op=UNLOAD Jul 10 00:40:29.370000 audit: BPF prog-id=28 op=LOAD Jul 10 00:40:29.370000 audit: BPF prog-id=29 op=LOAD Jul 10 00:40:29.370000 audit: BPF prog-id=19 op=UNLOAD Jul 10 00:40:29.370000 audit: BPF prog-id=20 op=UNLOAD Jul 10 00:40:29.371000 audit: BPF prog-id=30 op=LOAD Jul 10 00:40:29.371000 audit: BPF prog-id=23 op=UNLOAD Jul 10 00:40:29.371000 audit: BPF prog-id=31 op=LOAD Jul 10 00:40:29.371000 audit: BPF prog-id=32 op=LOAD Jul 10 00:40:29.371000 audit: BPF prog-id=21 op=UNLOAD Jul 10 00:40:29.371000 audit: BPF prog-id=22 op=UNLOAD Jul 10 00:40:29.373000 audit: BPF prog-id=33 op=LOAD Jul 10 00:40:29.373000 audit: BPF prog-id=24 op=UNLOAD Jul 10 00:40:29.373000 audit: BPF prog-id=34 op=LOAD Jul 10 00:40:29.373000 audit: BPF prog-id=35 op=LOAD Jul 10 00:40:29.373000 audit: BPF prog-id=25 op=UNLOAD Jul 10 00:40:29.373000 audit: BPF prog-id=26 op=UNLOAD Jul 10 00:40:29.375929 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 00:40:29.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.380687 systemd[1]: Starting audit-rules.service... Jul 10 00:40:29.382698 systemd[1]: Starting clean-ca-certificates.service... Jul 10 00:40:29.385084 systemd[1]: Starting systemd-journal-catalog-update.service... 
Jul 10 00:40:29.387000 audit: BPF prog-id=36 op=LOAD Jul 10 00:40:29.389248 systemd[1]: Starting systemd-resolved.service... Jul 10 00:40:29.391000 audit: BPF prog-id=37 op=LOAD Jul 10 00:40:29.394468 systemd[1]: Starting systemd-timesyncd.service... Jul 10 00:40:29.396648 systemd[1]: Starting systemd-update-utmp.service... Jul 10 00:40:29.401000 audit[1163]: SYSTEM_BOOT pid=1163 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.404608 systemd[1]: Finished clean-ca-certificates.service. Jul 10 00:40:29.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.406050 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.407679 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:40:29.409903 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:40:29.412288 systemd[1]: Starting modprobe@loop.service... Jul 10 00:40:29.413330 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.413530 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:29.413696 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:40:29.414684 systemd[1]: Finished systemd-update-utmp.service. Jul 10 00:40:29.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.416516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:40:29.416658 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:40:29.418080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:40:29.418231 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:40:29.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.419728 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:40:29.419855 systemd[1]: Finished modprobe@loop.service. 
Jul 10 00:40:29.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.421904 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:40:29.422036 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.422819 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 00:40:29.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.428610 systemd[1]: Starting systemd-update-done.service... Jul 10 00:40:29.431908 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.433666 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:40:29.436136 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:40:29.438617 systemd[1]: Starting modprobe@loop.service... Jul 10 00:40:29.439697 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.439839 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:29.440179 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:40:29.441534 systemd[1]: Finished systemd-update-done.service. Jul 10 00:40:29.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.442874 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:40:29.443016 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:40:29.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.444375 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 10 00:40:29.444507 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:40:29.445897 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:40:29.446034 systemd[1]: Finished modprobe@loop.service. Jul 10 00:40:29.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 00:40:29.447606 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:40:29.447711 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.450374 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.452317 systemd-resolved[1156]: Positive Trust Anchors: Jul 10 00:40:29.452328 systemd-resolved[1156]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:40:29.452356 systemd-resolved[1156]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 00:40:29.453542 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 00:40:29.453000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 00:40:29.453000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdf8f95d0 a2=420 a3=0 items=0 ppid=1152 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 00:40:29.453000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 00:40:29.454192 augenrules[1179]: No rules Jul 10 00:40:29.455394 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:40:29.455691 systemd-timesyncd[1162]: Initial clock synchronization to Thu 2025-07-10 00:40:29.146897 UTC. Jul 10 00:40:29.456026 systemd[1]: Starting modprobe@drm.service... Jul 10 00:40:29.458370 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 00:40:29.460580 systemd[1]: Starting modprobe@loop.service... Jul 10 00:40:29.461313 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.461487 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:29.462951 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 00:40:29.463847 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 10 00:40:29.464886 systemd[1]: Started systemd-timesyncd.service. Jul 10 00:40:29.466493 systemd[1]: Finished audit-rules.service. Jul 10 00:40:29.467511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:40:29.467645 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 00:40:29.468670 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:40:29.468797 systemd[1]: Finished modprobe@drm.service. Jul 10 00:40:29.469486 systemd-resolved[1156]: Defaulting to hostname 'linux'. Jul 10 00:40:29.469860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:40:29.469994 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 00:40:29.471031 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:40:29.471152 systemd[1]: Finished modprobe@loop.service. Jul 10 00:40:29.472024 systemd[1]: Started systemd-resolved.service. Jul 10 00:40:29.473280 systemd[1]: Reached target network.target. Jul 10 00:40:29.473887 systemd[1]: Reached target nss-lookup.target. Jul 10 00:40:29.474766 systemd[1]: Reached target time-set.target. Jul 10 00:40:29.475362 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:40:29.475395 systemd[1]: Reached target sysinit.target. Jul 10 00:40:29.476014 systemd[1]: Started motdgen.path. Jul 10 00:40:29.476566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 00:40:29.477517 systemd[1]: Started logrotate.timer. Jul 10 00:40:29.478162 systemd[1]: Started mdadm.timer. Jul 10 00:40:29.478693 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 00:40:29.479476 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:40:29.479506 systemd[1]: Reached target paths.target. Jul 10 00:40:29.480056 systemd[1]: Reached target timers.target. Jul 10 00:40:29.481002 systemd[1]: Listening on dbus.socket. Jul 10 00:40:29.482694 systemd[1]: Starting docker.socket... Jul 10 00:40:29.486002 systemd[1]: Listening on sshd.socket. Jul 10 00:40:29.486756 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:29.486824 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.487685 systemd[1]: Finished ensure-sysext.service. Jul 10 00:40:29.488552 systemd[1]: Listening on docker.socket. Jul 10 00:40:29.490077 systemd[1]: Reached target sockets.target. Jul 10 00:40:29.490722 systemd[1]: Reached target basic.target. Jul 10 00:40:29.491394 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.491428 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 00:40:29.492584 systemd[1]: Starting containerd.service... Jul 10 00:40:29.494341 systemd[1]: Starting dbus.service... Jul 10 00:40:29.495847 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 00:40:29.497840 systemd[1]: Starting extend-filesystems.service... Jul 10 00:40:29.498569 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 00:40:29.500021 systemd[1]: Starting motdgen.service... 
Jul 10 00:40:29.501876 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 00:40:29.505445 systemd[1]: Starting sshd-keygen.service... Jul 10 00:40:29.511112 systemd[1]: Starting systemd-logind.service... Jul 10 00:40:29.511972 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 00:40:29.515861 jq[1194]: false Jul 10 00:40:29.512061 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:40:29.515501 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:40:29.516382 systemd[1]: Starting update-engine.service... Jul 10 00:40:29.518617 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 00:40:29.521325 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:40:29.521520 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 00:40:29.521821 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:40:29.521976 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 00:40:29.523972 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:40:29.524127 systemd[1]: Finished motdgen.service. Jul 10 00:40:29.528022 extend-filesystems[1195]: Found loop1 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda1 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda2 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda3 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found usr Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda4 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda6 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda7 Jul 10 00:40:29.528022 extend-filesystems[1195]: Found vda9 Jul 10 00:40:29.528022 extend-filesystems[1195]: Checking size of /dev/vda9 Jul 10 00:40:29.541375 jq[1212]: true Jul 10 00:40:29.542239 jq[1216]: true Jul 10 00:40:29.547970 dbus-daemon[1193]: [system] SELinux support is enabled Jul 10 00:40:29.548166 systemd[1]: Started dbus.service. Jul 10 00:40:29.550852 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:40:29.550888 systemd[1]: Reached target system-config.target. Jul 10 00:40:29.551795 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:40:29.551820 systemd[1]: Reached target user-config.target. Jul 10 00:40:29.575052 systemd-logind[1205]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:40:29.575348 systemd-logind[1205]: New seat seat0. Jul 10 00:40:29.575869 extend-filesystems[1195]: Resized partition /dev/vda9 Jul 10 00:40:29.592991 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:40:29.593043 extend-filesystems[1240]: resize2fs 1.46.5 (30-Dec-2021) Jul 10 00:40:29.595148 systemd[1]: Started systemd-logind.service. 
Jul 10 00:40:29.595957 update_engine[1211]: I0710 00:40:29.595670 1211 main.cc:92] Flatcar Update Engine starting Jul 10 00:40:29.609364 update_engine[1211]: I0710 00:40:29.598955 1211 update_check_scheduler.cc:74] Next update check in 2m9s Jul 10 00:40:29.599187 systemd[1]: Started update-engine.service. Jul 10 00:40:29.602259 systemd[1]: Started locksmithd.service. Jul 10 00:40:29.611101 bash[1239]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:40:29.612068 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 00:40:29.614213 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:40:29.632103 extend-filesystems[1240]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:40:29.632103 extend-filesystems[1240]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:40:29.632103 extend-filesystems[1240]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:40:29.636208 extend-filesystems[1195]: Resized filesystem in /dev/vda9 Jul 10 00:40:29.634136 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:40:29.634346 systemd[1]: Finished extend-filesystems.service. Jul 10 00:40:29.639496 env[1214]: time="2025-07-10T00:40:29.639445680Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 00:40:29.659416 env[1214]: time="2025-07-10T00:40:29.659367640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:40:29.659723 env[1214]: time="2025-07-10T00:40:29.659702960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:40:29.661048 env[1214]: time="2025-07-10T00:40:29.660989960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:40:29.661048 env[1214]: time="2025-07-10T00:40:29.661043720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:40:29.661557 env[1214]: time="2025-07-10T00:40:29.661517200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:40:29.661557 env[1214]: time="2025-07-10T00:40:29.661543360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:40:29.661986 env[1214]: time="2025-07-10T00:40:29.661668360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 00:40:29.661986 env[1214]: time="2025-07-10T00:40:29.661981600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:40:29.662118 env[1214]: time="2025-07-10T00:40:29.662100960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:40:29.662695 env[1214]: time="2025-07-10T00:40:29.662668320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:40:29.663120 env[1214]: time="2025-07-10T00:40:29.663087360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:40:29.663120 env[1214]: time="2025-07-10T00:40:29.663114280Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:40:29.663320 env[1214]: time="2025-07-10T00:40:29.663300800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 00:40:29.663357 env[1214]: time="2025-07-10T00:40:29.663322160Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:40:29.664729 locksmithd[1241]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:40:29.666750 env[1214]: time="2025-07-10T00:40:29.666708920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:40:29.666813 env[1214]: time="2025-07-10T00:40:29.666753360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:40:29.666813 env[1214]: time="2025-07-10T00:40:29.666767920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:40:29.666813 env[1214]: time="2025-07-10T00:40:29.666803400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.666869 env[1214]: time="2025-07-10T00:40:29.666818240Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.666869 env[1214]: time="2025-07-10T00:40:29.666834520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.666869 env[1214]: time="2025-07-10T00:40:29.666848600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667252 env[1214]: time="2025-07-10T00:40:29.667230160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667280 env[1214]: time="2025-07-10T00:40:29.667258840Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667280 env[1214]: time="2025-07-10T00:40:29.667273280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667322 env[1214]: time="2025-07-10T00:40:29.667286160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667322 env[1214]: time="2025-07-10T00:40:29.667300160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:40:29.667445 env[1214]: time="2025-07-10T00:40:29.667430280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:40:29.667525 env[1214]: time="2025-07-10T00:40:29.667512480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 10 00:40:29.667763 env[1214]: time="2025-07-10T00:40:29.667749200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:40:29.667784 env[1214]: time="2025-07-10T00:40:29.667777800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.667803 env[1214]: time="2025-07-10T00:40:29.667791400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:40:29.667903 env[1214]: time="2025-07-10T00:40:29.667892680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.667925 env[1214]: time="2025-07-10T00:40:29.667908600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.667925 env[1214]: time="2025-07-10T00:40:29.667922040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.667970 env[1214]: time="2025-07-10T00:40:29.667947880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.667970 env[1214]: time="2025-07-10T00:40:29.667961920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668006 env[1214]: time="2025-07-10T00:40:29.667973280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668006 env[1214]: time="2025-07-10T00:40:29.667985080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668006 env[1214]: time="2025-07-10T00:40:29.667996920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668069 env[1214]: time="2025-07-10T00:40:29.668009720Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:40:29.668154 env[1214]: time="2025-07-10T00:40:29.668138120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668186 env[1214]: time="2025-07-10T00:40:29.668158920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668186 env[1214]: time="2025-07-10T00:40:29.668172760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:40:29.668232 env[1214]: time="2025-07-10T00:40:29.668184360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:40:29.668232 env[1214]: time="2025-07-10T00:40:29.668214760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 00:40:29.668232 env[1214]: time="2025-07-10T00:40:29.668226360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:40:29.668287 env[1214]: time="2025-07-10T00:40:29.668242880Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 00:40:29.668287 env[1214]: time="2025-07-10T00:40:29.668277480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:40:29.668507 env[1214]: time="2025-07-10T00:40:29.668462960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:40:29.669112 env[1214]: time="2025-07-10T00:40:29.668521880Z" level=info msg="Connect containerd service" Jul 10 00:40:29.669112 env[1214]: time="2025-07-10T00:40:29.668559360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:40:29.669274 env[1214]: time="2025-07-10T00:40:29.669250840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:40:29.669614 env[1214]: time="2025-07-10T00:40:29.669597440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:40:29.669663 env[1214]: time="2025-07-10T00:40:29.669625760Z" level=info msg="Start subscribing containerd event" Jul 10 00:40:29.669686 env[1214]: time="2025-07-10T00:40:29.669645040Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 10 00:40:29.669706 env[1214]: time="2025-07-10T00:40:29.669687160Z" level=info msg="Start recovering state" Jul 10 00:40:29.669736 env[1214]: time="2025-07-10T00:40:29.669724760Z" level=info msg="containerd successfully booted in 0.035026s" Jul 10 00:40:29.669775 env[1214]: time="2025-07-10T00:40:29.669763240Z" level=info msg="Start event monitor" Jul 10 00:40:29.669796 env[1214]: time="2025-07-10T00:40:29.669783840Z" level=info msg="Start snapshots syncer" Jul 10 00:40:29.669819 env[1214]: time="2025-07-10T00:40:29.669795640Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:40:29.669819 env[1214]: time="2025-07-10T00:40:29.669803800Z" level=info msg="Start streaming server" Jul 10 00:40:29.669801 systemd[1]: Started containerd.service. Jul 10 00:40:29.805429 systemd-networkd[1051]: eth0: Gained IPv6LL Jul 10 00:40:29.807232 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 00:40:29.808482 systemd[1]: Reached target network-online.target. Jul 10 00:40:29.810855 systemd[1]: Starting kubelet.service... Jul 10 00:40:30.371789 systemd[1]: Started kubelet.service. Jul 10 00:40:30.524388 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:40:30.543063 systemd[1]: Finished sshd-keygen.service. Jul 10 00:40:30.545279 systemd[1]: Starting issuegen.service... Jul 10 00:40:30.550303 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:40:30.550467 systemd[1]: Finished issuegen.service. Jul 10 00:40:30.552528 systemd[1]: Starting systemd-user-sessions.service... Jul 10 00:40:30.558746 systemd[1]: Finished systemd-user-sessions.service. Jul 10 00:40:30.560842 systemd[1]: Started getty@tty1.service. Jul 10 00:40:30.562973 systemd[1]: Started serial-getty@ttyAMA0.service. Jul 10 00:40:30.564058 systemd[1]: Reached target getty.target. Jul 10 00:40:30.564939 systemd[1]: Reached target multi-user.target. Jul 10 00:40:30.567229 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 00:40:30.574130 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 10 00:40:30.574314 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 00:40:30.575432 systemd[1]: Startup finished in 562ms (kernel) + 4.692s (initrd) + 4.315s (userspace) = 9.570s. Jul 10 00:40:30.789675 kubelet[1257]: E0710 00:40:30.789572 1257 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:40:30.791462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:40:30.791581 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:40:34.175929 systemd[1]: Created slice system-sshd.slice. Jul 10 00:40:34.177033 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:55160.service. Jul 10 00:40:34.223162 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 55160 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:40:34.225860 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.236555 systemd-logind[1205]: New session 1 of user core. Jul 10 00:40:34.237515 systemd[1]: Created slice user-500.slice. Jul 10 00:40:34.238588 systemd[1]: Starting user-runtime-dir@500.service... 
Jul 10 00:40:34.246934 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 00:40:34.248310 systemd[1]: Starting user@500.service... Jul 10 00:40:34.251188 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.311912 systemd[1282]: Queued start job for default target default.target. Jul 10 00:40:34.312552 systemd[1282]: Reached target paths.target. Jul 10 00:40:34.312588 systemd[1282]: Reached target sockets.target. Jul 10 00:40:34.312599 systemd[1282]: Reached target timers.target. Jul 10 00:40:34.312620 systemd[1282]: Reached target basic.target. Jul 10 00:40:34.312664 systemd[1282]: Reached target default.target. Jul 10 00:40:34.312689 systemd[1282]: Startup finished in 55ms. Jul 10 00:40:34.312743 systemd[1]: Started user@500.service. Jul 10 00:40:34.313760 systemd[1]: Started session-1.scope. Jul 10 00:40:34.363225 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:55164.service. Jul 10 00:40:34.429359 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 55164 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:40:34.430765 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.435848 systemd[1]: Started session-2.scope. Jul 10 00:40:34.435991 systemd-logind[1205]: New session 2 of user core. Jul 10 00:40:34.489163 sshd[1291]: pam_unix(sshd:session): session closed for user core Jul 10 00:40:34.493384 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:55172.service. Jul 10 00:40:34.493828 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:55164.service: Deactivated successfully. Jul 10 00:40:34.494587 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:40:34.497437 systemd-logind[1205]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:40:34.500776 systemd-logind[1205]: Removed session 2. Jul 10 00:40:34.538576 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 55172 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:40:34.540094 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.544726 systemd[1]: Started session-3.scope. Jul 10 00:40:34.545018 systemd-logind[1205]: New session 3 of user core. Jul 10 00:40:34.594772 sshd[1296]: pam_unix(sshd:session): session closed for user core Jul 10 00:40:34.598775 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:55182.service. Jul 10 00:40:34.601633 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:55172.service: Deactivated successfully. Jul 10 00:40:34.602248 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:40:34.603013 systemd-logind[1205]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:40:34.603721 systemd-logind[1205]: Removed session 3. Jul 10 00:40:34.641894 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 55182 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:40:34.643404 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.647054 systemd-logind[1205]: New session 4 of user core. Jul 10 00:40:34.648264 systemd[1]: Started session-4.scope. Jul 10 00:40:34.700291 sshd[1302]: pam_unix(sshd:session): session closed for user core Jul 10 00:40:34.702999 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:55182.service: Deactivated successfully. Jul 10 00:40:34.703569 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:40:34.705778 systemd-logind[1205]: Session 4 logged out. Waiting for processes to exit. 
Jul 10 00:40:34.707058 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:55196.service. Jul 10 00:40:34.707853 systemd-logind[1205]: Removed session 4. Jul 10 00:40:34.755599 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 55196 ssh2: RSA SHA256:qOVwIcEhxIMbnnVzVACNg4ZPFMKwsyA0M9qFZXlj7es Jul 10 00:40:34.756926 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:40:34.760360 systemd-logind[1205]: New session 5 of user core. Jul 10 00:40:34.761181 systemd[1]: Started session-5.scope. Jul 10 00:40:34.817994 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:40:34.818228 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 00:40:34.836168 systemd[1]: Starting coreos-metadata.service... Jul 10 00:40:34.841580 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:40:34.841749 systemd[1]: Finished coreos-metadata.service. Jul 10 00:40:35.366928 systemd[1]: Stopped kubelet.service. Jul 10 00:40:35.368953 systemd[1]: Starting kubelet.service... Jul 10 00:40:35.391331 systemd[1]: Reloading. Jul 10 00:40:35.443388 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-07-10T00:40:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 00:40:35.443417 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-07-10T00:40:35Z" level=info msg="torcx already run" Jul 10 00:40:35.619175 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 00:40:35.619208 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 00:40:35.635641 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:40:35.703376 systemd[1]: Started kubelet.service. Jul 10 00:40:35.704873 systemd[1]: Stopping kubelet.service... Jul 10 00:40:35.705326 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:40:35.705603 systemd[1]: Stopped kubelet.service. Jul 10 00:40:35.707373 systemd[1]: Starting kubelet.service... Jul 10 00:40:35.798192 systemd[1]: Started kubelet.service. Jul 10 00:40:35.834905 kubelet[1414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:40:35.834905 kubelet[1414]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:40:35.834905 kubelet[1414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:40:35.835278 kubelet[1414]: I0710 00:40:35.834957 1414 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:40:36.929314 kubelet[1414]: I0710 00:40:36.929269 1414 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:40:36.929314 kubelet[1414]: I0710 00:40:36.929299 1414 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:40:36.929629 kubelet[1414]: I0710 00:40:36.929559 1414 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:40:36.997134 kubelet[1414]: I0710 00:40:36.997104 1414 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:40:37.015569 kubelet[1414]: E0710 00:40:37.015504 1414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:40:37.015569 kubelet[1414]: I0710 00:40:37.015559 1414 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:40:37.020969 kubelet[1414]: I0710 00:40:37.020942 1414 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 00:40:37.022898 kubelet[1414]: I0710 00:40:37.022868 1414 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:40:37.023078 kubelet[1414]: I0710 00:40:37.023044 1414 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:40:37.023255 kubelet[1414]: I0710 00:40:37.023073 1414 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.111","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:40:37.023348 kubelet[1414]: I0710 00:40:37.023325 1414 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 10 00:40:37.023348 kubelet[1414]: I0710 00:40:37.023334 1414 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:40:37.023594 kubelet[1414]: I0710 00:40:37.023570 1414 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:40:37.026150 kubelet[1414]: I0710 00:40:37.025895 1414 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:40:37.026285 kubelet[1414]: I0710 00:40:37.026272 1414 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:40:37.026367 kubelet[1414]: I0710 00:40:37.026355 1414 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:40:37.026506 kubelet[1414]: I0710 00:40:37.026495 1414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:40:37.027430 kubelet[1414]: E0710 00:40:37.027374 1414 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:37.027493 kubelet[1414]: E0710 00:40:37.027452 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:37.038682 kubelet[1414]: I0710 00:40:37.038652 1414 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 00:40:37.039521 kubelet[1414]: I0710 00:40:37.039500 1414 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:40:37.039698 kubelet[1414]: W0710 00:40:37.039680 1414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:40:37.040758 kubelet[1414]: I0710 00:40:37.040741 1414 server.go:1274] "Started kubelet" Jul 10 00:40:37.041277 kubelet[1414]: I0710 00:40:37.041239 1414 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:40:37.043030 kubelet[1414]: I0710 00:40:37.043008 1414 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:40:37.044867 kubelet[1414]: I0710 00:40:37.044805 1414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:40:37.045161 kubelet[1414]: I0710 00:40:37.045143 1414 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:40:37.047595 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 10 00:40:37.047656 kubelet[1414]: E0710 00:40:37.047010 1414 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:40:37.047839 kubelet[1414]: I0710 00:40:37.047820 1414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:40:37.048163 kubelet[1414]: I0710 00:40:37.048135 1414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:40:37.048729 kubelet[1414]: I0710 00:40:37.048711 1414 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:40:37.048852 kubelet[1414]: I0710 00:40:37.048840 1414 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:40:37.048919 kubelet[1414]: I0710 00:40:37.048910 1414 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:40:37.049484 kubelet[1414]: E0710 00:40:37.049457 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.111\" not found" Jul 10 00:40:37.049647 kubelet[1414]: I0710 00:40:37.049631 1414 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:40:37.049806 kubelet[1414]: I0710 00:40:37.049786 1414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:40:37.051575 kubelet[1414]: I0710 00:40:37.051553 1414 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:40:37.062426 kubelet[1414]: I0710 00:40:37.062404 1414 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:40:37.062617 kubelet[1414]: I0710 00:40:37.062600 1414 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:40:37.062717 kubelet[1414]: I0710 00:40:37.062707 1414 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:40:37.068599 kubelet[1414]: E0710 00:40:37.068541 1414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.111\" not found" node="10.0.0.111" Jul 10 00:40:37.149948 kubelet[1414]: I0710 00:40:37.149925 1414 policy_none.go:49] "None policy: Start" Jul 10 00:40:37.150214 kubelet[1414]: E0710 00:40:37.150130 1414 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.111\" not found" Jul 10 00:40:37.150977 kubelet[1414]: I0710 00:40:37.150959 1414 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:40:37.151086 kubelet[1414]: I0710 00:40:37.151072 1414 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:40:37.162195 systemd[1]: Created slice kubepods.slice. Jul 10 00:40:37.167881 systemd[1]: Created slice kubepods-burstable.slice. Jul 10 00:40:37.170412 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 10 00:40:37.178090 kubelet[1414]: I0710 00:40:37.178052 1414 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:40:37.178529 kubelet[1414]: I0710 00:40:37.178501 1414 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:40:37.178581 kubelet[1414]: I0710 00:40:37.178523 1414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:40:37.178941 kubelet[1414]: I0710 00:40:37.178914 1414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:40:37.181503 kubelet[1414]: E0710 00:40:37.180138 1414 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.111\" not found" Jul 10 00:40:37.227418 kubelet[1414]: I0710 00:40:37.227369 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:40:37.228565 kubelet[1414]: I0710 00:40:37.228542 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 10 00:40:37.228681 kubelet[1414]: I0710 00:40:37.228672 1414 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:40:37.228877 kubelet[1414]: I0710 00:40:37.228867 1414 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:40:37.228996 kubelet[1414]: E0710 00:40:37.228982 1414 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 10 00:40:37.279568 kubelet[1414]: I0710 00:40:37.279542 1414 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.111" Jul 10 00:40:37.284965 kubelet[1414]: I0710 00:40:37.284928 1414 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.111" Jul 10 00:40:37.302356 kubelet[1414]: I0710 00:40:37.302326 1414 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 10 00:40:37.302768 env[1214]: time="2025-07-10T00:40:37.302677071Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:40:37.303012 kubelet[1414]: I0710 00:40:37.302864 1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 10 00:40:37.341956 sudo[1312]: pam_unix(sudo:session): session closed for user root Jul 10 00:40:37.345044 sshd[1309]: pam_unix(sshd:session): session closed for user core Jul 10 00:40:37.347353 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:55196.service: Deactivated successfully. Jul 10 00:40:37.348056 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:40:37.348578 systemd-logind[1205]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:40:37.349498 systemd-logind[1205]: Removed session 5. 
Jul 10 00:40:37.931914 kubelet[1414]: I0710 00:40:37.931870 1414 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 10 00:40:37.932229 kubelet[1414]: W0710 00:40:37.932080 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:40:37.932229 kubelet[1414]: W0710 00:40:37.932117 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:40:37.932329 kubelet[1414]: W0710 00:40:37.932299 1414 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 10 00:40:38.028277 kubelet[1414]: E0710 00:40:38.028240 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:38.028277 kubelet[1414]: I0710 00:40:38.028255 1414 apiserver.go:52] "Watching apiserver" Jul 10 00:40:38.036447 systemd[1]: Created slice kubepods-besteffort-podbe680868_d06f_4e95_84b4_d3cbfee9afae.slice. Jul 10 00:40:38.050338 kubelet[1414]: I0710 00:40:38.050301 1414 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:40:38.053243 kubelet[1414]: I0710 00:40:38.053212 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-hostproc\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053306 kubelet[1414]: I0710 00:40:38.053251 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-cgroup\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053306 kubelet[1414]: I0710 00:40:38.053270 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-lib-modules\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053306 kubelet[1414]: I0710 00:40:38.053296 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-hubble-tls\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053388 kubelet[1414]: I0710 00:40:38.053312 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be680868-d06f-4e95-84b4-d3cbfee9afae-kube-proxy\") pod \"kube-proxy-9grlw\" (UID: \"be680868-d06f-4e95-84b4-d3cbfee9afae\") " pod="kube-system/kube-proxy-9grlw" Jul 10 00:40:38.053388 kubelet[1414]: I0710 00:40:38.053329 1414 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlz4v\" (UniqueName: \"kubernetes.io/projected/be680868-d06f-4e95-84b4-d3cbfee9afae-kube-api-access-wlz4v\") pod \"kube-proxy-9grlw\" (UID: \"be680868-d06f-4e95-84b4-d3cbfee9afae\") " pod="kube-system/kube-proxy-9grlw" Jul 10 00:40:38.053388 kubelet[1414]: I0710 00:40:38.053345 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-bpf-maps\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053388 kubelet[1414]: I0710 00:40:38.053376 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-net\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053470 kubelet[1414]: I0710 00:40:38.053391 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cni-path\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053470 kubelet[1414]: I0710 00:40:38.053445 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638a2747-9925-4001-9dad-a33defa35791-cilium-config-path\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053511 kubelet[1414]: I0710 00:40:38.053489 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-run\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053533 kubelet[1414]: I0710 00:40:38.053509 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-xtables-lock\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053533 kubelet[1414]: I0710 00:40:38.053526 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638a2747-9925-4001-9dad-a33defa35791-clustermesh-secrets\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053576 kubelet[1414]: I0710 00:40:38.053542 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-kernel\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053576 kubelet[1414]: I0710 00:40:38.053557 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4hq7\" (UniqueName: 
\"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-kube-api-access-j4hq7\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.053619 kubelet[1414]: I0710 00:40:38.053580 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be680868-d06f-4e95-84b4-d3cbfee9afae-xtables-lock\") pod \"kube-proxy-9grlw\" (UID: \"be680868-d06f-4e95-84b4-d3cbfee9afae\") " pod="kube-system/kube-proxy-9grlw" Jul 10 00:40:38.053641 kubelet[1414]: I0710 00:40:38.053621 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be680868-d06f-4e95-84b4-d3cbfee9afae-lib-modules\") pod \"kube-proxy-9grlw\" (UID: \"be680868-d06f-4e95-84b4-d3cbfee9afae\") " pod="kube-system/kube-proxy-9grlw" Jul 10 00:40:38.053641 kubelet[1414]: I0710 00:40:38.053636 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-etc-cni-netd\") pod \"cilium-wvcdw\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " pod="kube-system/cilium-wvcdw" Jul 10 00:40:38.055569 systemd[1]: Created slice kubepods-burstable-pod638a2747_9925_4001_9dad_a33defa35791.slice. Jul 10 00:40:38.157694 kubelet[1414]: I0710 00:40:38.157650 1414 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 00:40:38.354824 kubelet[1414]: E0710 00:40:38.354060 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:38.355811 env[1214]: time="2025-07-10T00:40:38.355524763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9grlw,Uid:be680868-d06f-4e95-84b4-d3cbfee9afae,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:38.364644 kubelet[1414]: E0710 00:40:38.364608 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:38.365099 env[1214]: time="2025-07-10T00:40:38.365046264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvcdw,Uid:638a2747-9925-4001-9dad-a33defa35791,Namespace:kube-system,Attempt:0,}" Jul 10 00:40:39.028705 kubelet[1414]: E0710 00:40:39.028659 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:39.139769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2780572156.mount: Deactivated successfully. 
Jul 10 00:40:39.147647 env[1214]: time="2025-07-10T00:40:39.147596680Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.152497 env[1214]: time="2025-07-10T00:40:39.152449414Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.153242 env[1214]: time="2025-07-10T00:40:39.153182062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.155206 env[1214]: time="2025-07-10T00:40:39.155153899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.156824 env[1214]: time="2025-07-10T00:40:39.156783075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.158743 env[1214]: time="2025-07-10T00:40:39.158244695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.161584 env[1214]: time="2025-07-10T00:40:39.161546061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.163289 env[1214]: time="2025-07-10T00:40:39.163253084Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:39.196868 env[1214]: time="2025-07-10T00:40:39.196597824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:40:39.196868 env[1214]: time="2025-07-10T00:40:39.196644240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:40:39.196868 env[1214]: time="2025-07-10T00:40:39.196655350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:40:39.199948 env[1214]: time="2025-07-10T00:40:39.198218974Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90 pid=1473 runtime=io.containerd.runc.v2 Jul 10 00:40:39.201017 env[1214]: time="2025-07-10T00:40:39.200857552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:40:39.201017 env[1214]: time="2025-07-10T00:40:39.200891988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:40:39.201017 env[1214]: time="2025-07-10T00:40:39.200902070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:40:39.201305 env[1214]: time="2025-07-10T00:40:39.201242162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2898b2be471001260bbd190f00297ed3d77676b9625705938c12d43ced6df72 pid=1488 runtime=io.containerd.runc.v2 Jul 10 00:40:39.230707 systemd[1]: Started cri-containerd-86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90.scope. Jul 10 00:40:39.245634 systemd[1]: Started cri-containerd-a2898b2be471001260bbd190f00297ed3d77676b9625705938c12d43ced6df72.scope. Jul 10 00:40:39.275805 env[1214]: time="2025-07-10T00:40:39.275754073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9grlw,Uid:be680868-d06f-4e95-84b4-d3cbfee9afae,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2898b2be471001260bbd190f00297ed3d77676b9625705938c12d43ced6df72\"" Jul 10 00:40:39.276775 kubelet[1414]: E0710 00:40:39.276738 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:39.277958 env[1214]: time="2025-07-10T00:40:39.277915052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wvcdw,Uid:638a2747-9925-4001-9dad-a33defa35791,Namespace:kube-system,Attempt:0,} returns sandbox id \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\"" Jul 10 00:40:39.279470 kubelet[1414]: E0710 00:40:39.278650 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:39.279933 env[1214]: time="2025-07-10T00:40:39.279899066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:40:40.029076 kubelet[1414]: E0710 00:40:40.029026 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:40.300734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1467015463.mount: Deactivated successfully. 
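The recurring dns.go:153 "Nameserver limits exceeded" records mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods (upstream Kubernetes keeps only the first three), so the applied line is trimmed to "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that truncation, with the limit stated as an assumption rather than read from this node's configuration:

```python
# Illustration of the truncation behind the "Nameserver limits exceeded"
# warnings above; not the kubelet's actual code.
MAX_NAMESERVERS = 3  # assumed upstream default

def applied_nameservers(resolv_conf_text):
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]

# A resolv.conf listing 1.1.1.1, 1.0.0.1, 8.8.8.8 and any further entries
# would be applied as "1.1.1.1 1.0.0.1 8.8.8.8", matching the records above.
```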
Jul 10 00:40:40.739433 env[1214]: time="2025-07-10T00:40:40.739325249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:40.741399 env[1214]: time="2025-07-10T00:40:40.741361229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:40.743890 env[1214]: time="2025-07-10T00:40:40.743842289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:40.744706 env[1214]: time="2025-07-10T00:40:40.744678204Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:40.745801 env[1214]: time="2025-07-10T00:40:40.745754615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 00:40:40.747256 env[1214]: time="2025-07-10T00:40:40.747223486Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 10 00:40:40.748305 env[1214]: time="2025-07-10T00:40:40.748266400Z" level=info msg="CreateContainer within sandbox \"a2898b2be471001260bbd190f00297ed3d77676b9625705938c12d43ced6df72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:40:40.764685 env[1214]: time="2025-07-10T00:40:40.764638892Z" level=info msg="CreateContainer within sandbox \"a2898b2be471001260bbd190f00297ed3d77676b9625705938c12d43ced6df72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"54d5251f3012663d8be7bf5bc220afec2b324a1a3be8bf802191b7fecc0c668c\"" Jul 10 00:40:40.765481 env[1214]: time="2025-07-10T00:40:40.765449982Z" level=info msg="StartContainer for \"54d5251f3012663d8be7bf5bc220afec2b324a1a3be8bf802191b7fecc0c668c\"" Jul 10 00:40:40.782946 systemd[1]: Started cri-containerd-54d5251f3012663d8be7bf5bc220afec2b324a1a3be8bf802191b7fecc0c668c.scope. 
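The PullImage request for registry.k8s.io/kube-proxy:v1.31.10 (issued at 00:40:39.279) and the "returns image reference" record above (00:40:40.745) bracket the image pull, roughly 1.5 s here. A sketch that pairs such request/return records to estimate pull times, assuming one journal record per line (for example journalctl output) with the time="..." field formatted as shown:

```python
import re
from datetime import datetime

# Illustrative pairing of containerd PullImage requests with their returns.
TS   = re.compile(r'time="(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6})')
PULL = re.compile(r'msg="PullImage \\?"(?P<image>[^"\\]+)\\?"(?P<done> returns image reference)?')

def pull_durations(journal_lines):
    started = {}
    for line in journal_lines:
        pull, ts = PULL.search(line), TS.search(line)
        if not pull or not ts:
            continue
        when = datetime.strptime(ts.group(1), "%Y-%m-%dT%H:%M:%S.%f")
        image = pull.group("image")
        if pull.group("done") and image in started:
            yield image, (when - started.pop(image)).total_seconds()
        elif not pull.group("done"):
            started[image] = when
```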
Jul 10 00:40:40.818276 env[1214]: time="2025-07-10T00:40:40.818224865Z" level=info msg="StartContainer for \"54d5251f3012663d8be7bf5bc220afec2b324a1a3be8bf802191b7fecc0c668c\" returns successfully" Jul 10 00:40:41.029932 kubelet[1414]: E0710 00:40:41.029782 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:41.242080 kubelet[1414]: E0710 00:40:41.241951 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:41.251089 kubelet[1414]: I0710 00:40:41.251029 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9grlw" podStartSLOduration=2.783051278 podStartE2EDuration="4.251007493s" podCreationTimestamp="2025-07-10 00:40:37 +0000 UTC" firstStartedPulling="2025-07-10 00:40:39.278649203 +0000 UTC m=+3.477033439" lastFinishedPulling="2025-07-10 00:40:40.746605418 +0000 UTC m=+4.944989654" observedRunningTime="2025-07-10 00:40:41.25086925 +0000 UTC m=+5.449253486" watchObservedRunningTime="2025-07-10 00:40:41.251007493 +0000 UTC m=+5.449391728" Jul 10 00:40:42.030307 kubelet[1414]: E0710 00:40:42.030267 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:42.243261 kubelet[1414]: E0710 00:40:42.242883 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:43.031002 kubelet[1414]: E0710 00:40:43.030947 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:44.031148 kubelet[1414]: E0710 00:40:44.031086 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:45.032132 kubelet[1414]: E0710 00:40:45.032077 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:46.033261 kubelet[1414]: E0710 00:40:46.033221 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:47.034306 kubelet[1414]: E0710 00:40:47.034248 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:48.035162 kubelet[1414]: E0710 00:40:48.035117 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:49.036681 kubelet[1414]: E0710 00:40:49.036648 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:50.037371 kubelet[1414]: E0710 00:40:50.037329 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:50.247389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1512464570.mount: Deactivated successfully. 
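The pod_startup_latency_tracker record for kube-proxy-9grlw above is internally consistent with the SLO duration being the end-to-end duration minus the image-pull window; a quick check using the monotonic m=+ offsets from that record:

```python
# Consistency check of the startup-latency record above, assuming
# podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
first_started_pulling = 3.477033439   # m=+ offset from the record
last_finished_pulling = 4.944989654   # m=+ offset from the record
e2e_duration          = 4.251007493   # podStartE2EDuration in seconds

pull_time = last_finished_pulling - first_started_pulling   # ~1.468 s
slo_duration = e2e_duration - pull_time
print(round(slo_duration, 9))   # 2.783051278, matching podStartSLOduration
```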
Jul 10 00:40:51.038142 kubelet[1414]: E0710 00:40:51.038111 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:52.039254 kubelet[1414]: E0710 00:40:52.039165 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:52.481378 env[1214]: time="2025-07-10T00:40:52.481274399Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:52.482921 env[1214]: time="2025-07-10T00:40:52.482881828Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:52.484474 env[1214]: time="2025-07-10T00:40:52.484442115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:40:52.485151 env[1214]: time="2025-07-10T00:40:52.485117372Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 10 00:40:52.487450 env[1214]: time="2025-07-10T00:40:52.487420259Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:40:52.496574 env[1214]: time="2025-07-10T00:40:52.496536838Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\"" Jul 10 00:40:52.497191 env[1214]: time="2025-07-10T00:40:52.497165750Z" level=info msg="StartContainer for \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\"" Jul 10 00:40:52.518616 systemd[1]: Started cri-containerd-f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b.scope. Jul 10 00:40:52.554139 env[1214]: time="2025-07-10T00:40:52.554086598Z" level=info msg="StartContainer for \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\" returns successfully" Jul 10 00:40:52.600440 systemd[1]: cri-containerd-f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b.scope: Deactivated successfully. 
Jul 10 00:40:52.748757 env[1214]: time="2025-07-10T00:40:52.748239703Z" level=info msg="shim disconnected" id=f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b Jul 10 00:40:52.748757 env[1214]: time="2025-07-10T00:40:52.748288842Z" level=warning msg="cleaning up after shim disconnected" id=f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b namespace=k8s.io Jul 10 00:40:52.748757 env[1214]: time="2025-07-10T00:40:52.748301057Z" level=info msg="cleaning up dead shim" Jul 10 00:40:52.754414 env[1214]: time="2025-07-10T00:40:52.754379535Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:40:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1766 runtime=io.containerd.runc.v2\n" Jul 10 00:40:53.040423 kubelet[1414]: E0710 00:40:53.040029 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:53.260852 kubelet[1414]: E0710 00:40:53.260683 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:53.262464 env[1214]: time="2025-07-10T00:40:53.262426818Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:40:53.275479 env[1214]: time="2025-07-10T00:40:53.275436403Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\"" Jul 10 00:40:53.276144 env[1214]: time="2025-07-10T00:40:53.276114469Z" level=info msg="StartContainer for \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\"" Jul 10 00:40:53.289096 systemd[1]: Started cri-containerd-1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2.scope. Jul 10 00:40:53.326439 env[1214]: time="2025-07-10T00:40:53.326362615Z" level=info msg="StartContainer for \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\" returns successfully" Jul 10 00:40:53.350471 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:40:53.350853 systemd[1]: Stopped systemd-sysctl.service. Jul 10 00:40:53.351135 systemd[1]: Stopping systemd-sysctl.service... Jul 10 00:40:53.353027 systemd[1]: Starting systemd-sysctl.service... Jul 10 00:40:53.354193 systemd[1]: cri-containerd-1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2.scope: Deactivated successfully. Jul 10 00:40:53.359841 systemd[1]: Finished systemd-sysctl.service. 
Jul 10 00:40:53.373346 env[1214]: time="2025-07-10T00:40:53.373297853Z" level=info msg="shim disconnected" id=1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2 Jul 10 00:40:53.373561 env[1214]: time="2025-07-10T00:40:53.373541896Z" level=warning msg="cleaning up after shim disconnected" id=1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2 namespace=k8s.io Jul 10 00:40:53.373620 env[1214]: time="2025-07-10T00:40:53.373608936Z" level=info msg="cleaning up dead shim" Jul 10 00:40:53.380318 env[1214]: time="2025-07-10T00:40:53.380275918Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1831 runtime=io.containerd.runc.v2\n" Jul 10 00:40:53.493880 systemd[1]: run-containerd-runc-k8s.io-f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b-runc.9hSyjd.mount: Deactivated successfully. Jul 10 00:40:53.493976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b-rootfs.mount: Deactivated successfully. Jul 10 00:40:54.040843 kubelet[1414]: E0710 00:40:54.040801 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:54.263864 kubelet[1414]: E0710 00:40:54.263636 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:54.265131 env[1214]: time="2025-07-10T00:40:54.265080484Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:40:54.277082 env[1214]: time="2025-07-10T00:40:54.277029886Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\"" Jul 10 00:40:54.277795 env[1214]: time="2025-07-10T00:40:54.277747282Z" level=info msg="StartContainer for \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\"" Jul 10 00:40:54.297023 systemd[1]: Started cri-containerd-bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912.scope. Jul 10 00:40:54.329223 env[1214]: time="2025-07-10T00:40:54.327933949Z" level=info msg="StartContainer for \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\" returns successfully" Jul 10 00:40:54.338033 systemd[1]: cri-containerd-bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912.scope: Deactivated successfully. 
Jul 10 00:40:54.361123 env[1214]: time="2025-07-10T00:40:54.361078870Z" level=info msg="shim disconnected" id=bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912 Jul 10 00:40:54.361123 env[1214]: time="2025-07-10T00:40:54.361120524Z" level=warning msg="cleaning up after shim disconnected" id=bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912 namespace=k8s.io Jul 10 00:40:54.361123 env[1214]: time="2025-07-10T00:40:54.361130349Z" level=info msg="cleaning up dead shim" Jul 10 00:40:54.367824 env[1214]: time="2025-07-10T00:40:54.367783168Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:40:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1888 runtime=io.containerd.runc.v2\n" Jul 10 00:40:54.493614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912-rootfs.mount: Deactivated successfully. Jul 10 00:40:55.041821 kubelet[1414]: E0710 00:40:55.041782 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:55.267331 kubelet[1414]: E0710 00:40:55.267287 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:55.268945 env[1214]: time="2025-07-10T00:40:55.268901325Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:40:55.278761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176279979.mount: Deactivated successfully. Jul 10 00:40:55.283438 env[1214]: time="2025-07-10T00:40:55.283392228Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\"" Jul 10 00:40:55.284527 env[1214]: time="2025-07-10T00:40:55.284487287Z" level=info msg="StartContainer for \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\"" Jul 10 00:40:55.298215 systemd[1]: Started cri-containerd-4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c.scope. Jul 10 00:40:55.324107 env[1214]: time="2025-07-10T00:40:55.324054826Z" level=info msg="StartContainer for \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\" returns successfully" Jul 10 00:40:55.324981 systemd[1]: cri-containerd-4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c.scope: Deactivated successfully. 
Jul 10 00:40:55.343307 env[1214]: time="2025-07-10T00:40:55.343242212Z" level=info msg="shim disconnected" id=4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c Jul 10 00:40:55.343307 env[1214]: time="2025-07-10T00:40:55.343295579Z" level=warning msg="cleaning up after shim disconnected" id=4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c namespace=k8s.io Jul 10 00:40:55.343307 env[1214]: time="2025-07-10T00:40:55.343307762Z" level=info msg="cleaning up dead shim" Jul 10 00:40:55.349777 env[1214]: time="2025-07-10T00:40:55.349742065Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1944 runtime=io.containerd.runc.v2\n" Jul 10 00:40:56.042490 kubelet[1414]: E0710 00:40:56.042445 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:56.270956 kubelet[1414]: E0710 00:40:56.270915 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:56.273023 env[1214]: time="2025-07-10T00:40:56.272985081Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:40:56.288369 env[1214]: time="2025-07-10T00:40:56.288309390Z" level=info msg="CreateContainer within sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\"" Jul 10 00:40:56.288998 env[1214]: time="2025-07-10T00:40:56.288974352Z" level=info msg="StartContainer for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\"" Jul 10 00:40:56.307293 systemd[1]: Started cri-containerd-ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b.scope. Jul 10 00:40:56.367068 env[1214]: time="2025-07-10T00:40:56.367017550Z" level=info msg="StartContainer for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" returns successfully" Jul 10 00:40:56.523371 kubelet[1414]: I0710 00:40:56.523294 1414 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:40:56.637236 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Jul 10 00:40:56.881251 kernel: Initializing XFRM netlink socket Jul 10 00:40:56.883226 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
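The preceding records show the cilium-wvcdw containers being created one after another inside the same sandbox: mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run to completion (hence the repeated "scope: Deactivated" and "shim disconnected" cleanups), and only then does the long-running cilium-agent start. A sketch, assuming the &ContainerMetadata formatting shown here, that recovers that per-sandbox order:

```python
import re

# Illustrative only: group CreateContainer *requests* by sandbox id.
# For sandbox 86b019b2... this yields mount-cgroup, apply-sysctl-overwrites,
# mount-bpf-fs, clean-cilium-state, cilium-agent, in that order.
CREATE = re.compile(
    r'CreateContainer within sandbox \\?"(?P<sandbox>[0-9a-f]{16,})\\?" '
    r'for container &ContainerMetadata\{Name:(?P<name>[^,]+),'
)

def container_order(journal_text):
    order = {}
    for m in CREATE.finditer(journal_text):
        order.setdefault(m.group("sandbox"), []).append(m.group("name"))
    return order
```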
Jul 10 00:40:57.026774 kubelet[1414]: E0710 00:40:57.026727 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:57.042713 kubelet[1414]: E0710 00:40:57.042673 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:57.275165 kubelet[1414]: E0710 00:40:57.274918 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:57.290672 kubelet[1414]: I0710 00:40:57.290610 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wvcdw" podStartSLOduration=7.084604914 podStartE2EDuration="20.290569399s" podCreationTimestamp="2025-07-10 00:40:37 +0000 UTC" firstStartedPulling="2025-07-10 00:40:39.280145615 +0000 UTC m=+3.478529851" lastFinishedPulling="2025-07-10 00:40:52.4861101 +0000 UTC m=+16.684494336" observedRunningTime="2025-07-10 00:40:57.290115715 +0000 UTC m=+21.488499951" watchObservedRunningTime="2025-07-10 00:40:57.290569399 +0000 UTC m=+21.488953635" Jul 10 00:40:58.043100 kubelet[1414]: E0710 00:40:58.043062 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:58.277011 kubelet[1414]: E0710 00:40:58.276985 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:58.277911 systemd[1]: Created slice kubepods-besteffort-pod41eb4e9c_701e_4063_b35c_01fa8fda62a8.slice. Jul 10 00:40:58.369126 kubelet[1414]: I0710 00:40:58.368753 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wvg7\" (UniqueName: \"kubernetes.io/projected/41eb4e9c-701e-4063-b35c-01fa8fda62a8-kube-api-access-6wvg7\") pod \"nginx-deployment-8587fbcb89-z2q6g\" (UID: \"41eb4e9c-701e-4063-b35c-01fa8fda62a8\") " pod="default/nginx-deployment-8587fbcb89-z2q6g" Jul 10 00:40:58.505449 systemd-networkd[1051]: cilium_host: Link UP Jul 10 00:40:58.505911 systemd-networkd[1051]: cilium_net: Link UP Jul 10 00:40:58.506604 systemd-networkd[1051]: cilium_net: Gained carrier Jul 10 00:40:58.507176 systemd-networkd[1051]: cilium_host: Gained carrier Jul 10 00:40:58.507252 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Jul 10 00:40:58.507278 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Jul 10 00:40:58.581215 env[1214]: time="2025-07-10T00:40:58.580845810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-z2q6g,Uid:41eb4e9c-701e-4063-b35c-01fa8fda62a8,Namespace:default,Attempt:0,}" Jul 10 00:40:58.582469 systemd-networkd[1051]: cilium_vxlan: Link UP Jul 10 00:40:58.582475 systemd-networkd[1051]: cilium_vxlan: Gained carrier Jul 10 00:40:58.869785 systemd-networkd[1051]: cilium_host: Gained IPv6LL Jul 10 00:40:58.886255 kernel: NET: Registered PF_ALG protocol family Jul 10 00:40:59.043395 kubelet[1414]: E0710 00:40:59.043352 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:40:59.181344 systemd-networkd[1051]: cilium_net: Gained IPv6LL Jul 10 00:40:59.278139 kubelet[1414]: E0710 00:40:59.278107 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:40:59.440595 systemd-networkd[1051]: lxc_health: Link UP Jul 10 00:40:59.449893 systemd-networkd[1051]: lxc_health: Gained carrier Jul 10 00:40:59.450310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:40:59.626512 systemd-networkd[1051]: lxcd7bf891314de: Link UP Jul 10 00:40:59.638265 kernel: eth0: renamed from tmpa4888 Jul 10 00:40:59.645696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:40:59.645813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd7bf891314de: link becomes ready Jul 10 00:40:59.645857 systemd-networkd[1051]: lxcd7bf891314de: Gained carrier Jul 10 00:41:00.044168 kubelet[1414]: E0710 00:41:00.044118 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:00.279509 kubelet[1414]: E0710 00:41:00.279279 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:00.589323 systemd-networkd[1051]: cilium_vxlan: Gained IPv6LL Jul 10 00:41:01.045007 kubelet[1414]: E0710 00:41:01.044974 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:01.165309 systemd-networkd[1051]: lxc_health: Gained IPv6LL Jul 10 00:41:01.282005 kubelet[1414]: E0710 00:41:01.281966 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:01.485338 systemd-networkd[1051]: lxcd7bf891314de: Gained IPv6LL Jul 10 00:41:02.045994 kubelet[1414]: E0710 00:41:02.045950 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:02.282124 kubelet[1414]: E0710 00:41:02.282089 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:03.047091 kubelet[1414]: E0710 00:41:03.047051 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:03.189784 env[1214]: time="2025-07-10T00:41:03.189713223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:03.190094 env[1214]: time="2025-07-10T00:41:03.189758750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:03.190094 env[1214]: time="2025-07-10T00:41:03.189769191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:03.190243 env[1214]: time="2025-07-10T00:41:03.190195819Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156 pid=2498 runtime=io.containerd.runc.v2 Jul 10 00:41:03.203721 systemd[1]: run-containerd-runc-k8s.io-a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156-runc.JLHpe6.mount: Deactivated successfully. 
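The systemd-networkd records above trace the Cilium datapath coming up: cilium_host and cilium_net first, then cilium_vxlan, then lxc_health and the per-pod lxcd7bf891314de device, each gaining carrier and an IPv6 link-local address. A small sketch for pulling that event sequence out of journal text like this:

```python
import re

# Illustrative only: list systemd-networkd link events in the order they
# appear (cilium_host, cilium_net, cilium_vxlan, lxc_health, lxc* ...).
EVENT = re.compile(
    r'systemd-networkd\[\d+\]: (?P<ifname>[\w.-]+): '
    r'(?P<event>Link UP|Link DOWN|Gained carrier|Lost carrier|Gained IPv6LL)'
)

def link_events(journal_text):
    return [(m.group("ifname"), m.group("event"))
            for m in EVENT.finditer(journal_text)]
```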
Jul 10 00:41:03.206438 systemd[1]: Started cri-containerd-a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156.scope. Jul 10 00:41:03.267328 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:41:03.282106 env[1214]: time="2025-07-10T00:41:03.281865348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-z2q6g,Uid:41eb4e9c-701e-4063-b35c-01fa8fda62a8,Namespace:default,Attempt:0,} returns sandbox id \"a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156\"" Jul 10 00:41:03.283627 env[1214]: time="2025-07-10T00:41:03.283553855Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:41:04.047857 kubelet[1414]: E0710 00:41:04.047811 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:05.047968 kubelet[1414]: E0710 00:41:05.047922 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:05.237492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3281254676.mount: Deactivated successfully. Jul 10 00:41:06.048621 kubelet[1414]: E0710 00:41:06.048575 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:06.454939 env[1214]: time="2025-07-10T00:41:06.454638387Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:06.456159 env[1214]: time="2025-07-10T00:41:06.456113264Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:06.458691 env[1214]: time="2025-07-10T00:41:06.458653564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:06.462978 env[1214]: time="2025-07-10T00:41:06.462946377Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:06.464489 env[1214]: time="2025-07-10T00:41:06.464453178Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:41:06.466284 env[1214]: time="2025-07-10T00:41:06.466252178Z" level=info msg="CreateContainer within sandbox \"a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 10 00:41:06.474871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202688595.mount: Deactivated successfully. 
Jul 10 00:41:06.475985 env[1214]: time="2025-07-10T00:41:06.475949113Z" level=info msg="CreateContainer within sandbox \"a48883edbf354fa018bb536ca258c97ca53c73e4e5adb5d6c30c8bfe7b855156\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ad6a108c40bd5c0fe7133fd37c02f76e760ee8185a7de2c4d2d42d0fd4cdf847\"" Jul 10 00:41:06.476483 env[1214]: time="2025-07-10T00:41:06.476456020Z" level=info msg="StartContainer for \"ad6a108c40bd5c0fe7133fd37c02f76e760ee8185a7de2c4d2d42d0fd4cdf847\"" Jul 10 00:41:06.494359 systemd[1]: run-containerd-runc-k8s.io-ad6a108c40bd5c0fe7133fd37c02f76e760ee8185a7de2c4d2d42d0fd4cdf847-runc.IpLTul.mount: Deactivated successfully. Jul 10 00:41:06.495765 systemd[1]: Started cri-containerd-ad6a108c40bd5c0fe7133fd37c02f76e760ee8185a7de2c4d2d42d0fd4cdf847.scope. Jul 10 00:41:06.530846 env[1214]: time="2025-07-10T00:41:06.530798916Z" level=info msg="StartContainer for \"ad6a108c40bd5c0fe7133fd37c02f76e760ee8185a7de2c4d2d42d0fd4cdf847\" returns successfully" Jul 10 00:41:07.049330 kubelet[1414]: E0710 00:41:07.049284 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:07.299227 kubelet[1414]: I0710 00:41:07.299141 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-z2q6g" podStartSLOduration=6.117237489 podStartE2EDuration="9.299124351s" podCreationTimestamp="2025-07-10 00:40:58 +0000 UTC" firstStartedPulling="2025-07-10 00:41:03.283226684 +0000 UTC m=+27.481610920" lastFinishedPulling="2025-07-10 00:41:06.465113586 +0000 UTC m=+30.663497782" observedRunningTime="2025-07-10 00:41:07.298997215 +0000 UTC m=+31.497381451" watchObservedRunningTime="2025-07-10 00:41:07.299124351 +0000 UTC m=+31.497508587" Jul 10 00:41:08.050310 kubelet[1414]: E0710 00:41:08.050267 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:09.050696 kubelet[1414]: E0710 00:41:09.050658 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:10.051290 kubelet[1414]: E0710 00:41:10.051251 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:10.402559 systemd[1]: Created slice kubepods-besteffort-podb18364b9_9b0a_4e72_917a_3d804664e26b.slice. 
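The "Created slice kubepods-besteffort-pod....slice" records in this journal follow a simple naming scheme under the systemd cgroup driver: the pod's QoS class plus its UID with dashes replaced by underscores. A sketch of that mapping, checked against the nfs-server-provisioner-0 pod whose slice is created a few records below:

```python
def pod_slice(pod_uid: str, qos: str = "besteffort") -> str:
    # Mirrors the slice names seen in this journal (systemd cgroup driver);
    # the cilium pod lands in the "burstable" variant instead.
    return f"kubepods-{qos}-pod{pod_uid.replace('-', '_')}.slice"

assert pod_slice("b18364b9-9b0a-4e72-917a-3d804664e26b") == \
    "kubepods-besteffort-podb18364b9_9b0a_4e72_917a_3d804664e26b.slice"
```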
Jul 10 00:41:10.434686 kubelet[1414]: I0710 00:41:10.434647 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/b18364b9-9b0a-4e72-917a-3d804664e26b-data\") pod \"nfs-server-provisioner-0\" (UID: \"b18364b9-9b0a-4e72-917a-3d804664e26b\") " pod="default/nfs-server-provisioner-0" Jul 10 00:41:10.434900 kubelet[1414]: I0710 00:41:10.434874 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5hn\" (UniqueName: \"kubernetes.io/projected/b18364b9-9b0a-4e72-917a-3d804664e26b-kube-api-access-kn5hn\") pod \"nfs-server-provisioner-0\" (UID: \"b18364b9-9b0a-4e72-917a-3d804664e26b\") " pod="default/nfs-server-provisioner-0" Jul 10 00:41:10.705720 env[1214]: time="2025-07-10T00:41:10.705379346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b18364b9-9b0a-4e72-917a-3d804664e26b,Namespace:default,Attempt:0,}" Jul 10 00:41:10.733037 systemd-networkd[1051]: lxc157c190fb36b: Link UP Jul 10 00:41:10.743284 kernel: eth0: renamed from tmp77887 Jul 10 00:41:10.751229 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:41:10.751315 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc157c190fb36b: link becomes ready Jul 10 00:41:10.751369 systemd-networkd[1051]: lxc157c190fb36b: Gained carrier Jul 10 00:41:10.926096 env[1214]: time="2025-07-10T00:41:10.926028558Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:10.926266 env[1214]: time="2025-07-10T00:41:10.926069162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:10.926266 env[1214]: time="2025-07-10T00:41:10.926094085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:10.926374 env[1214]: time="2025-07-10T00:41:10.926308828Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/778874c306606d5b3e6cc3fe9c3425e76e4bc30a85caeef91709984c3db16dab pid=2631 runtime=io.containerd.runc.v2 Jul 10 00:41:10.938099 systemd[1]: Started cri-containerd-778874c306606d5b3e6cc3fe9c3425e76e4bc30a85caeef91709984c3db16dab.scope. 
Jul 10 00:41:10.962449 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:41:10.980017 env[1214]: time="2025-07-10T00:41:10.979977431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:b18364b9-9b0a-4e72-917a-3d804664e26b,Namespace:default,Attempt:0,} returns sandbox id \"778874c306606d5b3e6cc3fe9c3425e76e4bc30a85caeef91709984c3db16dab\"" Jul 10 00:41:10.985120 env[1214]: time="2025-07-10T00:41:10.985082379Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 10 00:41:11.052650 kubelet[1414]: E0710 00:41:11.052609 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:12.053494 kubelet[1414]: E0710 00:41:12.053443 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:12.173427 systemd-networkd[1051]: lxc157c190fb36b: Gained IPv6LL Jul 10 00:41:12.958363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount254286077.mount: Deactivated successfully. Jul 10 00:41:13.054399 kubelet[1414]: E0710 00:41:13.054353 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:14.054866 kubelet[1414]: E0710 00:41:14.054818 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:14.745856 env[1214]: time="2025-07-10T00:41:14.745809134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:14.747290 env[1214]: time="2025-07-10T00:41:14.747254540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:14.748876 env[1214]: time="2025-07-10T00:41:14.748846718Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:14.750556 env[1214]: time="2025-07-10T00:41:14.750531625Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:14.752058 env[1214]: time="2025-07-10T00:41:14.752020675Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jul 10 00:41:14.753886 env[1214]: time="2025-07-10T00:41:14.753855035Z" level=info msg="CreateContainer within sandbox \"778874c306606d5b3e6cc3fe9c3425e76e4bc30a85caeef91709984c3db16dab\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 10 00:41:14.764698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2732189448.mount: Deactivated successfully. 
Jul 10 00:41:14.768453 env[1214]: time="2025-07-10T00:41:14.768417704Z" level=info msg="CreateContainer within sandbox \"778874c306606d5b3e6cc3fe9c3425e76e4bc30a85caeef91709984c3db16dab\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"212e1772a88e726d738b8e2d3628614ea5c5c1535dd074562d707cd4b60e1880\"" Jul 10 00:41:14.769144 env[1214]: time="2025-07-10T00:41:14.769108165Z" level=info msg="StartContainer for \"212e1772a88e726d738b8e2d3628614ea5c5c1535dd074562d707cd4b60e1880\"" Jul 10 00:41:14.783519 systemd[1]: run-containerd-runc-k8s.io-212e1772a88e726d738b8e2d3628614ea5c5c1535dd074562d707cd4b60e1880-runc.dYkbar.mount: Deactivated successfully. Jul 10 00:41:14.785911 systemd[1]: Started cri-containerd-212e1772a88e726d738b8e2d3628614ea5c5c1535dd074562d707cd4b60e1880.scope. Jul 10 00:41:14.833057 env[1214]: time="2025-07-10T00:41:14.832167422Z" level=info msg="StartContainer for \"212e1772a88e726d738b8e2d3628614ea5c5c1535dd074562d707cd4b60e1880\" returns successfully" Jul 10 00:41:15.006371 update_engine[1211]: I0710 00:41:15.006246 1211 update_attempter.cc:509] Updating boot flags... Jul 10 00:41:15.055960 kubelet[1414]: E0710 00:41:15.055917 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:15.317771 kubelet[1414]: I0710 00:41:15.317699 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.549613505 podStartE2EDuration="5.31768295s" podCreationTimestamp="2025-07-10 00:41:10 +0000 UTC" firstStartedPulling="2025-07-10 00:41:10.984599687 +0000 UTC m=+35.182983923" lastFinishedPulling="2025-07-10 00:41:14.752669172 +0000 UTC m=+38.951053368" observedRunningTime="2025-07-10 00:41:15.317041817 +0000 UTC m=+39.515426013" watchObservedRunningTime="2025-07-10 00:41:15.31768295 +0000 UTC m=+39.516067186" Jul 10 00:41:16.056999 kubelet[1414]: E0710 00:41:16.056949 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:17.027494 kubelet[1414]: E0710 00:41:17.027454 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:17.057996 kubelet[1414]: E0710 00:41:17.057959 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:18.058526 kubelet[1414]: E0710 00:41:18.058469 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:19.058901 kubelet[1414]: E0710 00:41:19.058853 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:20.059751 kubelet[1414]: E0710 00:41:20.059700 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:21.060880 kubelet[1414]: E0710 00:41:21.060821 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:22.061305 kubelet[1414]: E0710 00:41:22.061262 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:23.062587 kubelet[1414]: E0710 00:41:23.062551 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 
10 00:41:24.063428 kubelet[1414]: E0710 00:41:24.063382 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:24.487120 systemd[1]: Created slice kubepods-besteffort-pod6192bdf4_5059_46fb_ac42_1ef58d3cb0fc.slice. Jul 10 00:41:24.516209 kubelet[1414]: I0710 00:41:24.516160 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3c77176a-b9f8-4b6d-b620-47b51c4f7382\" (UniqueName: \"kubernetes.io/nfs/6192bdf4-5059-46fb-ac42-1ef58d3cb0fc-pvc-3c77176a-b9f8-4b6d-b620-47b51c4f7382\") pod \"test-pod-1\" (UID: \"6192bdf4-5059-46fb-ac42-1ef58d3cb0fc\") " pod="default/test-pod-1" Jul 10 00:41:24.516545 kubelet[1414]: I0710 00:41:24.516529 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42flj\" (UniqueName: \"kubernetes.io/projected/6192bdf4-5059-46fb-ac42-1ef58d3cb0fc-kube-api-access-42flj\") pod \"test-pod-1\" (UID: \"6192bdf4-5059-46fb-ac42-1ef58d3cb0fc\") " pod="default/test-pod-1" Jul 10 00:41:24.644233 kernel: FS-Cache: Loaded Jul 10 00:41:24.672518 kernel: RPC: Registered named UNIX socket transport module. Jul 10 00:41:24.672617 kernel: RPC: Registered udp transport module. Jul 10 00:41:24.672637 kernel: RPC: Registered tcp transport module. Jul 10 00:41:24.675912 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 10 00:41:24.721255 kernel: FS-Cache: Netfs 'nfs' registered for caching Jul 10 00:41:24.866238 kernel: NFS: Registering the id_resolver key type Jul 10 00:41:24.866369 kernel: Key type id_resolver registered Jul 10 00:41:24.866392 kernel: Key type id_legacy registered Jul 10 00:41:24.910484 nfsidmap[2768]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:41:24.922031 nfsidmap[2771]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jul 10 00:41:25.063753 kubelet[1414]: E0710 00:41:25.063706 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:25.090861 env[1214]: time="2025-07-10T00:41:25.090477365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6192bdf4-5059-46fb-ac42-1ef58d3cb0fc,Namespace:default,Attempt:0,}" Jul 10 00:41:25.121491 systemd-networkd[1051]: lxc04b5c52e4d6e: Link UP Jul 10 00:41:25.136234 kernel: eth0: renamed from tmp55b71 Jul 10 00:41:25.147296 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 00:41:25.147388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc04b5c52e4d6e: link becomes ready Jul 10 00:41:25.147386 systemd-networkd[1051]: lxc04b5c52e4d6e: Gained carrier Jul 10 00:41:25.328355 env[1214]: time="2025-07-10T00:41:25.328281967Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:25.328673 env[1214]: time="2025-07-10T00:41:25.328329649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:25.328673 env[1214]: time="2025-07-10T00:41:25.328341850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:25.328797 env[1214]: time="2025-07-10T00:41:25.328699228Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/55b71edb0a9a8240e9c8ca5941e478d3942f6c3fe399c2d53868f4c8ce6726b5 pid=2807 runtime=io.containerd.runc.v2 Jul 10 00:41:25.338890 systemd[1]: Started cri-containerd-55b71edb0a9a8240e9c8ca5941e478d3942f6c3fe399c2d53868f4c8ce6726b5.scope. Jul 10 00:41:25.394094 systemd-resolved[1156]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:41:25.410625 env[1214]: time="2025-07-10T00:41:25.410564351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6192bdf4-5059-46fb-ac42-1ef58d3cb0fc,Namespace:default,Attempt:0,} returns sandbox id \"55b71edb0a9a8240e9c8ca5941e478d3942f6c3fe399c2d53868f4c8ce6726b5\"" Jul 10 00:41:25.412325 env[1214]: time="2025-07-10T00:41:25.411981906Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 10 00:41:25.648481 env[1214]: time="2025-07-10T00:41:25.648348312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:25.651299 env[1214]: time="2025-07-10T00:41:25.651259864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:25.652716 env[1214]: time="2025-07-10T00:41:25.652689699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:25.655255 env[1214]: time="2025-07-10T00:41:25.655225391Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:30bb68e656e0665bce700e67d2756f68bdca3345fa1099a32bfdb8febcf621cd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:25.655914 env[1214]: time="2025-07-10T00:41:25.655890386Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:cd8b38a4e22587134e82fff3512a99b84799274d989a1ec20f58c7f8c89b8511\"" Jul 10 00:41:25.658406 env[1214]: time="2025-07-10T00:41:25.658369716Z" level=info msg="CreateContainer within sandbox \"55b71edb0a9a8240e9c8ca5941e478d3942f6c3fe399c2d53868f4c8ce6726b5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 10 00:41:25.672546 env[1214]: time="2025-07-10T00:41:25.672498695Z" level=info msg="CreateContainer within sandbox \"55b71edb0a9a8240e9c8ca5941e478d3942f6c3fe399c2d53868f4c8ce6726b5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0eca3881cb411f571584015df87b9db905abc969cdcaddc40492c5d21ccedabc\"" Jul 10 00:41:25.673087 env[1214]: time="2025-07-10T00:41:25.673033403Z" level=info msg="StartContainer for \"0eca3881cb411f571584015df87b9db905abc969cdcaddc40492c5d21ccedabc\"" Jul 10 00:41:25.691972 systemd[1]: Started cri-containerd-0eca3881cb411f571584015df87b9db905abc969cdcaddc40492c5d21ccedabc.scope. 
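The nfsidmap records earlier in this section (before the test-pod-1 sandbox came up) report that NFSv4 owner strings of the form user@domain could not be mapped because their domain part, the provisioner's cluster DNS name, does not match the client's local NFSv4 domain 'localdomain'; in that case the id mapper falls back to an anonymous user. A toy illustration of that domain check, not the actual libnfsidmap logic:

```python
# Illustration of why 'root@nfs-server-provisioner.default.svc.cluster.local'
# does not map: the owner's domain must equal the client's NFSv4 domain.
LOCAL_DOMAIN = "localdomain"   # the domain named in the log messages

def maps_locally(owner: str) -> bool:
    user, _, domain = owner.partition("@")
    return domain == LOCAL_DOMAIN

print(maps_locally("root@nfs-server-provisioner.default.svc.cluster.local"))  # False
```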
Jul 10 00:41:25.724184 env[1214]: time="2025-07-10T00:41:25.724129796Z" level=info msg="StartContainer for \"0eca3881cb411f571584015df87b9db905abc969cdcaddc40492c5d21ccedabc\" returns successfully" Jul 10 00:41:26.063927 kubelet[1414]: E0710 00:41:26.063881 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:26.336465 kubelet[1414]: I0710 00:41:26.336345 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.091104089 podStartE2EDuration="16.336327958s" podCreationTimestamp="2025-07-10 00:41:10 +0000 UTC" firstStartedPulling="2025-07-10 00:41:25.411771575 +0000 UTC m=+49.610155771" lastFinishedPulling="2025-07-10 00:41:25.656995444 +0000 UTC m=+49.855379640" observedRunningTime="2025-07-10 00:41:26.336227873 +0000 UTC m=+50.534612109" watchObservedRunningTime="2025-07-10 00:41:26.336327958 +0000 UTC m=+50.534712194" Jul 10 00:41:27.064251 kubelet[1414]: E0710 00:41:27.064214 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:27.149434 systemd-networkd[1051]: lxc04b5c52e4d6e: Gained IPv6LL Jul 10 00:41:28.065085 kubelet[1414]: E0710 00:41:28.065040 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:29.066023 kubelet[1414]: E0710 00:41:29.065974 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:30.066948 kubelet[1414]: E0710 00:41:30.066901 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:31.067288 kubelet[1414]: E0710 00:41:31.067238 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:32.067752 kubelet[1414]: E0710 00:41:32.067695 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:33.068385 kubelet[1414]: E0710 00:41:33.068342 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:33.933118 env[1214]: time="2025-07-10T00:41:33.933041101Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:41:33.939093 env[1214]: time="2025-07-10T00:41:33.939047014Z" level=info msg="StopContainer for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" with timeout 2 (s)" Jul 10 00:41:33.939396 env[1214]: time="2025-07-10T00:41:33.939364066Z" level=info msg="Stop container \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" with signal terminated" Jul 10 00:41:33.944894 systemd-networkd[1051]: lxc_health: Link DOWN Jul 10 00:41:33.944901 systemd-networkd[1051]: lxc_health: Lost carrier Jul 10 00:41:33.978604 systemd[1]: cri-containerd-ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b.scope: Deactivated successfully. Jul 10 00:41:33.978941 systemd[1]: cri-containerd-ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b.scope: Consumed 6.453s CPU time. 
Jul 10 00:41:33.998281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b-rootfs.mount: Deactivated successfully. Jul 10 00:41:34.007643 env[1214]: time="2025-07-10T00:41:34.007584259Z" level=info msg="shim disconnected" id=ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b Jul 10 00:41:34.007643 env[1214]: time="2025-07-10T00:41:34.007636421Z" level=warning msg="cleaning up after shim disconnected" id=ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b namespace=k8s.io Jul 10 00:41:34.007643 env[1214]: time="2025-07-10T00:41:34.007647502Z" level=info msg="cleaning up dead shim" Jul 10 00:41:34.014989 env[1214]: time="2025-07-10T00:41:34.014931375Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2938 runtime=io.containerd.runc.v2\n" Jul 10 00:41:34.017680 env[1214]: time="2025-07-10T00:41:34.017634236Z" level=info msg="StopContainer for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" returns successfully" Jul 10 00:41:34.018349 env[1214]: time="2025-07-10T00:41:34.018309501Z" level=info msg="StopPodSandbox for \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\"" Jul 10 00:41:34.018411 env[1214]: time="2025-07-10T00:41:34.018374904Z" level=info msg="Container to stop \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:41:34.018411 env[1214]: time="2025-07-10T00:41:34.018391344Z" level=info msg="Container to stop \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:41:34.018411 env[1214]: time="2025-07-10T00:41:34.018402825Z" level=info msg="Container to stop \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:41:34.018481 env[1214]: time="2025-07-10T00:41:34.018413505Z" level=info msg="Container to stop \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:41:34.018481 env[1214]: time="2025-07-10T00:41:34.018425026Z" level=info msg="Container to stop \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 00:41:34.020068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90-shm.mount: Deactivated successfully. Jul 10 00:41:34.025642 systemd[1]: cri-containerd-86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90.scope: Deactivated successfully. Jul 10 00:41:34.040043 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90-rootfs.mount: Deactivated successfully. 
Jul 10 00:41:34.042813 env[1214]: time="2025-07-10T00:41:34.042771578Z" level=info msg="shim disconnected" id=86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90 Jul 10 00:41:34.042904 env[1214]: time="2025-07-10T00:41:34.042809499Z" level=warning msg="cleaning up after shim disconnected" id=86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90 namespace=k8s.io Jul 10 00:41:34.042904 env[1214]: time="2025-07-10T00:41:34.042828460Z" level=info msg="cleaning up dead shim" Jul 10 00:41:34.049821 env[1214]: time="2025-07-10T00:41:34.049778840Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n" Jul 10 00:41:34.050097 env[1214]: time="2025-07-10T00:41:34.050068491Z" level=info msg="TearDown network for sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" successfully" Jul 10 00:41:34.050128 env[1214]: time="2025-07-10T00:41:34.050096732Z" level=info msg="StopPodSandbox for \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" returns successfully" Jul 10 00:41:34.069324 kubelet[1414]: E0710 00:41:34.069252 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:34.169706 kubelet[1414]: I0710 00:41:34.169673 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-net\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.169903 kubelet[1414]: I0710 00:41:34.169886 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-bpf-maps\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.169977 kubelet[1414]: I0710 00:41:34.169965 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-run\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170056 kubelet[1414]: I0710 00:41:34.169759 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170056 kubelet[1414]: I0710 00:41:34.170045 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170128 kubelet[1414]: I0710 00:41:34.170037 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-etc-cni-netd\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170128 kubelet[1414]: I0710 00:41:34.170021 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170128 kubelet[1414]: I0710 00:41:34.170096 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-hubble-tls\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170128 kubelet[1414]: I0710 00:41:34.170115 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-cgroup\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170131 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-lib-modules\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170144 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cni-path\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170160 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638a2747-9925-4001-9dad-a33defa35791-cilium-config-path\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170175 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-kernel\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170191 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-hostproc\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170243 kubelet[1414]: I0710 00:41:34.170229 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-xtables-lock\") pod 
\"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170471 kubelet[1414]: I0710 00:41:34.170254 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638a2747-9925-4001-9dad-a33defa35791-clustermesh-secrets\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170471 kubelet[1414]: I0710 00:41:34.170277 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4hq7\" (UniqueName: \"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-kube-api-access-j4hq7\") pod \"638a2747-9925-4001-9dad-a33defa35791\" (UID: \"638a2747-9925-4001-9dad-a33defa35791\") " Jul 10 00:41:34.170471 kubelet[1414]: I0710 00:41:34.170306 1414 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-net\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.170471 kubelet[1414]: I0710 00:41:34.170315 1414 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-bpf-maps\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.170471 kubelet[1414]: I0710 00:41:34.170323 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-run\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.170656 kubelet[1414]: I0710 00:41:34.170609 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170656 kubelet[1414]: I0710 00:41:34.170654 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170729 kubelet[1414]: I0710 00:41:34.170714 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170757 kubelet[1414]: I0710 00:41:34.170737 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cni-path" (OuterVolumeSpecName: "cni-path") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170783 kubelet[1414]: I0710 00:41:34.170760 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-hostproc" (OuterVolumeSpecName: "hostproc") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170783 kubelet[1414]: I0710 00:41:34.170774 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.170833 kubelet[1414]: I0710 00:41:34.170789 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:34.172690 kubelet[1414]: I0710 00:41:34.172640 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/638a2747-9925-4001-9dad-a33defa35791-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:41:34.179387 systemd[1]: var-lib-kubelet-pods-638a2747\x2d9925\x2d4001\x2d9dad\x2da33defa35791-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:41:34.180252 kubelet[1414]: I0710 00:41:34.180162 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:41:34.180613 kubelet[1414]: I0710 00:41:34.180579 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-kube-api-access-j4hq7" (OuterVolumeSpecName: "kube-api-access-j4hq7") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "kube-api-access-j4hq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:41:34.180613 kubelet[1414]: I0710 00:41:34.180586 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/638a2747-9925-4001-9dad-a33defa35791-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "638a2747-9925-4001-9dad-a33defa35791" (UID: "638a2747-9925-4001-9dad-a33defa35791"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271175 1414 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-hostproc\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271227 1414 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-xtables-lock\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271245 1414 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/638a2747-9925-4001-9dad-a33defa35791-clustermesh-secrets\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271261 1414 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4hq7\" (UniqueName: \"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-kube-api-access-j4hq7\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271288 1414 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-etc-cni-netd\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271305 1414 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/638a2747-9925-4001-9dad-a33defa35791-hubble-tls\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271321 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cilium-cgroup\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.272316 kubelet[1414]: I0710 00:41:34.271337 1414 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-lib-modules\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.273290 kubelet[1414]: I0710 00:41:34.271350 1414 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-cni-path\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.273290 kubelet[1414]: I0710 00:41:34.271365 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/638a2747-9925-4001-9dad-a33defa35791-cilium-config-path\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.273290 kubelet[1414]: I0710 00:41:34.271379 1414 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/638a2747-9925-4001-9dad-a33defa35791-host-proc-sys-kernel\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:34.343347 kubelet[1414]: I0710 00:41:34.343321 1414 scope.go:117] "RemoveContainer" containerID="ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b" Jul 10 00:41:34.344902 env[1214]: time="2025-07-10T00:41:34.344628806Z" level=info msg="RemoveContainer for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\"" Jul 10 00:41:34.347605 env[1214]: time="2025-07-10T00:41:34.347556635Z" level=info msg="RemoveContainer for 
\"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" returns successfully" Jul 10 00:41:34.347906 kubelet[1414]: I0710 00:41:34.347883 1414 scope.go:117] "RemoveContainer" containerID="4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c" Jul 10 00:41:34.349102 env[1214]: time="2025-07-10T00:41:34.348861084Z" level=info msg="RemoveContainer for \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\"" Jul 10 00:41:34.349881 systemd[1]: Removed slice kubepods-burstable-pod638a2747_9925_4001_9dad_a33defa35791.slice. Jul 10 00:41:34.349964 systemd[1]: kubepods-burstable-pod638a2747_9925_4001_9dad_a33defa35791.slice: Consumed 6.661s CPU time. Jul 10 00:41:34.354996 env[1214]: time="2025-07-10T00:41:34.354960313Z" level=info msg="RemoveContainer for \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\" returns successfully" Jul 10 00:41:34.356114 kubelet[1414]: I0710 00:41:34.356086 1414 scope.go:117] "RemoveContainer" containerID="bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912" Jul 10 00:41:34.358573 env[1214]: time="2025-07-10T00:41:34.358540287Z" level=info msg="RemoveContainer for \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\"" Jul 10 00:41:34.360617 env[1214]: time="2025-07-10T00:41:34.360579203Z" level=info msg="RemoveContainer for \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\" returns successfully" Jul 10 00:41:34.361159 kubelet[1414]: I0710 00:41:34.361137 1414 scope.go:117] "RemoveContainer" containerID="1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2" Jul 10 00:41:34.363725 env[1214]: time="2025-07-10T00:41:34.363698160Z" level=info msg="RemoveContainer for \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\"" Jul 10 00:41:34.365921 env[1214]: time="2025-07-10T00:41:34.365887282Z" level=info msg="RemoveContainer for \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\" returns successfully" Jul 10 00:41:34.366175 kubelet[1414]: I0710 00:41:34.366153 1414 scope.go:117] "RemoveContainer" containerID="f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b" Jul 10 00:41:34.367141 env[1214]: time="2025-07-10T00:41:34.367114848Z" level=info msg="RemoveContainer for \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\"" Jul 10 00:41:34.370440 env[1214]: time="2025-07-10T00:41:34.370404251Z" level=info msg="RemoveContainer for \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\" returns successfully" Jul 10 00:41:34.370740 kubelet[1414]: I0710 00:41:34.370715 1414 scope.go:117] "RemoveContainer" containerID="ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b" Jul 10 00:41:34.371061 env[1214]: time="2025-07-10T00:41:34.370977953Z" level=error msg="ContainerStatus for \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\": not found" Jul 10 00:41:34.371184 kubelet[1414]: E0710 00:41:34.371163 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\": not found" containerID="ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b" Jul 10 00:41:34.371304 kubelet[1414]: I0710 00:41:34.371193 1414 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"containerd","ID":"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b"} err="failed to get container status \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"ea7da3d5b21c23a3d8cb52ec9417c714f619deb3b1c9ac9a46ca402f4af39d0b\": not found" Jul 10 00:41:34.371304 kubelet[1414]: I0710 00:41:34.371302 1414 scope.go:117] "RemoveContainer" containerID="4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c" Jul 10 00:41:34.371514 env[1214]: time="2025-07-10T00:41:34.371463851Z" level=error msg="ContainerStatus for \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\": not found" Jul 10 00:41:34.371629 kubelet[1414]: E0710 00:41:34.371611 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\": not found" containerID="4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c" Jul 10 00:41:34.371673 kubelet[1414]: I0710 00:41:34.371636 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c"} err="failed to get container status \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"4ffb6890cb98516294379a3fe588386a3a120baeadd0273a9f319915716b2d6c\": not found" Jul 10 00:41:34.371673 kubelet[1414]: I0710 00:41:34.371651 1414 scope.go:117] "RemoveContainer" containerID="bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912" Jul 10 00:41:34.371945 env[1214]: time="2025-07-10T00:41:34.371891707Z" level=error msg="ContainerStatus for \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\": not found" Jul 10 00:41:34.372138 kubelet[1414]: E0710 00:41:34.372119 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\": not found" containerID="bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912" Jul 10 00:41:34.372214 kubelet[1414]: I0710 00:41:34.372140 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912"} err="failed to get container status \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcd2f4a43c97c8ce5a0b9344c662a992078549762945aac4053631c6c9534912\": not found" Jul 10 00:41:34.372214 kubelet[1414]: I0710 00:41:34.372154 1414 scope.go:117] "RemoveContainer" containerID="1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2" Jul 10 00:41:34.372376 env[1214]: time="2025-07-10T00:41:34.372321603Z" level=error msg="ContainerStatus for \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\": not found" Jul 10 00:41:34.372502 kubelet[1414]: E0710 00:41:34.372481 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\": not found" containerID="1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2" Jul 10 00:41:34.372542 kubelet[1414]: I0710 00:41:34.372506 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2"} err="failed to get container status \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\": rpc error: code = NotFound desc = an error occurred when try to find container \"1bac21886bd3a8bba577f1bab218778a09571df946f8d51fa9de52782725d2b2\": not found" Jul 10 00:41:34.372542 kubelet[1414]: I0710 00:41:34.372521 1414 scope.go:117] "RemoveContainer" containerID="f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b" Jul 10 00:41:34.372731 env[1214]: time="2025-07-10T00:41:34.372688497Z" level=error msg="ContainerStatus for \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\": not found" Jul 10 00:41:34.372862 kubelet[1414]: E0710 00:41:34.372835 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\": not found" containerID="f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b" Jul 10 00:41:34.372897 kubelet[1414]: I0710 00:41:34.372866 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b"} err="failed to get container status \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f62992b3dfcd79fcfd0dc7d37e1f13e0948952e9c5092039deb79239f2c7a99b\": not found" Jul 10 00:41:34.890834 systemd[1]: var-lib-kubelet-pods-638a2747\x2d9925\x2d4001\x2d9dad\x2da33defa35791-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj4hq7.mount: Deactivated successfully. Jul 10 00:41:34.890935 systemd[1]: var-lib-kubelet-pods-638a2747\x2d9925\x2d4001\x2d9dad\x2da33defa35791-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Jul 10 00:41:35.070014 kubelet[1414]: E0710 00:41:35.069969 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:35.232210 kubelet[1414]: I0710 00:41:35.232098 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638a2747-9925-4001-9dad-a33defa35791" path="/var/lib/kubelet/pods/638a2747-9925-4001-9dad-a33defa35791/volumes" Jul 10 00:41:36.070813 kubelet[1414]: E0710 00:41:36.070765 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:36.998512 kubelet[1414]: E0710 00:41:36.998469 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="cilium-agent" Jul 10 00:41:36.998512 kubelet[1414]: E0710 00:41:36.998501 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="mount-cgroup" Jul 10 00:41:36.998512 kubelet[1414]: E0710 00:41:36.998507 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="apply-sysctl-overwrites" Jul 10 00:41:36.998512 kubelet[1414]: E0710 00:41:36.998514 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="mount-bpf-fs" Jul 10 00:41:36.998512 kubelet[1414]: E0710 00:41:36.998519 1414 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="clean-cilium-state" Jul 10 00:41:36.998769 kubelet[1414]: I0710 00:41:36.998540 1414 memory_manager.go:354] "RemoveStaleState removing state" podUID="638a2747-9925-4001-9dad-a33defa35791" containerName="cilium-agent" Jul 10 00:41:37.003871 systemd[1]: Created slice kubepods-burstable-podfa025d86_4747_4759_beaa_1d2b1460f549.slice. 
Jul 10 00:41:37.026438 kubelet[1414]: E0710 00:41:37.026408 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:37.049478 env[1214]: time="2025-07-10T00:41:37.049428353Z" level=info msg="StopPodSandbox for \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\"" Jul 10 00:41:37.049758 env[1214]: time="2025-07-10T00:41:37.049679602Z" level=info msg="TearDown network for sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" successfully" Jul 10 00:41:37.049758 env[1214]: time="2025-07-10T00:41:37.049725244Z" level=info msg="StopPodSandbox for \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" returns successfully" Jul 10 00:41:37.050698 env[1214]: time="2025-07-10T00:41:37.050669476Z" level=info msg="RemovePodSandbox for \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\"" Jul 10 00:41:37.050751 env[1214]: time="2025-07-10T00:41:37.050705357Z" level=info msg="Forcibly stopping sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\"" Jul 10 00:41:37.050789 env[1214]: time="2025-07-10T00:41:37.050772559Z" level=info msg="TearDown network for sandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" successfully" Jul 10 00:41:37.070990 kubelet[1414]: E0710 00:41:37.070941 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:37.084893 env[1214]: time="2025-07-10T00:41:37.084850243Z" level=info msg="RemovePodSandbox \"86b019b2cdbdeb99c736fd435b66406657c1a8e800d51435575467283f81aa90\" returns successfully" Jul 10 00:41:37.085177 kubelet[1414]: I0710 00:41:37.085118 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-clustermesh-secrets\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085177 kubelet[1414]: I0710 00:41:37.085170 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-ipsec-secrets\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085191 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-kernel\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085223 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-hubble-tls\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085242 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj47j\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-kube-api-access-mj47j\") pod \"cilium-xcqn4\" (UID: 
\"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085259 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-hostproc\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085274 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-etc-cni-netd\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085293 kubelet[1414]: I0710 00:41:37.085288 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-xtables-lock\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085304 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-cgroup\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085318 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cni-path\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085335 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-config-path\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085350 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-run\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085373 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-bpf-maps\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085452 kubelet[1414]: I0710 00:41:37.085389 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-lib-modules\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.085589 kubelet[1414]: I0710 00:41:37.085404 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-net\") pod \"cilium-xcqn4\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " pod="kube-system/cilium-xcqn4" Jul 10 00:41:37.095474 systemd[1]: Created slice kubepods-besteffort-pod75b53a3f_e9f2_4eaa_8154_87705cf94c9c.slice. Jul 10 00:41:37.146399 kubelet[1414]: E0710 00:41:37.146339 1414 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-mj47j lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xcqn4" podUID="fa025d86-4747-4759-beaa-1d2b1460f549" Jul 10 00:41:37.188938 kubelet[1414]: E0710 00:41:37.188888 1414 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:41:37.287786 kubelet[1414]: I0710 00:41:37.287733 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbtq\" (UniqueName: \"kubernetes.io/projected/75b53a3f-e9f2-4eaa-8154-87705cf94c9c-kube-api-access-jdbtq\") pod \"cilium-operator-5d85765b45-wv7t8\" (UID: \"75b53a3f-e9f2-4eaa-8154-87705cf94c9c\") " pod="kube-system/cilium-operator-5d85765b45-wv7t8" Jul 10 00:41:37.287906 kubelet[1414]: I0710 00:41:37.287783 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75b53a3f-e9f2-4eaa-8154-87705cf94c9c-cilium-config-path\") pod \"cilium-operator-5d85765b45-wv7t8\" (UID: \"75b53a3f-e9f2-4eaa-8154-87705cf94c9c\") " pod="kube-system/cilium-operator-5d85765b45-wv7t8" Jul 10 00:41:37.489261 kubelet[1414]: I0710 00:41:37.489190 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-net\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489261 kubelet[1414]: I0710 00:41:37.489254 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-lib-modules\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489279 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-clustermesh-secrets\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489298 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-ipsec-secrets\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489314 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-run\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489327 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489357 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj47j\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-kube-api-access-mj47j\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489464 kubelet[1414]: I0710 00:41:37.489392 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-xtables-lock\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489413 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-cgroup\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489428 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cni-path\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489446 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-config-path\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489459 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-bpf-maps\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489474 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-kernel\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489650 kubelet[1414]: I0710 00:41:37.489492 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-hubble-tls\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489783 kubelet[1414]: I0710 00:41:37.489506 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-hostproc\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489783 kubelet[1414]: I0710 00:41:37.489520 1414 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-etc-cni-netd\") pod \"fa025d86-4747-4759-beaa-1d2b1460f549\" (UID: \"fa025d86-4747-4759-beaa-1d2b1460f549\") " Jul 10 00:41:37.489783 kubelet[1414]: I0710 00:41:37.489546 1414 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-net\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.489783 kubelet[1414]: I0710 00:41:37.489581 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.489924 kubelet[1414]: I0710 00:41:37.489871 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cni-path" (OuterVolumeSpecName: "cni-path") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.489959 kubelet[1414]: I0710 00:41:37.489927 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.489959 kubelet[1414]: I0710 00:41:37.489947 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.490010 kubelet[1414]: I0710 00:41:37.489994 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.491743 kubelet[1414]: I0710 00:41:37.491652 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:41:37.491743 kubelet[1414]: I0710 00:41:37.491700 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.491743 kubelet[1414]: I0710 00:41:37.491721 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.493465 kubelet[1414]: I0710 00:41:37.491817 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.493465 kubelet[1414]: I0710 00:41:37.491888 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-hostproc" (OuterVolumeSpecName: "hostproc") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 10 00:41:37.493329 systemd[1]: var-lib-kubelet-pods-fa025d86\x2d4747\x2d4759\x2dbeaa\x2d1d2b1460f549-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 10 00:41:37.493423 systemd[1]: var-lib-kubelet-pods-fa025d86\x2d4747\x2d4759\x2dbeaa\x2d1d2b1460f549-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Jul 10 00:41:37.494677 kubelet[1414]: I0710 00:41:37.494635 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:41:37.494868 kubelet[1414]: I0710 00:41:37.494843 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:41:37.494962 kubelet[1414]: I0710 00:41:37.494843 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-kube-api-access-mj47j" (OuterVolumeSpecName: "kube-api-access-mj47j") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "kube-api-access-mj47j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:41:37.495932 kubelet[1414]: I0710 00:41:37.495901 1414 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "fa025d86-4747-4759-beaa-1d2b1460f549" (UID: "fa025d86-4747-4759-beaa-1d2b1460f549"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590699 1414 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-clustermesh-secrets\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590734 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-ipsec-secrets\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590743 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-run\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590754 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-cgroup\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590762 1414 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-cni-path\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590771 1414 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj47j\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-kube-api-access-mj47j\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590779 1414 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-xtables-lock\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.590824 kubelet[1414]: I0710 00:41:37.590786 1414 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fa025d86-4747-4759-beaa-1d2b1460f549-cilium-config-path\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590794 1414 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-bpf-maps\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590802 1414 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-hostproc\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590809 1414 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-etc-cni-netd\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590817 1414 reconciler_common.go:293] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-host-proc-sys-kernel\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590824 1414 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fa025d86-4747-4759-beaa-1d2b1460f549-hubble-tls\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.591091 kubelet[1414]: I0710 00:41:37.590831 1414 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa025d86-4747-4759-beaa-1d2b1460f549-lib-modules\") on node \"10.0.0.111\" DevicePath \"\"" Jul 10 00:41:37.697989 kubelet[1414]: E0710 00:41:37.697955 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:37.699263 env[1214]: time="2025-07-10T00:41:37.699191697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wv7t8,Uid:75b53a3f-e9f2-4eaa-8154-87705cf94c9c,Namespace:kube-system,Attempt:0,}" Jul 10 00:41:37.716408 env[1214]: time="2025-07-10T00:41:37.716335163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:37.716408 env[1214]: time="2025-07-10T00:41:37.716380204Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:37.716596 env[1214]: time="2025-07-10T00:41:37.716391885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:37.717329 env[1214]: time="2025-07-10T00:41:37.717289275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d7eafdf226718f50b63776b3de010014d470e00ce78a6309691d64a958e1d31 pid=3003 runtime=io.containerd.runc.v2 Jul 10 00:41:37.728662 systemd[1]: Started cri-containerd-6d7eafdf226718f50b63776b3de010014d470e00ce78a6309691d64a958e1d31.scope. Jul 10 00:41:37.785972 env[1214]: time="2025-07-10T00:41:37.785916218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-wv7t8,Uid:75b53a3f-e9f2-4eaa-8154-87705cf94c9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d7eafdf226718f50b63776b3de010014d470e00ce78a6309691d64a958e1d31\"" Jul 10 00:41:37.786632 kubelet[1414]: E0710 00:41:37.786609 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:37.787625 env[1214]: time="2025-07-10T00:41:37.787574635Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 10 00:41:38.071286 kubelet[1414]: E0710 00:41:38.071230 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:38.202759 systemd[1]: var-lib-kubelet-pods-fa025d86\x2d4747\x2d4759\x2dbeaa\x2d1d2b1460f549-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmj47j.mount: Deactivated successfully. 
Jul 10 00:41:38.202848 systemd[1]: var-lib-kubelet-pods-fa025d86\x2d4747\x2d4759\x2dbeaa\x2d1d2b1460f549-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 10 00:41:38.354914 systemd[1]: Removed slice kubepods-burstable-podfa025d86_4747_4759_beaa_1d2b1460f549.slice. Jul 10 00:41:38.393525 systemd[1]: Created slice kubepods-burstable-pod9b5f3fb9_21fc_4ede_a943_011ff4b1f3c4.slice. Jul 10 00:41:38.394696 kubelet[1414]: I0710 00:41:38.394668 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-cilium-run\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.394830 kubelet[1414]: I0710 00:41:38.394815 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-bpf-maps\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.394912 kubelet[1414]: I0710 00:41:38.394896 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-cni-path\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.394991 kubelet[1414]: I0710 00:41:38.394978 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-xtables-lock\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395067 kubelet[1414]: I0710 00:41:38.395054 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-clustermesh-secrets\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395167 kubelet[1414]: I0710 00:41:38.395154 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-host-proc-sys-kernel\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395265 kubelet[1414]: I0710 00:41:38.395252 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pvvz\" (UniqueName: \"kubernetes.io/projected/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-kube-api-access-8pvvz\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395341 kubelet[1414]: I0710 00:41:38.395329 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-lib-modules\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395438 kubelet[1414]: I0710 00:41:38.395420 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-cilium-config-path\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395519 kubelet[1414]: I0710 00:41:38.395507 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-hostproc\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395647 kubelet[1414]: I0710 00:41:38.395610 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-cilium-cgroup\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395696 kubelet[1414]: I0710 00:41:38.395680 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-cilium-ipsec-secrets\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395794 kubelet[1414]: I0710 00:41:38.395719 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-etc-cni-netd\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395794 kubelet[1414]: I0710 00:41:38.395753 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-host-proc-sys-net\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.395794 kubelet[1414]: I0710 00:41:38.395777 1414 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4-hubble-tls\") pod \"cilium-wqfcg\" (UID: \"9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4\") " pod="kube-system/cilium-wqfcg" Jul 10 00:41:38.677380 kubelet[1414]: I0710 00:41:38.677266 1414 setters.go:600] "Node became not ready" node="10.0.0.111" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T00:41:38Z","lastTransitionTime":"2025-07-10T00:41:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 10 00:41:38.705166 kubelet[1414]: E0710 00:41:38.705131 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:38.705755 env[1214]: time="2025-07-10T00:41:38.705713781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfcg,Uid:9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4,Namespace:kube-system,Attempt:0,}" Jul 10 00:41:38.716926 env[1214]: time="2025-07-10T00:41:38.716843910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:41:38.716926 env[1214]: time="2025-07-10T00:41:38.716892511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:41:38.716926 env[1214]: time="2025-07-10T00:41:38.716903192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:41:38.717256 env[1214]: time="2025-07-10T00:41:38.717190561Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f pid=3045 runtime=io.containerd.runc.v2 Jul 10 00:41:38.727773 systemd[1]: Started cri-containerd-e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f.scope. Jul 10 00:41:38.762526 env[1214]: time="2025-07-10T00:41:38.762476143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wqfcg,Uid:9b5f3fb9-21fc-4ede-a943-011ff4b1f3c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\"" Jul 10 00:41:38.763420 kubelet[1414]: E0710 00:41:38.763386 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:38.765571 env[1214]: time="2025-07-10T00:41:38.765531645Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 10 00:41:38.775659 env[1214]: time="2025-07-10T00:41:38.775598219Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a\"" Jul 10 00:41:38.776144 env[1214]: time="2025-07-10T00:41:38.776072274Z" level=info msg="StartContainer for \"706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a\"" Jul 10 00:41:38.789158 systemd[1]: Started cri-containerd-706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a.scope. Jul 10 00:41:38.837149 env[1214]: time="2025-07-10T00:41:38.837102219Z" level=info msg="StartContainer for \"706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a\" returns successfully" Jul 10 00:41:38.845779 systemd[1]: cri-containerd-706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a.scope: Deactivated successfully. 
Jul 10 00:41:38.909675 env[1214]: time="2025-07-10T00:41:38.909628344Z" level=info msg="shim disconnected" id=706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a Jul 10 00:41:38.909675 env[1214]: time="2025-07-10T00:41:38.909671986Z" level=warning msg="cleaning up after shim disconnected" id=706bdecf0256766d9d812c7fb9a44c6f7b07d52a7831a9cf01e0dc12dc6e292a namespace=k8s.io Jul 10 00:41:38.909675 env[1214]: time="2025-07-10T00:41:38.909681986Z" level=info msg="cleaning up dead shim" Jul 10 00:41:38.916393 env[1214]: time="2025-07-10T00:41:38.916349887Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3129 runtime=io.containerd.runc.v2\n" Jul 10 00:41:39.072746 kubelet[1414]: E0710 00:41:39.072700 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:39.081038 env[1214]: time="2025-07-10T00:41:39.080992996Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:39.082350 env[1214]: time="2025-07-10T00:41:39.082310438Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:39.087758 env[1214]: time="2025-07-10T00:41:39.087720053Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 00:41:39.088342 env[1214]: time="2025-07-10T00:41:39.088304912Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 10 00:41:39.090942 env[1214]: time="2025-07-10T00:41:39.090909356Z" level=info msg="CreateContainer within sandbox \"6d7eafdf226718f50b63776b3de010014d470e00ce78a6309691d64a958e1d31\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 10 00:41:39.102381 env[1214]: time="2025-07-10T00:41:39.102336404Z" level=info msg="CreateContainer within sandbox \"6d7eafdf226718f50b63776b3de010014d470e00ce78a6309691d64a958e1d31\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8b6d869c0369ab05c1167b650a433283b94b785cb79335b74380c4aaaa92f238\"" Jul 10 00:41:39.103223 env[1214]: time="2025-07-10T00:41:39.103178312Z" level=info msg="StartContainer for \"8b6d869c0369ab05c1167b650a433283b94b785cb79335b74380c4aaaa92f238\"" Jul 10 00:41:39.117880 systemd[1]: Started cri-containerd-8b6d869c0369ab05c1167b650a433283b94b785cb79335b74380c4aaaa92f238.scope. Jul 10 00:41:39.152073 env[1214]: time="2025-07-10T00:41:39.152020927Z" level=info msg="StartContainer for \"8b6d869c0369ab05c1167b650a433283b94b785cb79335b74380c4aaaa92f238\" returns successfully" Jul 10 00:41:39.203574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443482277.mount: Deactivated successfully. 
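The PullImage/ImageCreate entries above reference the operator image by both tag and digest; containerd then reports the locally resolved image ID (sha256:5935...), which is distinct from the manifest digest embedded in the reference. A rough sketch of splitting such a pinned reference (naive string handling; a registry host that carries a port would need the full reference grammar):

ref = ("quay.io/cilium/operator-generic:v1.12.5"
       "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
name_and_tag, _, digest = ref.partition("@")   # the digest pins the exact manifest
name, _, tag = name_and_tag.rpartition(":")    # the tag is informational once a digest is present
print(name, tag, digest, sep="\n")
# quay.io/cilium/operator-generic
# v1.12.5
# sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e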
Jul 10 00:41:39.232264 kubelet[1414]: I0710 00:41:39.232195 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa025d86-4747-4759-beaa-1d2b1460f549" path="/var/lib/kubelet/pods/fa025d86-4747-4759-beaa-1d2b1460f549/volumes" Jul 10 00:41:39.354888 kubelet[1414]: E0710 00:41:39.354758 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:39.356435 kubelet[1414]: E0710 00:41:39.356400 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:39.358166 env[1214]: time="2025-07-10T00:41:39.358127096Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 10 00:41:39.368492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782894626.mount: Deactivated successfully. Jul 10 00:41:39.369023 env[1214]: time="2025-07-10T00:41:39.368983606Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60\"" Jul 10 00:41:39.369648 env[1214]: time="2025-07-10T00:41:39.369608906Z" level=info msg="StartContainer for \"000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60\"" Jul 10 00:41:39.372743 kubelet[1414]: I0710 00:41:39.372681 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-wv7t8" podStartSLOduration=2.070359917 podStartE2EDuration="3.372663205s" podCreationTimestamp="2025-07-10 00:41:36 +0000 UTC" firstStartedPulling="2025-07-10 00:41:37.787261504 +0000 UTC m=+61.985645740" lastFinishedPulling="2025-07-10 00:41:39.089564832 +0000 UTC m=+63.287949028" observedRunningTime="2025-07-10 00:41:39.371556889 +0000 UTC m=+63.569941085" watchObservedRunningTime="2025-07-10 00:41:39.372663205 +0000 UTC m=+63.571047441" Jul 10 00:41:39.390429 systemd[1]: Started cri-containerd-000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60.scope. Jul 10 00:41:39.448926 systemd[1]: cri-containerd-000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60.scope: Deactivated successfully. 
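The pod_startup_latency_tracker entry above for cilium-operator-5d85765b45-wv7t8 reports both a podStartE2EDuration and a shorter podStartSLOduration. Judging purely from the numbers in that entry (an inference from this log, not a statement about kubelet internals), the SLO figure is the end-to-end startup time minus the image-pull window taken from the monotonic m=+ readings:

pod_start_e2e = 3.372663205        # watchObservedRunningTime minus podCreationTimestamp (00:41:36)
first_pull_m  = 61.985645740       # firstStartedPulling, m=+ offset
last_pull_m   = 63.287949028       # lastFinishedPulling,  m=+ offset
pull_window   = last_pull_m - first_pull_m       # ~1.302303288 s spent pulling the operator image
print(round(pod_start_e2e - pull_window, 9))     # 2.070359917, the reported podStartSLOduration

The later entry for cilium-wqfcg, whose containers needed no image pull (zero-valued pull timestamps), reports the two durations as equal, which is consistent with this reading.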
Jul 10 00:41:39.458211 env[1214]: time="2025-07-10T00:41:39.458159803Z" level=info msg="StartContainer for \"000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60\" returns successfully" Jul 10 00:41:39.474614 env[1214]: time="2025-07-10T00:41:39.474570052Z" level=info msg="shim disconnected" id=000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60 Jul 10 00:41:39.474891 env[1214]: time="2025-07-10T00:41:39.474863702Z" level=warning msg="cleaning up after shim disconnected" id=000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60 namespace=k8s.io Jul 10 00:41:39.474975 env[1214]: time="2025-07-10T00:41:39.474959705Z" level=info msg="cleaning up dead shim" Jul 10 00:41:39.481761 env[1214]: time="2025-07-10T00:41:39.481722603Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3230 runtime=io.containerd.runc.v2\n" Jul 10 00:41:40.072841 kubelet[1414]: E0710 00:41:40.072800 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:40.201665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-000d61c4417fd919f32ec440fe3ab2ac9adaa854eddc14cad9df0e2bd1578a60-rootfs.mount: Deactivated successfully. Jul 10 00:41:40.361933 kubelet[1414]: E0710 00:41:40.360044 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:40.361933 kubelet[1414]: E0710 00:41:40.360803 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:40.362991 env[1214]: time="2025-07-10T00:41:40.362648873Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 10 00:41:40.375417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484879849.mount: Deactivated successfully. Jul 10 00:41:40.382504 env[1214]: time="2025-07-10T00:41:40.381864477Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1\"" Jul 10 00:41:40.382504 env[1214]: time="2025-07-10T00:41:40.382338972Z" level=info msg="StartContainer for \"e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1\"" Jul 10 00:41:40.400124 systemd[1]: Started cri-containerd-e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1.scope. Jul 10 00:41:40.434319 systemd[1]: cri-containerd-e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1.scope: Deactivated successfully. 
Jul 10 00:41:40.435098 env[1214]: time="2025-07-10T00:41:40.434939504Z" level=info msg="StartContainer for \"e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1\" returns successfully" Jul 10 00:41:40.456847 env[1214]: time="2025-07-10T00:41:40.456785510Z" level=info msg="shim disconnected" id=e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1 Jul 10 00:41:40.456847 env[1214]: time="2025-07-10T00:41:40.456826791Z" level=warning msg="cleaning up after shim disconnected" id=e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1 namespace=k8s.io Jul 10 00:41:40.456847 env[1214]: time="2025-07-10T00:41:40.456836271Z" level=info msg="cleaning up dead shim" Jul 10 00:41:40.463258 env[1214]: time="2025-07-10T00:41:40.463188271Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3287 runtime=io.containerd.runc.v2\n" Jul 10 00:41:41.072954 kubelet[1414]: E0710 00:41:41.072906 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:41.201765 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f2207e1623911af683d1c147e8a21d56cc3f9d1eceff76b3c5c9db718614d1-rootfs.mount: Deactivated successfully. Jul 10 00:41:41.363946 kubelet[1414]: E0710 00:41:41.363851 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:41.366006 env[1214]: time="2025-07-10T00:41:41.365959411Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 10 00:41:41.380080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662727007.mount: Deactivated successfully. Jul 10 00:41:41.385525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876047372.mount: Deactivated successfully. Jul 10 00:41:41.389197 env[1214]: time="2025-07-10T00:41:41.389137920Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d\"" Jul 10 00:41:41.389789 env[1214]: time="2025-07-10T00:41:41.389742939Z" level=info msg="StartContainer for \"d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d\"" Jul 10 00:41:41.404739 systemd[1]: Started cri-containerd-d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d.scope. Jul 10 00:41:41.434035 systemd[1]: cri-containerd-d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d.scope: Deactivated successfully. 
Jul 10 00:41:41.439372 env[1214]: time="2025-07-10T00:41:41.439327136Z" level=info msg="StartContainer for \"d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d\" returns successfully" Jul 10 00:41:41.456704 env[1214]: time="2025-07-10T00:41:41.456661787Z" level=info msg="shim disconnected" id=d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d Jul 10 00:41:41.456945 env[1214]: time="2025-07-10T00:41:41.456925475Z" level=warning msg="cleaning up after shim disconnected" id=d9aa9cde3a494fb34d6d97ae9e26ceb86e582dcbbe251d12ced4b9b19ae1a69d namespace=k8s.io Jul 10 00:41:41.457019 env[1214]: time="2025-07-10T00:41:41.457005357Z" level=info msg="cleaning up dead shim" Jul 10 00:41:41.463018 env[1214]: time="2025-07-10T00:41:41.462977460Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:41:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3343 runtime=io.containerd.runc.v2\n" Jul 10 00:41:42.074031 kubelet[1414]: E0710 00:41:42.073949 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:42.189982 kubelet[1414]: E0710 00:41:42.189947 1414 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 10 00:41:42.368422 kubelet[1414]: E0710 00:41:42.368321 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:42.370097 env[1214]: time="2025-07-10T00:41:42.370041948Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 10 00:41:42.383824 env[1214]: time="2025-07-10T00:41:42.383781518Z" level=info msg="CreateContainer within sandbox \"e9ea661b47651bd3dbf5916b728464c961a26c1ea8535eb75d476a0af9dca74f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c\"" Jul 10 00:41:42.384528 env[1214]: time="2025-07-10T00:41:42.384474699Z" level=info msg="StartContainer for \"85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c\"" Jul 10 00:41:42.402649 systemd[1]: Started cri-containerd-85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c.scope. Jul 10 00:41:42.440163 env[1214]: time="2025-07-10T00:41:42.440120840Z" level=info msg="StartContainer for \"85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c\" returns successfully" Jul 10 00:41:42.696240 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 10 00:41:43.074388 kubelet[1414]: E0710 00:41:43.074336 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:43.201943 systemd[1]: run-containerd-runc-k8s.io-85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c-runc.akLy6c.mount: Deactivated successfully. 
Jul 10 00:41:43.372504 kubelet[1414]: E0710 00:41:43.372407 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:43.411368 kubelet[1414]: I0710 00:41:43.411313 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wqfcg" podStartSLOduration=5.411294349 podStartE2EDuration="5.411294349s" podCreationTimestamp="2025-07-10 00:41:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:41:43.411248347 +0000 UTC m=+67.609632583" watchObservedRunningTime="2025-07-10 00:41:43.411294349 +0000 UTC m=+67.609678585" Jul 10 00:41:43.534458 systemd[1]: run-containerd-runc-k8s.io-85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c-runc.oMvwgG.mount: Deactivated successfully. Jul 10 00:41:44.075337 kubelet[1414]: E0710 00:41:44.075277 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:44.706436 kubelet[1414]: E0710 00:41:44.706381 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:45.075658 kubelet[1414]: E0710 00:41:45.075590 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:45.462749 systemd-networkd[1051]: lxc_health: Link UP Jul 10 00:41:45.471246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 10 00:41:45.471422 systemd-networkd[1051]: lxc_health: Gained carrier Jul 10 00:41:45.651862 systemd[1]: run-containerd-runc-k8s.io-85d5eb320e366fe32c11aec46306b161c0b231b1b065dd305d8d739aa6e6ea3c-runc.Y6UK2S.mount: Deactivated successfully. 
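The recurring dns.go "Nameserver limits exceeded" warnings reflect kubelet's cap of three nameservers in a pod's resolv.conf (historically the glibc limit), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and any further entries are dropped. A small sketch of that truncation; the fourth server below is hypothetical, since the omitted entries are not shown in the log:

MAX_NAMESERVERS = 3  # kubelet keeps at most three nameservers per resolv.conf

def applied_nameservers(resolv_conf: str):
    # Collect nameserver lines in file order and keep only the first three,
    # mirroring the "some nameservers have been omitted" warning above.
    servers = []
    for line in resolv_conf.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0] == "nameserver":
            servers.append(fields[1])
    return servers[:MAX_NAMESERVERS]

host_resolv = ("nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
               "nameserver 8.8.8.8\nnameserver 8.8.4.4\n")   # 8.8.4.4 is made up for illustration
print(applied_nameservers(host_resolv))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8']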
Jul 10 00:41:46.075789 kubelet[1414]: E0710 00:41:46.075733 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:46.706540 kubelet[1414]: E0710 00:41:46.706505 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:47.076615 kubelet[1414]: E0710 00:41:47.076552 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:47.379056 kubelet[1414]: E0710 00:41:47.378752 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:47.437409 systemd-networkd[1051]: lxc_health: Gained IPv6LL Jul 10 00:41:48.077193 kubelet[1414]: E0710 00:41:48.077137 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:48.380890 kubelet[1414]: E0710 00:41:48.380773 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:41:49.077486 kubelet[1414]: E0710 00:41:49.077440 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:50.078453 kubelet[1414]: E0710 00:41:50.078396 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:51.079394 kubelet[1414]: E0710 00:41:51.079349 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:52.080532 kubelet[1414]: E0710 00:41:52.080471 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 10 00:41:53.081084 kubelet[1414]: E0710 00:41:53.081034 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"