Mar 17 18:24:22.727629 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Mar 17 18:24:22.727648 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025 Mar 17 18:24:22.727656 kernel: efi: EFI v2.70 by EDK II Mar 17 18:24:22.727661 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Mar 17 18:24:22.727666 kernel: random: crng init done Mar 17 18:24:22.727671 kernel: ACPI: Early table checksum verification disabled Mar 17 18:24:22.727677 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Mar 17 18:24:22.727684 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Mar 17 18:24:22.727690 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727695 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727700 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727705 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727711 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727716 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727724 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727730 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727736 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 18:24:22.727741 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Mar 17 18:24:22.727747 kernel: NUMA: Failed to initialise from firmware Mar 17 18:24:22.727753 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 18:24:22.727758 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] Mar 17 18:24:22.727764 kernel: Zone ranges: Mar 17 18:24:22.727769 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 18:24:22.727776 kernel: DMA32 empty Mar 17 18:24:22.727781 kernel: Normal empty Mar 17 18:24:22.727787 kernel: Movable zone start for each node Mar 17 18:24:22.727792 kernel: Early memory node ranges Mar 17 18:24:22.727798 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Mar 17 18:24:22.727803 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Mar 17 18:24:22.727809 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Mar 17 18:24:22.727815 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Mar 17 18:24:22.727820 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Mar 17 18:24:22.727826 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Mar 17 18:24:22.727832 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Mar 17 18:24:22.727837 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Mar 17 18:24:22.727844 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Mar 17 18:24:22.727849 kernel: psci: probing for conduit method from ACPI. Mar 17 18:24:22.727855 kernel: psci: PSCIv1.1 detected in firmware. 
Mar 17 18:24:22.727861 kernel: psci: Using standard PSCI v0.2 function IDs Mar 17 18:24:22.727866 kernel: psci: Trusted OS migration not required Mar 17 18:24:22.727875 kernel: psci: SMC Calling Convention v1.1 Mar 17 18:24:22.727881 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Mar 17 18:24:22.727888 kernel: ACPI: SRAT not present Mar 17 18:24:22.727895 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Mar 17 18:24:22.727901 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Mar 17 18:24:22.727907 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Mar 17 18:24:22.727913 kernel: Detected PIPT I-cache on CPU0 Mar 17 18:24:22.727919 kernel: CPU features: detected: GIC system register CPU interface Mar 17 18:24:22.727925 kernel: CPU features: detected: Hardware dirty bit management Mar 17 18:24:22.727931 kernel: CPU features: detected: Spectre-v4 Mar 17 18:24:22.727937 kernel: CPU features: detected: Spectre-BHB Mar 17 18:24:22.727944 kernel: CPU features: kernel page table isolation forced ON by KASLR Mar 17 18:24:22.727950 kernel: CPU features: detected: Kernel page table isolation (KPTI) Mar 17 18:24:22.727956 kernel: CPU features: detected: ARM erratum 1418040 Mar 17 18:24:22.727962 kernel: CPU features: detected: SSBS not fully self-synchronizing Mar 17 18:24:22.727968 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Mar 17 18:24:22.727974 kernel: Policy zone: DMA Mar 17 18:24:22.727981 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:24:22.727988 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 18:24:22.727994 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 18:24:22.728000 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 18:24:22.728006 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 18:24:22.728013 kernel: Memory: 2457408K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114880K reserved, 0K cma-reserved) Mar 17 18:24:22.728020 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 18:24:22.728026 kernel: trace event string verifier disabled Mar 17 18:24:22.728031 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 18:24:22.728038 kernel: rcu: RCU event tracing is enabled. Mar 17 18:24:22.728045 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 18:24:22.728051 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 18:24:22.728057 kernel: Tracing variant of Tasks RCU enabled. Mar 17 18:24:22.728063 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 18:24:22.728069 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 18:24:22.728075 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Mar 17 18:24:22.728082 kernel: GICv3: 256 SPIs implemented Mar 17 18:24:22.728088 kernel: GICv3: 0 Extended SPIs implemented Mar 17 18:24:22.728094 kernel: GICv3: Distributor has no Range Selector support Mar 17 18:24:22.728100 kernel: Root IRQ handler: gic_handle_irq Mar 17 18:24:22.728106 kernel: GICv3: 16 PPIs implemented Mar 17 18:24:22.728113 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Mar 17 18:24:22.728118 kernel: ACPI: SRAT not present Mar 17 18:24:22.728124 kernel: ITS [mem 0x08080000-0x0809ffff] Mar 17 18:24:22.728130 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Mar 17 18:24:22.728137 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Mar 17 18:24:22.728143 kernel: GICv3: using LPI property table @0x00000000400d0000 Mar 17 18:24:22.728148 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Mar 17 18:24:22.728156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:24:22.728162 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Mar 17 18:24:22.728168 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Mar 17 18:24:22.728174 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Mar 17 18:24:22.728180 kernel: arm-pv: using stolen time PV Mar 17 18:24:22.728187 kernel: Console: colour dummy device 80x25 Mar 17 18:24:22.728193 kernel: ACPI: Core revision 20210730 Mar 17 18:24:22.728199 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Mar 17 18:24:22.728206 kernel: pid_max: default: 32768 minimum: 301 Mar 17 18:24:22.728212 kernel: LSM: Security Framework initializing Mar 17 18:24:22.728219 kernel: SELinux: Initializing. Mar 17 18:24:22.728225 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:24:22.728231 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 18:24:22.728238 kernel: rcu: Hierarchical SRCU implementation. Mar 17 18:24:22.728244 kernel: Platform MSI: ITS@0x8080000 domain created Mar 17 18:24:22.728250 kernel: PCI/MSI: ITS@0x8080000 domain created Mar 17 18:24:22.728256 kernel: Remapping and enabling EFI services. Mar 17 18:24:22.728267 kernel: smp: Bringing up secondary CPUs ... 
Mar 17 18:24:22.728273 kernel: Detected PIPT I-cache on CPU1 Mar 17 18:24:22.728281 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Mar 17 18:24:22.728287 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Mar 17 18:24:22.728293 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:24:22.728299 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Mar 17 18:24:22.728305 kernel: Detected PIPT I-cache on CPU2 Mar 17 18:24:22.728312 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Mar 17 18:24:22.728318 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Mar 17 18:24:22.728324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:24:22.728330 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Mar 17 18:24:22.728336 kernel: Detected PIPT I-cache on CPU3 Mar 17 18:24:22.728344 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Mar 17 18:24:22.728350 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Mar 17 18:24:22.728356 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Mar 17 18:24:22.728362 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Mar 17 18:24:22.728372 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 18:24:22.728380 kernel: SMP: Total of 4 processors activated. Mar 17 18:24:22.728386 kernel: CPU features: detected: 32-bit EL0 Support Mar 17 18:24:22.728393 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Mar 17 18:24:22.728399 kernel: CPU features: detected: Common not Private translations Mar 17 18:24:22.728406 kernel: CPU features: detected: CRC32 instructions Mar 17 18:24:22.728412 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Mar 17 18:24:22.728419 kernel: CPU features: detected: LSE atomic instructions Mar 17 18:24:22.728426 kernel: CPU features: detected: Privileged Access Never Mar 17 18:24:22.728433 kernel: CPU features: detected: RAS Extension Support Mar 17 18:24:22.728439 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Mar 17 18:24:22.728445 kernel: CPU: All CPU(s) started at EL1 Mar 17 18:24:22.728457 kernel: alternatives: patching kernel code Mar 17 18:24:22.728469 kernel: devtmpfs: initialized Mar 17 18:24:22.728475 kernel: KASLR enabled Mar 17 18:24:22.728482 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 18:24:22.728488 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 18:24:22.728497 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 18:24:22.728504 kernel: SMBIOS 3.0.0 present. 
Mar 17 18:24:22.728523 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Mar 17 18:24:22.728530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 18:24:22.728537 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Mar 17 18:24:22.728545 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Mar 17 18:24:22.728552 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Mar 17 18:24:22.728558 kernel: audit: initializing netlink subsys (disabled) Mar 17 18:24:22.728565 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 Mar 17 18:24:22.728571 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 18:24:22.728578 kernel: cpuidle: using governor menu Mar 17 18:24:22.728585 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Mar 17 18:24:22.728591 kernel: ASID allocator initialised with 32768 entries Mar 17 18:24:22.728597 kernel: ACPI: bus type PCI registered Mar 17 18:24:22.728605 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 18:24:22.728611 kernel: Serial: AMBA PL011 UART driver Mar 17 18:24:22.728618 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 18:24:22.728624 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Mar 17 18:24:22.728631 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 18:24:22.728637 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Mar 17 18:24:22.728644 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 18:24:22.728650 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Mar 17 18:24:22.728657 kernel: ACPI: Added _OSI(Module Device) Mar 17 18:24:22.728665 kernel: ACPI: Added _OSI(Processor Device) Mar 17 18:24:22.728672 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 18:24:22.728679 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 18:24:22.728685 kernel: ACPI: Added _OSI(Linux-Dell-Video) Mar 17 18:24:22.728691 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Mar 17 18:24:22.728698 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Mar 17 18:24:22.728704 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 18:24:22.728711 kernel: ACPI: Interpreter enabled Mar 17 18:24:22.728717 kernel: ACPI: Using GIC for interrupt routing Mar 17 18:24:22.728725 kernel: ACPI: MCFG table detected, 1 entries Mar 17 18:24:22.728731 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Mar 17 18:24:22.728738 kernel: printk: console [ttyAMA0] enabled Mar 17 18:24:22.728744 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 18:24:22.728855 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 18:24:22.728916 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Mar 17 18:24:22.728972 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Mar 17 18:24:22.729029 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Mar 17 18:24:22.729084 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Mar 17 18:24:22.729093 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Mar 17 18:24:22.729099 kernel: PCI host bridge to bus 0000:00 Mar 17 18:24:22.729162 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Mar 17 18:24:22.729213 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] Mar 17 18:24:22.729263 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Mar 17 18:24:22.729313 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 18:24:22.729382 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Mar 17 18:24:22.729461 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 18:24:22.729546 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Mar 17 18:24:22.729608 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Mar 17 18:24:22.729664 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 18:24:22.729722 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Mar 17 18:24:22.729783 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Mar 17 18:24:22.729840 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Mar 17 18:24:22.729892 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Mar 17 18:24:22.729942 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Mar 17 18:24:22.729993 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Mar 17 18:24:22.730002 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Mar 17 18:24:22.730008 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Mar 17 18:24:22.730015 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Mar 17 18:24:22.730023 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Mar 17 18:24:22.730030 kernel: iommu: Default domain type: Translated Mar 17 18:24:22.730036 kernel: iommu: DMA domain TLB invalidation policy: strict mode Mar 17 18:24:22.730043 kernel: vgaarb: loaded Mar 17 18:24:22.730050 kernel: pps_core: LinuxPPS API ver. 1 registered Mar 17 18:24:22.730056 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Mar 17 18:24:22.730063 kernel: PTP clock support registered Mar 17 18:24:22.730070 kernel: Registered efivars operations Mar 17 18:24:22.730076 kernel: clocksource: Switched to clocksource arch_sys_counter Mar 17 18:24:22.730084 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 18:24:22.730091 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 18:24:22.730097 kernel: pnp: PnP ACPI init Mar 17 18:24:22.730157 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Mar 17 18:24:22.730167 kernel: pnp: PnP ACPI: found 1 devices Mar 17 18:24:22.730173 kernel: NET: Registered PF_INET protocol family Mar 17 18:24:22.730180 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 18:24:22.730186 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 18:24:22.730195 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 18:24:22.730202 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 17 18:24:22.730208 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Mar 17 18:24:22.730215 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 18:24:22.730221 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:24:22.730228 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 18:24:22.730235 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 18:24:22.730241 kernel: PCI: CLS 0 bytes, default 64 Mar 17 18:24:22.730248 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Mar 17 18:24:22.730256 kernel: kvm [1]: HYP mode not available Mar 17 18:24:22.730262 kernel: Initialise system trusted keyrings Mar 17 18:24:22.730269 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 18:24:22.730275 kernel: Key type asymmetric registered Mar 17 18:24:22.730282 kernel: Asymmetric key parser 'x509' registered Mar 17 18:24:22.730288 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Mar 17 18:24:22.730294 kernel: io scheduler mq-deadline registered Mar 17 18:24:22.730301 kernel: io scheduler kyber registered Mar 17 18:24:22.730307 kernel: io scheduler bfq registered Mar 17 18:24:22.730315 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Mar 17 18:24:22.730321 kernel: ACPI: button: Power Button [PWRB] Mar 17 18:24:22.730328 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Mar 17 18:24:22.730384 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Mar 17 18:24:22.730393 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 18:24:22.730399 kernel: thunder_xcv, ver 1.0 Mar 17 18:24:22.730406 kernel: thunder_bgx, ver 1.0 Mar 17 18:24:22.730412 kernel: nicpf, ver 1.0 Mar 17 18:24:22.730419 kernel: nicvf, ver 1.0 Mar 17 18:24:22.730491 kernel: rtc-efi rtc-efi.0: registered as rtc0 Mar 17 18:24:22.730578 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:24:22 UTC (1742235862) Mar 17 18:24:22.730588 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 17 18:24:22.730595 kernel: NET: Registered PF_INET6 protocol family Mar 17 18:24:22.730602 kernel: Segment Routing with IPv6 Mar 17 18:24:22.730608 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 18:24:22.730615 kernel: NET: Registered PF_PACKET protocol family Mar 17 18:24:22.730621 kernel: Key type 
dns_resolver registered Mar 17 18:24:22.730630 kernel: registered taskstats version 1 Mar 17 18:24:22.730637 kernel: Loading compiled-in X.509 certificates Mar 17 18:24:22.730644 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4' Mar 17 18:24:22.730650 kernel: Key type .fscrypt registered Mar 17 18:24:22.730656 kernel: Key type fscrypt-provisioning registered Mar 17 18:24:22.730663 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 17 18:24:22.730670 kernel: ima: Allocated hash algorithm: sha1 Mar 17 18:24:22.730676 kernel: ima: No architecture policies found Mar 17 18:24:22.730683 kernel: clk: Disabling unused clocks Mar 17 18:24:22.730690 kernel: Freeing unused kernel memory: 36416K Mar 17 18:24:22.730696 kernel: Run /init as init process Mar 17 18:24:22.730703 kernel: with arguments: Mar 17 18:24:22.730709 kernel: /init Mar 17 18:24:22.730716 kernel: with environment: Mar 17 18:24:22.730722 kernel: HOME=/ Mar 17 18:24:22.730728 kernel: TERM=linux Mar 17 18:24:22.730735 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 18:24:22.730743 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:24:22.730753 systemd[1]: Detected virtualization kvm. Mar 17 18:24:22.730760 systemd[1]: Detected architecture arm64. Mar 17 18:24:22.730767 systemd[1]: Running in initrd. Mar 17 18:24:22.730773 systemd[1]: No hostname configured, using default hostname. Mar 17 18:24:22.730780 systemd[1]: Hostname set to . Mar 17 18:24:22.730788 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:24:22.730794 systemd[1]: Queued start job for default target initrd.target. Mar 17 18:24:22.730803 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:24:22.730810 systemd[1]: Reached target cryptsetup.target. Mar 17 18:24:22.730817 systemd[1]: Reached target paths.target. Mar 17 18:24:22.730823 systemd[1]: Reached target slices.target. Mar 17 18:24:22.730830 systemd[1]: Reached target swap.target. Mar 17 18:24:22.730837 systemd[1]: Reached target timers.target. Mar 17 18:24:22.730844 systemd[1]: Listening on iscsid.socket. Mar 17 18:24:22.730852 systemd[1]: Listening on iscsiuio.socket. Mar 17 18:24:22.730859 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:24:22.730866 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:24:22.730873 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:24:22.730880 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:24:22.730887 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:24:22.730894 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:24:22.730901 systemd[1]: Reached target sockets.target. Mar 17 18:24:22.730908 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:24:22.730916 systemd[1]: Finished network-cleanup.service. Mar 17 18:24:22.730923 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 18:24:22.730930 systemd[1]: Starting systemd-journald.service... Mar 17 18:24:22.730937 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:24:22.730944 systemd[1]: Starting systemd-resolved.service... Mar 17 18:24:22.730950 systemd[1]: Starting systemd-vconsole-setup.service... 
Mar 17 18:24:22.730957 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:24:22.730964 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 18:24:22.730971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:24:22.730979 systemd[1]: Finished systemd-vconsole-setup.service. Mar 17 18:24:22.730986 systemd[1]: Starting dracut-cmdline-ask.service... Mar 17 18:24:22.730995 systemd-journald[290]: Journal started Mar 17 18:24:22.731031 systemd-journald[290]: Runtime Journal (/run/log/journal/56f6463fed4544d6b78febc8ad04b46e) is 6.0M, max 48.7M, 42.6M free. Mar 17 18:24:22.724278 systemd-modules-load[291]: Inserted module 'overlay' Mar 17 18:24:22.733060 systemd[1]: Started systemd-journald.service. Mar 17 18:24:22.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.738825 kernel: audit: type=1130 audit(1742235862.733:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.739352 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:24:22.744502 kernel: audit: type=1130 audit(1742235862.740:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.744533 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 18:24:22.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.747845 systemd-modules-load[291]: Inserted module 'br_netfilter' Mar 17 18:24:22.749159 kernel: Bridge firewalling registered Mar 17 18:24:22.750966 systemd-resolved[292]: Positive Trust Anchors: Mar 17 18:24:22.750979 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:24:22.751008 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:24:22.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.752589 systemd[1]: Finished dracut-cmdline-ask.service. Mar 17 18:24:22.762346 kernel: audit: type=1130 audit(1742235862.753:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.762367 kernel: SCSI subsystem initialized Mar 17 18:24:22.754709 systemd[1]: Starting dracut-cmdline.service... Mar 17 18:24:22.762333 systemd-resolved[292]: Defaulting to hostname 'linux'. 
Mar 17 18:24:22.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.763131 systemd[1]: Started systemd-resolved.service. Mar 17 18:24:22.767758 kernel: audit: type=1130 audit(1742235862.763:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.764007 systemd[1]: Reached target nss-lookup.target. Mar 17 18:24:22.770056 dracut-cmdline[308]: dracut-dracut-053 Mar 17 18:24:22.770056 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d Mar 17 18:24:22.776007 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 18:24:22.776027 kernel: device-mapper: uevent: version 1.0.3 Mar 17 18:24:22.776037 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Mar 17 18:24:22.776307 systemd-modules-load[291]: Inserted module 'dm_multipath' Mar 17 18:24:22.777092 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:24:22.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.778906 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:24:22.781950 kernel: audit: type=1130 audit(1742235862.777:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.786152 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:24:22.789571 kernel: audit: type=1130 audit(1742235862.786:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.836533 kernel: Loading iSCSI transport class v2.0-870. Mar 17 18:24:22.850537 kernel: iscsi: registered transport (tcp) Mar 17 18:24:22.866532 kernel: iscsi: registered transport (qla4xxx) Mar 17 18:24:22.866549 kernel: QLogic iSCSI HBA Driver Mar 17 18:24:22.900699 systemd[1]: Finished dracut-cmdline.service. Mar 17 18:24:22.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:22.902293 systemd[1]: Starting dracut-pre-udev.service... Mar 17 18:24:22.905044 kernel: audit: type=1130 audit(1742235862.900:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:22.946531 kernel: raid6: neonx8 gen() 13703 MB/s Mar 17 18:24:22.963528 kernel: raid6: neonx8 xor() 10757 MB/s Mar 17 18:24:22.980527 kernel: raid6: neonx4 gen() 13514 MB/s Mar 17 18:24:22.997524 kernel: raid6: neonx4 xor() 11212 MB/s Mar 17 18:24:23.014535 kernel: raid6: neonx2 gen() 13040 MB/s Mar 17 18:24:23.031527 kernel: raid6: neonx2 xor() 10248 MB/s Mar 17 18:24:23.048524 kernel: raid6: neonx1 gen() 10495 MB/s Mar 17 18:24:23.065538 kernel: raid6: neonx1 xor() 8739 MB/s Mar 17 18:24:23.082527 kernel: raid6: int64x8 gen() 6234 MB/s Mar 17 18:24:23.099529 kernel: raid6: int64x8 xor() 3540 MB/s Mar 17 18:24:23.116523 kernel: raid6: int64x4 gen() 7193 MB/s Mar 17 18:24:23.133523 kernel: raid6: int64x4 xor() 3856 MB/s Mar 17 18:24:23.150530 kernel: raid6: int64x2 gen() 6150 MB/s Mar 17 18:24:23.167531 kernel: raid6: int64x2 xor() 3317 MB/s Mar 17 18:24:23.184523 kernel: raid6: int64x1 gen() 5043 MB/s Mar 17 18:24:23.201717 kernel: raid6: int64x1 xor() 2645 MB/s Mar 17 18:24:23.201727 kernel: raid6: using algorithm neonx8 gen() 13703 MB/s Mar 17 18:24:23.201735 kernel: raid6: .... xor() 10757 MB/s, rmw enabled Mar 17 18:24:23.201748 kernel: raid6: using neon recovery algorithm Mar 17 18:24:23.212527 kernel: xor: measuring software checksum speed Mar 17 18:24:23.212542 kernel: 8regs : 17209 MB/sec Mar 17 18:24:23.212551 kernel: 32regs : 19030 MB/sec Mar 17 18:24:23.213805 kernel: arm64_neon : 27682 MB/sec Mar 17 18:24:23.213817 kernel: xor: using function: arm64_neon (27682 MB/sec) Mar 17 18:24:23.269546 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Mar 17 18:24:23.280822 systemd[1]: Finished dracut-pre-udev.service. Mar 17 18:24:23.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:23.282810 systemd[1]: Starting systemd-udevd.service... Mar 17 18:24:23.286155 kernel: audit: type=1130 audit(1742235863.281:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:23.286174 kernel: audit: type=1334 audit(1742235863.281:10): prog-id=7 op=LOAD Mar 17 18:24:23.281000 audit: BPF prog-id=7 op=LOAD Mar 17 18:24:23.281000 audit: BPF prog-id=8 op=LOAD Mar 17 18:24:23.299087 systemd-udevd[492]: Using default interface naming scheme 'v252'. Mar 17 18:24:23.302340 systemd[1]: Started systemd-udevd.service. Mar 17 18:24:23.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:23.304291 systemd[1]: Starting dracut-pre-trigger.service... Mar 17 18:24:23.315472 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Mar 17 18:24:23.342020 systemd[1]: Finished dracut-pre-trigger.service. Mar 17 18:24:23.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:23.343645 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:24:23.376217 systemd[1]: Finished systemd-udev-trigger.service. 
Mar 17 18:24:23.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:23.416533 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 18:24:23.419488 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 18:24:23.419520 kernel: GPT:9289727 != 19775487 Mar 17 18:24:23.419530 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:24:23.419539 kernel: GPT:9289727 != 19775487 Mar 17 18:24:23.419546 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:24:23.419554 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:24:23.446535 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (542) Mar 17 18:24:23.448596 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:24:23.449697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:24:23.454460 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:24:23.457908 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:24:23.461391 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:24:23.463194 systemd[1]: Starting disk-uuid.service... Mar 17 18:24:23.469870 disk-uuid[562]: Primary Header is updated. Mar 17 18:24:23.469870 disk-uuid[562]: Secondary Entries is updated. Mar 17 18:24:23.469870 disk-uuid[562]: Secondary Header is updated. Mar 17 18:24:23.472902 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:24:23.497529 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:24:24.502920 disk-uuid[563]: The operation has completed successfully. Mar 17 18:24:24.503999 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:24:24.525181 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:24:24.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.525272 systemd[1]: Finished disk-uuid.service. Mar 17 18:24:24.526847 systemd[1]: Starting verity-setup.service... Mar 17 18:24:24.543658 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:24:24.568493 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:24:24.570077 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:24:24.571847 systemd[1]: Finished verity-setup.service. Mar 17 18:24:24.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.619413 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:24:24.620772 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:24:24.620286 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:24:24.621036 systemd[1]: Starting ignition-setup.service... Mar 17 18:24:24.623327 systemd[1]: Starting parse-ip-for-networkd.service... 
Mar 17 18:24:24.633590 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:24:24.633625 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:24:24.633636 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:24:24.641500 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:24:24.646860 systemd[1]: Finished ignition-setup.service. Mar 17 18:24:24.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.648388 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:24:24.694922 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:24:24.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.696000 audit: BPF prog-id=9 op=LOAD Mar 17 18:24:24.697829 systemd[1]: Starting systemd-networkd.service... Mar 17 18:24:24.718458 systemd-networkd[741]: lo: Link UP Mar 17 18:24:24.718467 systemd-networkd[741]: lo: Gained carrier Mar 17 18:24:24.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.718866 systemd-networkd[741]: Enumeration completed Mar 17 18:24:24.718939 systemd[1]: Started systemd-networkd.service. Mar 17 18:24:24.719041 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:24:24.720036 systemd-networkd[741]: eth0: Link UP Mar 17 18:24:24.720039 systemd-networkd[741]: eth0: Gained carrier Mar 17 18:24:24.720411 systemd[1]: Reached target network.target. Mar 17 18:24:24.722523 systemd[1]: Starting iscsiuio.service... Mar 17 18:24:24.733359 ignition[658]: Ignition 2.14.0 Mar 17 18:24:24.733370 ignition[658]: Stage: fetch-offline Mar 17 18:24:24.733419 ignition[658]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:24.733435 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:24.733685 ignition[658]: parsed url from cmdline: "" Mar 17 18:24:24.733689 ignition[658]: no config URL provided Mar 17 18:24:24.733694 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:24:24.733701 ignition[658]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:24:24.733722 ignition[658]: op(1): [started] loading QEMU firmware config module Mar 17 18:24:24.733727 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:24:24.743503 systemd[1]: Started iscsiuio.service. Mar 17 18:24:24.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.745232 systemd[1]: Starting iscsid.service... Mar 17 18:24:24.746593 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:24:24.750097 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:24:24.750097 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Mar 17 18:24:24.750097 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:24:24.750097 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:24:24.750097 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:24:24.750097 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:24:24.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.757669 ignition[658]: op(1): [finished] loading QEMU firmware config module Mar 17 18:24:24.756778 systemd[1]: Started iscsid.service. Mar 17 18:24:24.760358 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:24:24.769264 ignition[658]: parsing config with SHA512: 48c7a8b3095df877af6fa2ba35a6e77cf9c4990018a5a2b1f4ca8c71261eec35780a15fddf541e5a95eaf7b9db164c98b3bbc008a30db9c91add48cc26900caa Mar 17 18:24:24.770525 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:24:24.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.771598 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:24:24.772927 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:24:24.777079 systemd[1]: Reached target remote-fs.target. Mar 17 18:24:24.779647 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:24:24.781861 unknown[658]: fetched base config from "system" Mar 17 18:24:24.781883 unknown[658]: fetched user config from "qemu" Mar 17 18:24:24.782282 ignition[658]: fetch-offline: fetch-offline passed Mar 17 18:24:24.783620 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:24:24.782349 ignition[658]: Ignition finished successfully Mar 17 18:24:24.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.785338 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:24:24.786278 systemd[1]: Starting ignition-kargs.service... Mar 17 18:24:24.791836 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:24:24.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.796075 ignition[759]: Ignition 2.14.0 Mar 17 18:24:24.796084 ignition[759]: Stage: kargs Mar 17 18:24:24.796183 ignition[759]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:24.796192 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:24.798555 systemd[1]: Finished ignition-kargs.service. Mar 17 18:24:24.799000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:24.796924 ignition[759]: kargs: kargs passed Mar 17 18:24:24.796969 ignition[759]: Ignition finished successfully Mar 17 18:24:24.801217 systemd[1]: Starting ignition-disks.service... Mar 17 18:24:24.808163 ignition[769]: Ignition 2.14.0 Mar 17 18:24:24.808173 ignition[769]: Stage: disks Mar 17 18:24:24.808262 ignition[769]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:24.810189 systemd[1]: Finished ignition-disks.service. Mar 17 18:24:24.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.808271 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:24.811775 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:24:24.809154 ignition[769]: disks: disks passed Mar 17 18:24:24.813115 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:24:24.809194 ignition[769]: Ignition finished successfully Mar 17 18:24:24.814776 systemd[1]: Reached target local-fs.target. Mar 17 18:24:24.816172 systemd[1]: Reached target sysinit.target. Mar 17 18:24:24.817333 systemd[1]: Reached target basic.target. Mar 17 18:24:24.819654 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:24:24.831169 systemd-fsck[777]: ROOT: clean, 623/553520 files, 56021/553472 blocks Mar 17 18:24:24.917492 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:24:24.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.919644 systemd[1]: Mounting sysroot.mount... Mar 17 18:24:24.931529 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:24:24.931949 systemd[1]: Mounted sysroot.mount. Mar 17 18:24:24.932722 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:24:24.934972 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:24:24.935837 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:24:24.935879 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:24:24.935903 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:24:24.937877 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:24:24.939672 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:24:24.944043 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:24:24.947679 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:24:24.953704 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:24:24.958444 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:24:24.990419 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:24:24.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:24.992085 systemd[1]: Starting ignition-mount.service... Mar 17 18:24:24.993370 systemd[1]: Starting sysroot-boot.service... Mar 17 18:24:24.997839 bash[828]: umount: /sysroot/usr/share/oem: not mounted. 
Mar 17 18:24:25.005902 ignition[830]: INFO : Ignition 2.14.0 Mar 17 18:24:25.005902 ignition[830]: INFO : Stage: mount Mar 17 18:24:25.007432 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:25.007432 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:25.007432 ignition[830]: INFO : mount: mount passed Mar 17 18:24:25.007432 ignition[830]: INFO : Ignition finished successfully Mar 17 18:24:25.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:25.008626 systemd[1]: Finished ignition-mount.service. Mar 17 18:24:25.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:25.011241 systemd[1]: Finished sysroot-boot.service. Mar 17 18:24:25.580889 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:24:25.586538 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838) Mar 17 18:24:25.588625 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:24:25.588645 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:24:25.588660 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:24:25.591361 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:24:25.593014 systemd[1]: Starting ignition-files.service... Mar 17 18:24:25.607316 ignition[858]: INFO : Ignition 2.14.0 Mar 17 18:24:25.607316 ignition[858]: INFO : Stage: files Mar 17 18:24:25.609087 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:25.609087 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:25.609087 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:24:25.614848 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:24:25.614848 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:24:25.617832 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:24:25.617832 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:24:25.617832 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:24:25.617799 unknown[858]: wrote ssh authorized keys file for user: core Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:24:25.623065 ignition[858]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:24:25.623065 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 18:24:25.954678 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 18:24:26.340959 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:24:26.340959 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 17 18:24:26.345178 ignition[858]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:24:26.345178 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:24:26.345178 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 17 18:24:26.345178 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:24:26.345178 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:24:26.370790 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:24:26.373112 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:24:26.373112 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:24:26.373112 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:24:26.373112 ignition[858]: INFO : files: files passed Mar 17 18:24:26.373112 ignition[858]: INFO : Ignition finished successfully Mar 17 18:24:26.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.373277 systemd[1]: Finished ignition-files.service. Mar 17 18:24:26.381000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.376097 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:24:26.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:26.377701 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:24:26.387237 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:24:26.378362 systemd[1]: Starting ignition-quench.service... Mar 17 18:24:26.389941 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:24:26.381626 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:24:26.381707 systemd[1]: Finished ignition-quench.service. Mar 17 18:24:26.383083 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:24:26.384105 systemd[1]: Reached target ignition-complete.target. Mar 17 18:24:26.386232 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:24:26.397944 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:24:26.398025 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:24:26.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.399732 systemd[1]: Reached target initrd-fs.target. Mar 17 18:24:26.400977 systemd[1]: Reached target initrd.target. Mar 17 18:24:26.402238 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:24:26.402924 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:24:26.412649 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:24:26.412000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.414162 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:24:26.421604 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:24:26.422475 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:24:26.423973 systemd[1]: Stopped target timers.target. Mar 17 18:24:26.425359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:24:26.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.425477 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:24:26.426816 systemd[1]: Stopped target initrd.target. Mar 17 18:24:26.428324 systemd[1]: Stopped target basic.target. Mar 17 18:24:26.429645 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:24:26.431075 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:24:26.432410 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:24:26.433967 systemd[1]: Stopped target remote-fs.target. Mar 17 18:24:26.435381 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:24:26.436875 systemd[1]: Stopped target sysinit.target. Mar 17 18:24:26.438184 systemd[1]: Stopped target local-fs.target. Mar 17 18:24:26.439551 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:24:26.440898 systemd[1]: Stopped target swap.target. 
Mar 17 18:24:26.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.442145 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:24:26.442260 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:24:26.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.443598 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:24:26.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.444811 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:24:26.444916 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:24:26.446463 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:24:26.446585 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:24:26.447999 systemd[1]: Stopped target paths.target. Mar 17 18:24:26.449272 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:24:26.451888 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:24:26.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.452863 systemd[1]: Stopped target slices.target. Mar 17 18:24:26.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.454312 systemd[1]: Stopped target sockets.target. Mar 17 18:24:26.455688 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:24:26.461937 iscsid[747]: iscsid shutting down. Mar 17 18:24:26.455802 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:24:26.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.457587 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:24:26.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.467395 ignition[899]: INFO : Ignition 2.14.0 Mar 17 18:24:26.467395 ignition[899]: INFO : Stage: umount Mar 17 18:24:26.467395 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:24:26.467395 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:24:26.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:26.457685 systemd[1]: Stopped ignition-files.service. Mar 17 18:24:26.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.474312 ignition[899]: INFO : umount: umount passed Mar 17 18:24:26.474312 ignition[899]: INFO : Ignition finished successfully Mar 17 18:24:26.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.459933 systemd[1]: Stopping ignition-mount.service... Mar 17 18:24:26.479000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.461624 systemd[1]: Stopping iscsid.service... Mar 17 18:24:26.462416 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:24:26.462559 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:24:26.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.464645 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:24:26.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.465367 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:24:26.465525 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:24:26.466498 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:24:26.466612 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:24:26.469738 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:24:26.469852 systemd[1]: Stopped iscsid.service. Mar 17 18:24:26.471995 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:24:26.472083 systemd[1]: Stopped ignition-mount.service. Mar 17 18:24:26.473949 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:24:26.474047 systemd[1]: Closed iscsid.socket. Mar 17 18:24:26.474973 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:24:26.475055 systemd[1]: Stopped ignition-disks.service. Mar 17 18:24:26.476100 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:24:26.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.476140 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:24:26.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:26.477860 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:24:26.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.477900 systemd[1]: Stopped ignition-setup.service. Mar 17 18:24:26.480353 systemd[1]: Stopping iscsiuio.service... Mar 17 18:24:26.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.510000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:24:26.485289 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:24:26.485770 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:24:26.485856 systemd[1]: Stopped iscsiuio.service. Mar 17 18:24:26.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.486995 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:24:26.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.487072 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:24:26.517000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.489867 systemd[1]: Stopped target network.target. Mar 17 18:24:26.490685 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:24:26.490719 systemd[1]: Closed iscsiuio.socket. Mar 17 18:24:26.492321 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:24:26.494434 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:24:26.502472 systemd-networkd[741]: eth0: DHCPv6 lease lost Mar 17 18:24:26.523000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:24:26.503482 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:24:26.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.503585 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:24:26.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.504565 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:24:26.504648 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:24:26.506284 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:24:26.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.506356 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:24:26.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:26.507797 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:24:26.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.507823 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:24:26.509044 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:24:26.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.509083 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:24:26.511143 systemd[1]: Stopping network-cleanup.service... Mar 17 18:24:26.512648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:24:26.512698 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:24:26.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.514164 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:24:26.514202 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:24:26.516285 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:24:26.516322 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:24:26.517308 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:24:26.521367 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:24:26.523667 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:24:26.523752 systemd[1]: Stopped network-cleanup.service. Mar 17 18:24:26.526158 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:24:26.526279 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:24:26.527689 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:24:26.527720 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:24:26.528900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:24:26.528935 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:24:26.530402 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:24:26.530455 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:24:26.531774 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:24:26.531812 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:24:26.533268 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:24:26.533306 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:24:26.535325 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:24:26.536257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:24:26.536315 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:24:26.540615 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:24:26.540700 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:24:26.541960 systemd[1]: Reached target initrd-switch-root.target. 
Mar 17 18:24:26.544017 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:24:26.568581 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Mar 17 18:24:26.550278 systemd[1]: Switching root. Mar 17 18:24:26.569137 systemd-journald[290]: Journal stopped Mar 17 18:24:28.636389 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:24:28.636439 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:24:28.636451 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:24:28.636467 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:24:28.636477 kernel: SELinux: policy capability open_perms=1 Mar 17 18:24:28.636487 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:24:28.636496 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:24:28.636506 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:24:28.636530 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:24:28.636540 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:24:28.636549 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:24:28.636566 systemd[1]: Successfully loaded SELinux policy in 32.709ms. Mar 17 18:24:28.636588 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.272ms. Mar 17 18:24:28.636599 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:24:28.636611 systemd[1]: Detected virtualization kvm. Mar 17 18:24:28.636622 systemd[1]: Detected architecture arm64. Mar 17 18:24:28.636632 systemd[1]: Detected first boot. Mar 17 18:24:28.636652 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:24:28.636663 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:24:28.636674 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:24:28.636686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:24:28.636697 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:24:28.636708 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:24:28.636719 kernel: kauditd_printk_skb: 79 callbacks suppressed Mar 17 18:24:28.636729 kernel: audit: type=1334 audit(1742235868.496:83): prog-id=12 op=LOAD Mar 17 18:24:28.636753 kernel: audit: type=1334 audit(1742235868.496:84): prog-id=3 op=UNLOAD Mar 17 18:24:28.636764 kernel: audit: type=1334 audit(1742235868.497:85): prog-id=13 op=LOAD Mar 17 18:24:28.636774 kernel: audit: type=1334 audit(1742235868.497:86): prog-id=14 op=LOAD Mar 17 18:24:28.636785 kernel: audit: type=1334 audit(1742235868.497:87): prog-id=4 op=UNLOAD Mar 17 18:24:28.636794 kernel: audit: type=1334 audit(1742235868.498:88): prog-id=5 op=UNLOAD Mar 17 18:24:28.636804 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Mar 17 18:24:28.636814 kernel: audit: type=1334 audit(1742235868.498:89): prog-id=15 op=LOAD Mar 17 18:24:28.636825 systemd[1]: Stopped initrd-switch-root.service. Mar 17 18:24:28.636835 kernel: audit: type=1334 audit(1742235868.498:90): prog-id=12 op=UNLOAD Mar 17 18:24:28.636845 kernel: audit: type=1334 audit(1742235868.499:91): prog-id=16 op=LOAD Mar 17 18:24:28.636856 kernel: audit: type=1334 audit(1742235868.499:92): prog-id=17 op=LOAD Mar 17 18:24:28.636868 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:24:28.636879 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:24:28.636890 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:24:28.636904 systemd[1]: Created slice system-getty.slice. Mar 17 18:24:28.636915 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:24:28.636926 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:24:28.636937 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:24:28.636948 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:24:28.636958 systemd[1]: Created slice user.slice. Mar 17 18:24:28.636970 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:24:28.636980 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:24:28.636990 systemd[1]: Set up automount boot.automount. Mar 17 18:24:28.637001 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:24:28.637011 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:24:28.637021 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:24:28.637033 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:24:28.637045 systemd[1]: Reached target integritysetup.target. Mar 17 18:24:28.637056 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:24:28.637067 systemd[1]: Reached target remote-fs.target. Mar 17 18:24:28.637084 systemd[1]: Reached target slices.target. Mar 17 18:24:28.637095 systemd[1]: Reached target swap.target. Mar 17 18:24:28.637106 systemd[1]: Reached target torcx.target. Mar 17 18:24:28.637116 systemd[1]: Reached target veritysetup.target. Mar 17 18:24:28.637127 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:24:28.637138 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:24:28.637149 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:24:28.637160 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:24:28.637171 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:24:28.637181 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:24:28.637192 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:24:28.637203 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:24:28.637214 systemd[1]: Mounting media.mount... Mar 17 18:24:28.637224 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:24:28.637238 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:24:28.637249 systemd[1]: Mounting tmp.mount... Mar 17 18:24:28.637260 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:24:28.637271 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:24:28.637281 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:24:28.637293 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:24:28.637304 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:24:28.637314 systemd[1]: Starting modprobe@drm.service... Mar 17 18:24:28.637324 systemd[1]: Starting modprobe@efi_pstore.service... 
Mar 17 18:24:28.637336 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:24:28.637347 systemd[1]: Starting modprobe@loop.service... Mar 17 18:24:28.637358 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:24:28.637369 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:24:28.637384 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:24:28.637395 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:24:28.637405 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:24:28.637416 systemd[1]: Stopped systemd-journald.service. Mar 17 18:24:28.637426 kernel: fuse: init (API version 7.34) Mar 17 18:24:28.637438 kernel: loop: module loaded Mar 17 18:24:28.637449 systemd[1]: Starting systemd-journald.service... Mar 17 18:24:28.637459 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:24:28.637470 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:24:28.637480 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:24:28.637490 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:24:28.637501 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:24:28.637517 systemd[1]: Stopped verity-setup.service. Mar 17 18:24:28.637529 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:24:28.637541 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:24:28.637552 systemd[1]: Mounted media.mount. Mar 17 18:24:28.637563 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:24:28.637573 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:24:28.637584 systemd[1]: Mounted tmp.mount. Mar 17 18:24:28.637594 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:24:28.637605 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:24:28.637616 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:24:28.637627 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:24:28.637637 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:24:28.637650 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:24:28.637661 systemd[1]: Finished modprobe@drm.service. Mar 17 18:24:28.637672 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:24:28.637685 systemd-journald[1001]: Journal started Mar 17 18:24:28.637728 systemd-journald[1001]: Runtime Journal (/run/log/journal/56f6463fed4544d6b78febc8ad04b46e) is 6.0M, max 48.7M, 42.6M free. 
Mar 17 18:24:26.640000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:24:26.713000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:24:26.713000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:24:26.714000 audit: BPF prog-id=10 op=LOAD Mar 17 18:24:26.714000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:24:26.714000 audit: BPF prog-id=11 op=LOAD Mar 17 18:24:26.714000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:24:26.753000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:24:26.753000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:24:26.753000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:24:26.754000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:24:26.754000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000022030 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:24:26.754000 audit: CWD cwd="/" Mar 17 18:24:26.754000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:24:26.754000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:24:26.754000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:24:28.496000 audit: BPF prog-id=12 op=LOAD Mar 17 18:24:28.496000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:24:28.497000 audit: BPF prog-id=13 op=LOAD Mar 17 18:24:28.497000 audit: BPF prog-id=14 op=LOAD Mar 17 18:24:28.497000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:24:28.498000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:24:28.498000 audit: BPF prog-id=15 op=LOAD Mar 17 18:24:28.498000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:24:28.499000 
audit: BPF prog-id=16 op=LOAD Mar 17 18:24:28.499000 audit: BPF prog-id=17 op=LOAD Mar 17 18:24:28.499000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:24:28.499000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:24:28.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.515000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:24:28.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.600000 audit: BPF prog-id=18 op=LOAD Mar 17 18:24:28.600000 audit: BPF prog-id=19 op=LOAD Mar 17 18:24:28.600000 audit: BPF prog-id=20 op=LOAD Mar 17 18:24:28.600000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:24:28.600000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:24:28.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.629000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:24:28.629000 audit[1001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffdc11b0c0 a2=4000 a3=1 items=0 ppid=1 pid=1001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:24:28.629000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:24:28.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:28.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.495314 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:24:26.751785 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:24:28.495327 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:24:26.752020 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:24:28.500663 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:24:26.752038 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:24:26.752067 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:24:26.752076 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:24:26.752106 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:24:26.752117 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:24:28.639738 systemd[1]: Started systemd-journald.service. 
Mar 17 18:24:26.752301 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:24:26.752332 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:24:26.752344 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:24:26.753943 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:24:28.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:26.753979 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:24:26.753997 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:24:26.754010 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:24:26.754027 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:24:26.754041 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:26Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:24:28.186753 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:24:28.640468 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:24:28.187013 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:24:28.640614 systemd[1]: Finished modprobe@efi_pstore.service. 
Mar 17 18:24:28.187116 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:24:28.187277 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:24:28.187329 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:24:28.187395 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:24:28Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:24:28.640000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.641974 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:24:28.642126 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:24:28.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.643308 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:24:28.643467 systemd[1]: Finished modprobe@loop.service. Mar 17 18:24:28.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.644685 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:24:28.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.645944 systemd[1]: Finished systemd-network-generator.service. 
Mar 17 18:24:28.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.647364 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:24:28.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.648830 systemd[1]: Reached target network-pre.target. Mar 17 18:24:28.650959 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:24:28.653155 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:24:28.654060 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:24:28.655791 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:24:28.657681 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:24:28.658646 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:24:28.659743 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:24:28.660720 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:24:28.661828 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:24:28.668573 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:24:28.672157 systemd-journald[1001]: Time spent on flushing to /var/log/journal/56f6463fed4544d6b78febc8ad04b46e is 12.516ms for 976 entries. Mar 17 18:24:28.672157 systemd-journald[1001]: System Journal (/var/log/journal/56f6463fed4544d6b78febc8ad04b46e) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:24:28.706397 systemd-journald[1001]: Received client request to flush runtime journal. Mar 17 18:24:28.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.672700 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:24:28.674145 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:24:28.707142 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:24:28.675199 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:24:28.677428 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:24:28.690341 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:24:28.692944 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:24:28.703520 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:24:28.707425 systemd[1]: Finished systemd-journal-flush.service. 
Mar 17 18:24:28.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:28.716957 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:24:28.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.054203 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:24:29.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.055000 audit: BPF prog-id=21 op=LOAD Mar 17 18:24:29.055000 audit: BPF prog-id=22 op=LOAD Mar 17 18:24:29.055000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:24:29.055000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:24:29.056443 systemd[1]: Starting systemd-udevd.service... Mar 17 18:24:29.084033 systemd-udevd[1034]: Using default interface naming scheme 'v252'. Mar 17 18:24:29.103816 systemd[1]: Started systemd-udevd.service. Mar 17 18:24:29.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.105000 audit: BPF prog-id=23 op=LOAD Mar 17 18:24:29.107616 systemd[1]: Starting systemd-networkd.service... Mar 17 18:24:29.117000 audit: BPF prog-id=24 op=LOAD Mar 17 18:24:29.117000 audit: BPF prog-id=25 op=LOAD Mar 17 18:24:29.117000 audit: BPF prog-id=26 op=LOAD Mar 17 18:24:29.119288 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:24:29.120662 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:24:29.153279 systemd[1]: Started systemd-userdbd.service. Mar 17 18:24:29.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.159554 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:24:29.213942 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:24:29.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.216143 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:24:29.216946 systemd-networkd[1043]: lo: Link UP Mar 17 18:24:29.216951 systemd-networkd[1043]: lo: Gained carrier Mar 17 18:24:29.217471 systemd-networkd[1043]: Enumeration completed Mar 17 18:24:29.217568 systemd[1]: Started systemd-networkd.service. Mar 17 18:24:29.217688 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:24:29.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:29.221204 systemd-networkd[1043]: eth0: Link UP Mar 17 18:24:29.221212 systemd-networkd[1043]: eth0: Gained carrier Mar 17 18:24:29.227039 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:24:29.243655 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.98/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:24:29.254378 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:24:29.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.255463 systemd[1]: Reached target cryptsetup.target. Mar 17 18:24:29.257491 systemd[1]: Starting lvm2-activation.service... Mar 17 18:24:29.261031 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:24:29.301362 systemd[1]: Finished lvm2-activation.service. Mar 17 18:24:29.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.302347 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:24:29.303336 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:24:29.303366 systemd[1]: Reached target local-fs.target. Mar 17 18:24:29.304168 systemd[1]: Reached target machines.target. Mar 17 18:24:29.306084 systemd[1]: Starting ldconfig.service... Mar 17 18:24:29.307091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.307140 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.308121 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:24:29.310147 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:24:29.312207 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:24:29.315617 systemd[1]: Starting systemd-sysext.service... Mar 17 18:24:29.316679 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) Mar 17 18:24:29.317781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:24:29.329880 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:24:29.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.333829 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:24:29.338289 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:24:29.338486 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:24:29.376280 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:24:29.377203 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:24:29.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:24:29.380534 kernel: loop0: detected capacity change from 0 to 189592 Mar 17 18:24:29.381795 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) Mar 17 18:24:29.381795 systemd-fsck[1077]: /dev/vda1: 236 files, 117179/258078 clusters Mar 17 18:24:29.383667 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:24:29.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.386821 systemd[1]: Mounting boot.mount... Mar 17 18:24:29.393469 systemd[1]: Mounted boot.mount. Mar 17 18:24:29.393589 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:24:29.403230 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:24:29.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.411532 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 18:24:29.415653 (sd-sysext)[1083]: Using extensions 'kubernetes'. Mar 17 18:24:29.415983 (sd-sysext)[1083]: Merged extensions into '/usr'. Mar 17 18:24:29.440777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.442120 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:24:29.444116 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:24:29.446059 systemd[1]: Starting modprobe@loop.service... Mar 17 18:24:29.447064 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.447197 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.447965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:24:29.448100 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:24:29.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.449663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:24:29.449779 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:24:29.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.451288 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:24:29.451410 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:24:29.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.453020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:24:29.453134 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.489652 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:24:29.497583 systemd[1]: Finished ldconfig.service. Mar 17 18:24:29.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.620585 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:24:29.625653 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:24:29.627539 systemd[1]: Finished systemd-sysext.service. Mar 17 18:24:29.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.629743 systemd[1]: Starting ensure-sysext.service... Mar 17 18:24:29.631615 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:24:29.636124 systemd[1]: Reloading. Mar 17 18:24:29.647588 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:24:29.653841 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:24:29.657270 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:24:29.664639 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-03-17T18:24:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:24:29.664669 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-03-17T18:24:29Z" level=info msg="torcx already run" Mar 17 18:24:29.726540 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:24:29.726560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:24:29.741767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:24:29.782000 audit: BPF prog-id=27 op=LOAD Mar 17 18:24:29.782000 audit: BPF prog-id=28 op=LOAD Mar 17 18:24:29.782000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:24:29.782000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:24:29.784000 audit: BPF prog-id=29 op=LOAD Mar 17 18:24:29.784000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:24:29.784000 audit: BPF prog-id=30 op=LOAD Mar 17 18:24:29.784000 audit: BPF prog-id=31 op=LOAD Mar 17 18:24:29.784000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:24:29.784000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:24:29.785000 audit: BPF prog-id=32 op=LOAD Mar 17 18:24:29.785000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:24:29.785000 audit: BPF prog-id=33 op=LOAD Mar 17 18:24:29.785000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:24:29.785000 audit: BPF prog-id=34 op=LOAD Mar 17 18:24:29.785000 audit: BPF prog-id=35 op=LOAD Mar 17 18:24:29.785000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:24:29.785000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:24:29.788908 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:24:29.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.793246 systemd[1]: Starting audit-rules.service... Mar 17 18:24:29.795140 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:24:29.797394 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:24:29.798000 audit: BPF prog-id=36 op=LOAD Mar 17 18:24:29.801000 audit: BPF prog-id=37 op=LOAD Mar 17 18:24:29.800257 systemd[1]: Starting systemd-resolved.service... Mar 17 18:24:29.803154 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:24:29.805566 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:24:29.807419 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:24:29.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.810891 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:24:29.813000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.816485 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.817911 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:24:29.821325 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:24:29.823456 systemd[1]: Starting modprobe@loop.service... Mar 17 18:24:29.824303 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.824531 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.824686 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:24:29.825908 systemd[1]: Finished systemd-journal-catalog-update.service. 
Mar 17 18:24:29.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.827375 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:24:29.827504 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:24:29.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.828881 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:24:29.828988 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:24:29.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.830370 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:24:29.830482 systemd[1]: Finished modprobe@loop.service. Mar 17 18:24:29.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.831962 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:24:29.832074 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.833474 systemd[1]: Starting systemd-update-done.service... Mar 17 18:24:29.835097 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:24:29.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.838274 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.839738 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:24:29.842005 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:24:29.844016 systemd[1]: Starting modprobe@loop.service... Mar 17 18:24:29.845034 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.845188 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:24:29.845312 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:24:29.846195 systemd[1]: Finished systemd-update-done.service. Mar 17 18:24:29.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.847489 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:24:29.847646 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:24:29.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.848989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:24:29.849103 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:24:29.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.850553 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:24:29.850724 systemd[1]: Finished modprobe@loop.service. Mar 17 18:24:29.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:24:29.852114 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:24:29.852206 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.854736 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.856018 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:24:29.392157 systemd[1]: Starting modprobe@drm.service... Mar 17 18:24:29.412823 systemd-journald[1001]: Time jumped backwards, rotating. 
Mar 17 18:24:29.392000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:24:29.392000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffa1a8bb0 a2=420 a3=0 items=0 ppid=1149 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:24:29.392000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:24:29.413106 augenrules[1176]: No rules Mar 17 18:24:29.393598 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:24:29.393647 systemd-timesyncd[1157]: Initial clock synchronization to Mon 2025-03-17 18:24:29.391735 UTC. Mar 17 18:24:29.394561 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:24:29.396669 systemd[1]: Starting modprobe@loop.service... Mar 17 18:24:29.397639 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.397828 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.399174 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:24:29.400235 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:24:29.401032 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:24:29.401877 systemd-resolved[1153]: Positive Trust Anchors: Mar 17 18:24:29.401884 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:24:29.401912 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:24:29.402630 systemd[1]: Finished audit-rules.service. Mar 17 18:24:29.404500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:24:29.404633 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:24:29.405889 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:24:29.406001 systemd[1]: Finished modprobe@drm.service. Mar 17 18:24:29.407388 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:24:29.407504 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:24:29.409216 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:24:29.409342 systemd[1]: Finished modprobe@loop.service. Mar 17 18:24:29.410943 systemd[1]: Reached target time-set.target. Mar 17 18:24:29.411885 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:24:29.411918 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.412206 systemd[1]: Finished ensure-sysext.service. Mar 17 18:24:29.415057 systemd-resolved[1153]: Defaulting to hostname 'linux'. 
Mar 17 18:24:29.416352 systemd[1]: Started systemd-resolved.service. Mar 17 18:24:29.417208 systemd[1]: Reached target network.target. Mar 17 18:24:29.417958 systemd[1]: Reached target nss-lookup.target. Mar 17 18:24:29.418726 systemd[1]: Reached target sysinit.target. Mar 17 18:24:29.419521 systemd[1]: Started motdgen.path. Mar 17 18:24:29.420448 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:24:29.421718 systemd[1]: Started logrotate.timer. Mar 17 18:24:29.422483 systemd[1]: Started mdadm.timer. Mar 17 18:24:29.423157 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:24:29.424040 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:24:29.424069 systemd[1]: Reached target paths.target. Mar 17 18:24:29.424795 systemd[1]: Reached target timers.target. Mar 17 18:24:29.425835 systemd[1]: Listening on dbus.socket. Mar 17 18:24:29.427497 systemd[1]: Starting docker.socket... Mar 17 18:24:29.430656 systemd[1]: Listening on sshd.socket. Mar 17 18:24:29.431466 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.431962 systemd[1]: Listening on docker.socket. Mar 17 18:24:29.432782 systemd[1]: Reached target sockets.target. Mar 17 18:24:29.433511 systemd[1]: Reached target basic.target. Mar 17 18:24:29.434306 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.434341 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:24:29.435275 systemd[1]: Starting containerd.service... Mar 17 18:24:29.436943 systemd[1]: Starting dbus.service... Mar 17 18:24:29.438655 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:24:29.440635 systemd[1]: Starting extend-filesystems.service... Mar 17 18:24:29.441558 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:24:29.443067 systemd[1]: Starting motdgen.service... Mar 17 18:24:29.444817 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:24:29.446604 systemd[1]: Starting sshd-keygen.service... Mar 17 18:24:29.449614 systemd[1]: Starting systemd-logind.service... Mar 17 18:24:29.450797 jq[1192]: false Mar 17 18:24:29.452925 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:24:29.453058 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:24:29.453737 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:24:29.454575 systemd[1]: Starting update-engine.service... Mar 17 18:24:29.456495 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:24:29.461360 jq[1206]: true Mar 17 18:24:29.459988 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:24:29.460200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:24:29.460566 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 17 18:24:29.460731 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:24:29.470415 jq[1211]: true Mar 17 18:24:29.473742 extend-filesystems[1193]: Found loop1 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda1 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda2 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda3 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found usr Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda4 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda6 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda7 Mar 17 18:24:29.473742 extend-filesystems[1193]: Found vda9 Mar 17 18:24:29.473742 extend-filesystems[1193]: Checking size of /dev/vda9 Mar 17 18:24:29.515152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:24:29.515424 extend-filesystems[1193]: Resized partition /dev/vda9 Mar 17 18:24:29.475043 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:24:29.520649 extend-filesystems[1232]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:24:29.475218 systemd[1]: Finished motdgen.service. Mar 17 18:24:29.519318 systemd-logind[1201]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:24:29.519684 systemd-logind[1201]: New seat seat0. Mar 17 18:24:29.543209 dbus-daemon[1191]: [system] SELinux support is enabled Mar 17 18:24:29.543385 systemd[1]: Started dbus.service. Mar 17 18:24:29.545788 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:24:29.545821 systemd[1]: Reached target system-config.target. Mar 17 18:24:29.546898 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:24:29.547000 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 17 18:24:29.546921 systemd[1]: Reached target user-config.target. Mar 17 18:24:29.547846 systemd[1]: Started systemd-logind.service. Mar 17 18:24:29.550749 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:24:29.582127 update_engine[1205]: I0317 18:24:29.577385 1205 main.cc:92] Flatcar Update Engine starting Mar 17 18:24:29.582440 extend-filesystems[1232]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:24:29.582440 extend-filesystems[1232]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:24:29.582440 extend-filesystems[1232]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:24:29.587445 extend-filesystems[1193]: Resized filesystem in /dev/vda9 Mar 17 18:24:29.583196 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:24:29.588557 update_engine[1205]: I0317 18:24:29.587805 1205 update_check_scheduler.cc:74] Next update check in 6m43s Mar 17 18:24:29.583366 systemd[1]: Finished extend-filesystems.service. Mar 17 18:24:29.587757 systemd[1]: Started update-engine.service. Mar 17 18:24:29.589861 env[1212]: time="2025-03-17T18:24:29.589811220Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:24:29.591189 systemd[1]: Started locksmithd.service. Mar 17 18:24:29.592568 bash[1234]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:24:29.593243 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Mar 17 18:24:29.608950 env[1212]: time="2025-03-17T18:24:29.608894100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:24:29.609223 env[1212]: time="2025-03-17T18:24:29.609204540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.610662 env[1212]: time="2025-03-17T18:24:29.610594980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:24:29.610662 env[1212]: time="2025-03-17T18:24:29.610640900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.610914 env[1212]: time="2025-03-17T18:24:29.610877380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:24:29.610914 env[1212]: time="2025-03-17T18:24:29.610903620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.610971 env[1212]: time="2025-03-17T18:24:29.610917660Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:24:29.610971 env[1212]: time="2025-03-17T18:24:29.610927500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.611009 env[1212]: time="2025-03-17T18:24:29.611000980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.611624 env[1212]: time="2025-03-17T18:24:29.611583300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:24:29.612718 env[1212]: time="2025-03-17T18:24:29.612675820Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:24:29.612718 env[1212]: time="2025-03-17T18:24:29.612713660Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:24:29.612810 env[1212]: time="2025-03-17T18:24:29.612790220Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:24:29.612810 env[1212]: time="2025-03-17T18:24:29.612802540Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:24:29.620654 env[1212]: time="2025-03-17T18:24:29.620604620Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:24:29.620654 env[1212]: time="2025-03-17T18:24:29.620651820Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:24:29.620783 env[1212]: time="2025-03-17T18:24:29.620666180Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Mar 17 18:24:29.620783 env[1212]: time="2025-03-17T18:24:29.620707620Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.620783 env[1212]: time="2025-03-17T18:24:29.620737020Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.620783 env[1212]: time="2025-03-17T18:24:29.620750900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.620783 env[1212]: time="2025-03-17T18:24:29.620763460Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621139 env[1212]: time="2025-03-17T18:24:29.621102660Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621139 env[1212]: time="2025-03-17T18:24:29.621129380Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621347 env[1212]: time="2025-03-17T18:24:29.621144180Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621347 env[1212]: time="2025-03-17T18:24:29.621158660Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621347 env[1212]: time="2025-03-17T18:24:29.621173060Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:24:29.621347 env[1212]: time="2025-03-17T18:24:29.621328060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:24:29.621425 env[1212]: time="2025-03-17T18:24:29.621400620Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:24:29.621665 env[1212]: time="2025-03-17T18:24:29.621641660Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:24:29.621747 env[1212]: time="2025-03-17T18:24:29.621675300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.621773 env[1212]: time="2025-03-17T18:24:29.621689820Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:24:29.621939 env[1212]: time="2025-03-17T18:24:29.621926260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.621966 env[1212]: time="2025-03-17T18:24:29.621943260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.621966 env[1212]: time="2025-03-17T18:24:29.621959900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622015 env[1212]: time="2025-03-17T18:24:29.621972020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622015 env[1212]: time="2025-03-17T18:24:29.621986220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622015 env[1212]: time="2025-03-17T18:24:29.622000260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Mar 17 18:24:29.622015 env[1212]: time="2025-03-17T18:24:29.622012220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622096 env[1212]: time="2025-03-17T18:24:29.622024700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622096 env[1212]: time="2025-03-17T18:24:29.622038420Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:24:29.622218 env[1212]: time="2025-03-17T18:24:29.622171940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622218 env[1212]: time="2025-03-17T18:24:29.622197420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622218 env[1212]: time="2025-03-17T18:24:29.622209700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:24:29.622285 env[1212]: time="2025-03-17T18:24:29.622221340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:24:29.622285 env[1212]: time="2025-03-17T18:24:29.622236500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:24:29.622285 env[1212]: time="2025-03-17T18:24:29.622251180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:24:29.622285 env[1212]: time="2025-03-17T18:24:29.622268500Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:24:29.622360 env[1212]: time="2025-03-17T18:24:29.622302700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:24:29.622563 env[1212]: time="2025-03-17T18:24:29.622503660Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.622573500Z" level=info msg="Connect containerd service" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.622605780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.623743740Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.624202620Z" level=info msg="Start subscribing containerd event" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.624280460Z" level=info msg="Start recovering state" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.624331260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.624367780Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.624368980Z" level=info msg="Start event monitor" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.625623900Z" level=info msg="containerd successfully booted in 0.053305s" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.628238500Z" level=info msg="Start snapshots syncer" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.628286580Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:24:29.628657 env[1212]: time="2025-03-17T18:24:29.628302700Z" level=info msg="Start streaming server" Mar 17 18:24:29.624522 systemd[1]: Started containerd.service. Mar 17 18:24:29.648384 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:24:30.796869 systemd-networkd[1043]: eth0: Gained IPv6LL Mar 17 18:24:30.798520 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:24:30.799779 systemd[1]: Reached target network-online.target. Mar 17 18:24:30.805101 systemd[1]: Starting kubelet.service... Mar 17 18:24:30.981057 sshd_keygen[1213]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:24:30.999549 systemd[1]: Finished sshd-keygen.service. Mar 17 18:24:31.002012 systemd[1]: Starting issuegen.service... Mar 17 18:24:31.006744 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:24:31.006905 systemd[1]: Finished issuegen.service. Mar 17 18:24:31.009210 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:24:31.015597 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:24:31.018389 systemd[1]: Started getty@tty1.service. Mar 17 18:24:31.020622 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:24:31.021773 systemd[1]: Reached target getty.target. Mar 17 18:24:31.358310 systemd[1]: Started kubelet.service. Mar 17 18:24:31.359619 systemd[1]: Reached target multi-user.target. Mar 17 18:24:31.361774 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:24:31.368847 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:24:31.369015 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:24:31.370189 systemd[1]: Startup finished in 629ms (kernel) + 4.021s (initrd) + 5.233s (userspace) = 9.884s. Mar 17 18:24:31.804564 kubelet[1270]: E0317 18:24:31.804461 1270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:24:31.806552 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:24:31.806686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:24:35.059196 systemd[1]: Created slice system-sshd.slice. Mar 17 18:24:35.060291 systemd[1]: Started sshd@0-10.0.0.98:22-10.0.0.1:36750.service. Mar 17 18:24:35.112665 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 36750 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:35.115289 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.124862 systemd[1]: Created slice user-500.slice. Mar 17 18:24:35.126148 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:24:35.131383 systemd-logind[1201]: New session 1 of user core. 
Mar 17 18:24:35.135892 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:24:35.137241 systemd[1]: Starting user@500.service... Mar 17 18:24:35.146739 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.212781 systemd[1282]: Queued start job for default target default.target. Mar 17 18:24:35.213261 systemd[1282]: Reached target paths.target. Mar 17 18:24:35.213280 systemd[1282]: Reached target sockets.target. Mar 17 18:24:35.213292 systemd[1282]: Reached target timers.target. Mar 17 18:24:35.213302 systemd[1282]: Reached target basic.target. Mar 17 18:24:35.213350 systemd[1282]: Reached target default.target. Mar 17 18:24:35.213372 systemd[1282]: Startup finished in 60ms. Mar 17 18:24:35.213882 systemd[1]: Started user@500.service. Mar 17 18:24:35.214856 systemd[1]: Started session-1.scope. Mar 17 18:24:35.270558 systemd[1]: Started sshd@1-10.0.0.98:22-10.0.0.1:36762.service. Mar 17 18:24:35.318409 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 36762 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:35.319918 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.324052 systemd-logind[1201]: New session 2 of user core. Mar 17 18:24:35.324867 systemd[1]: Started session-2.scope. Mar 17 18:24:35.380194 sshd[1291]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:35.382760 systemd[1]: sshd@1-10.0.0.98:22-10.0.0.1:36762.service: Deactivated successfully. Mar 17 18:24:35.383332 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:24:35.383828 systemd-logind[1201]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:24:35.384849 systemd[1]: Started sshd@2-10.0.0.98:22-10.0.0.1:36764.service. Mar 17 18:24:35.385434 systemd-logind[1201]: Removed session 2. Mar 17 18:24:35.421008 sshd[1297]: Accepted publickey for core from 10.0.0.1 port 36764 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:35.422293 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.425778 systemd-logind[1201]: New session 3 of user core. Mar 17 18:24:35.426630 systemd[1]: Started session-3.scope. Mar 17 18:24:35.476197 sshd[1297]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:35.480190 systemd[1]: sshd@2-10.0.0.98:22-10.0.0.1:36764.service: Deactivated successfully. Mar 17 18:24:35.480766 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:24:35.481241 systemd-logind[1201]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:24:35.482295 systemd[1]: Started sshd@3-10.0.0.98:22-10.0.0.1:36778.service. Mar 17 18:24:35.482989 systemd-logind[1201]: Removed session 3. Mar 17 18:24:35.524971 sshd[1303]: Accepted publickey for core from 10.0.0.1 port 36778 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:35.526590 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.529851 systemd-logind[1201]: New session 4 of user core. Mar 17 18:24:35.530640 systemd[1]: Started session-4.scope. Mar 17 18:24:35.583566 sshd[1303]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:35.587064 systemd[1]: Started sshd@4-10.0.0.98:22-10.0.0.1:36788.service. Mar 17 18:24:35.589171 systemd[1]: sshd@3-10.0.0.98:22-10.0.0.1:36778.service: Deactivated successfully. Mar 17 18:24:35.589783 systemd[1]: session-4.scope: Deactivated successfully. 
Mar 17 18:24:35.590292 systemd-logind[1201]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:24:35.591093 systemd-logind[1201]: Removed session 4. Mar 17 18:24:35.623626 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:35.625169 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:35.628931 systemd-logind[1201]: New session 5 of user core. Mar 17 18:24:35.629356 systemd[1]: Started session-5.scope. Mar 17 18:24:35.690628 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:24:35.690864 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:24:35.702339 systemd[1]: Starting coreos-metadata.service... Mar 17 18:24:35.708619 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 18:24:35.708822 systemd[1]: Finished coreos-metadata.service. Mar 17 18:24:36.216444 systemd[1]: Stopped kubelet.service. Mar 17 18:24:36.218388 systemd[1]: Starting kubelet.service... Mar 17 18:24:36.241516 systemd[1]: Reloading. Mar 17 18:24:36.292049 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-03-17T18:24:36Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:24:36.292351 /usr/lib/systemd/system-generators/torcx-generator[1369]: time="2025-03-17T18:24:36Z" level=info msg="torcx already run" Mar 17 18:24:36.388303 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:24:36.388322 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:24:36.403963 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:24:36.466873 systemd[1]: Started kubelet.service. Mar 17 18:24:36.469927 systemd[1]: Stopping kubelet.service... Mar 17 18:24:36.470358 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:24:36.470627 systemd[1]: Stopped kubelet.service. Mar 17 18:24:36.472267 systemd[1]: Starting kubelet.service... Mar 17 18:24:36.556351 systemd[1]: Started kubelet.service. Mar 17 18:24:36.597641 kubelet[1415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:24:36.597641 kubelet[1415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:24:36.597641 kubelet[1415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:24:36.597977 kubelet[1415]: I0317 18:24:36.597840 1415 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:24:37.066764 kubelet[1415]: I0317 18:24:37.066724 1415 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:24:37.066764 kubelet[1415]: I0317 18:24:37.066756 1415 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:24:37.067047 kubelet[1415]: I0317 18:24:37.067019 1415 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:24:37.128287 kubelet[1415]: I0317 18:24:37.128251 1415 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:24:37.136748 kubelet[1415]: E0317 18:24:37.136713 1415 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:24:37.136748 kubelet[1415]: I0317 18:24:37.136747 1415 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:24:37.140509 kubelet[1415]: I0317 18:24:37.140484 1415 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:24:37.143315 kubelet[1415]: I0317 18:24:37.143287 1415 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:24:37.143576 kubelet[1415]: I0317 18:24:37.143542 1415 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:24:37.143862 kubelet[1415]: I0317 18:24:37.143650 1415 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.98","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:24:37.144104 kubelet[1415]: I0317 18:24:37.144090 1415 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 17 18:24:37.144154 kubelet[1415]: I0317 18:24:37.144146 1415 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:24:37.144382 kubelet[1415]: I0317 18:24:37.144369 1415 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:37.148041 kubelet[1415]: I0317 18:24:37.148017 1415 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:24:37.148136 kubelet[1415]: I0317 18:24:37.148123 1415 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:24:37.148274 kubelet[1415]: I0317 18:24:37.148263 1415 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:24:37.148331 kubelet[1415]: I0317 18:24:37.148320 1415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:24:37.148400 kubelet[1415]: E0317 18:24:37.148373 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:37.148400 kubelet[1415]: E0317 18:24:37.148322 1415 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:37.150687 kubelet[1415]: I0317 18:24:37.150657 1415 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:24:37.152442 kubelet[1415]: I0317 18:24:37.152416 1415 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:24:37.153269 kubelet[1415]: W0317 18:24:37.153239 1415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:24:37.153991 kubelet[1415]: I0317 18:24:37.153971 1415 server.go:1269] "Started kubelet" Mar 17 18:24:37.154609 kubelet[1415]: I0317 18:24:37.154561 1415 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:24:37.155845 kubelet[1415]: I0317 18:24:37.155810 1415 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:24:37.158506 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:24:37.158710 kubelet[1415]: I0317 18:24:37.158606 1415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:24:37.167718 kubelet[1415]: I0317 18:24:37.167679 1415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:24:37.168029 kubelet[1415]: I0317 18:24:37.167972 1415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:24:37.168220 kubelet[1415]: I0317 18:24:37.168200 1415 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:24:37.169247 kubelet[1415]: I0317 18:24:37.169226 1415 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:24:37.169546 kubelet[1415]: I0317 18:24:37.169522 1415 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:24:37.169595 kubelet[1415]: I0317 18:24:37.169587 1415 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:24:37.169795 kubelet[1415]: I0317 18:24:37.169770 1415 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:24:37.169879 kubelet[1415]: E0317 18:24:37.169849 1415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.98\" not found" Mar 17 18:24:37.170736 kubelet[1415]: E0317 18:24:37.170289 1415 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:24:37.170878 kubelet[1415]: I0317 18:24:37.170851 1415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:24:37.174113 kubelet[1415]: I0317 18:24:37.174090 1415 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:24:37.174573 kubelet[1415]: E0317 18:24:37.174518 1415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.98\" not found" node="10.0.0.98" Mar 17 18:24:37.184859 kubelet[1415]: I0317 18:24:37.184822 1415 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:24:37.184859 kubelet[1415]: I0317 18:24:37.184842 1415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:24:37.184859 kubelet[1415]: I0317 18:24:37.184863 1415 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:37.254930 kubelet[1415]: I0317 18:24:37.254881 1415 policy_none.go:49] "None policy: Start" Mar 17 18:24:37.255737 kubelet[1415]: I0317 18:24:37.255721 1415 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:24:37.255832 kubelet[1415]: I0317 18:24:37.255822 1415 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:24:37.262012 systemd[1]: Created slice kubepods.slice. Mar 17 18:24:37.266410 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:24:37.269009 systemd[1]: Created slice kubepods-besteffort.slice. 
Mar 17 18:24:37.269938 kubelet[1415]: E0317 18:24:37.269916 1415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.98\" not found" Mar 17 18:24:37.286714 kubelet[1415]: I0317 18:24:37.286677 1415 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:24:37.286997 kubelet[1415]: I0317 18:24:37.286876 1415 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:24:37.286997 kubelet[1415]: I0317 18:24:37.286895 1415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:24:37.287478 kubelet[1415]: I0317 18:24:37.287444 1415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:24:37.288181 kubelet[1415]: E0317 18:24:37.288150 1415 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.98\" not found" Mar 17 18:24:37.311839 kubelet[1415]: I0317 18:24:37.311787 1415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:24:37.313064 kubelet[1415]: I0317 18:24:37.313036 1415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:24:37.313373 kubelet[1415]: I0317 18:24:37.313346 1415 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:24:37.313636 kubelet[1415]: I0317 18:24:37.313621 1415 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:24:37.314620 kubelet[1415]: E0317 18:24:37.314399 1415 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 18:24:37.388177 kubelet[1415]: I0317 18:24:37.388087 1415 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.98" Mar 17 18:24:37.396508 kubelet[1415]: I0317 18:24:37.396474 1415 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.98" Mar 17 18:24:37.431838 kubelet[1415]: I0317 18:24:37.431795 1415 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 18:24:37.432145 env[1212]: time="2025-03-17T18:24:37.432106900Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 18:24:37.432381 kubelet[1415]: I0317 18:24:37.432303 1415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 18:24:37.590463 sudo[1312]: pam_unix(sudo:session): session closed for user root Mar 17 18:24:37.592237 sshd[1308]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:37.594897 systemd[1]: sshd@4-10.0.0.98:22-10.0.0.1:36788.service: Deactivated successfully. Mar 17 18:24:37.595551 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:24:37.596050 systemd-logind[1201]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:24:37.596987 systemd-logind[1201]: Removed session 5. 
Mar 17 18:24:38.069378 kubelet[1415]: I0317 18:24:38.069331 1415 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 18:24:38.069913 kubelet[1415]: W0317 18:24:38.069522 1415 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:24:38.069913 kubelet[1415]: W0317 18:24:38.069571 1415 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:24:38.069913 kubelet[1415]: W0317 18:24:38.069790 1415 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Mar 17 18:24:38.149305 kubelet[1415]: I0317 18:24:38.149278 1415 apiserver.go:52] "Watching apiserver" Mar 17 18:24:38.149452 kubelet[1415]: E0317 18:24:38.149320 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:38.157729 systemd[1]: Created slice kubepods-besteffort-pode3d1b83b_bd38_4d09_aa53_ff308e788006.slice. Mar 17 18:24:38.170183 kubelet[1415]: I0317 18:24:38.170156 1415 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:24:38.173457 kubelet[1415]: I0317 18:24:38.173424 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-config-path\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173531 kubelet[1415]: I0317 18:24:38.173461 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-xtables-lock\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173531 kubelet[1415]: I0317 18:24:38.173494 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b472b8c-07f0-4e48-b039-a6b958371b88-clustermesh-secrets\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173531 kubelet[1415]: I0317 18:24:38.173512 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cni-path\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173531 kubelet[1415]: I0317 18:24:38.173527 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-etc-cni-netd\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173634 kubelet[1415]: 
I0317 18:24:38.173541 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-lib-modules\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173634 kubelet[1415]: I0317 18:24:38.173555 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3d1b83b-bd38-4d09-aa53-ff308e788006-lib-modules\") pod \"kube-proxy-6hk69\" (UID: \"e3d1b83b-bd38-4d09-aa53-ff308e788006\") " pod="kube-system/kube-proxy-6hk69" Mar 17 18:24:38.173634 kubelet[1415]: I0317 18:24:38.173569 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-run\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173634 kubelet[1415]: I0317 18:24:38.173584 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-cgroup\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173634 kubelet[1415]: I0317 18:24:38.173598 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-bpf-maps\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173634 kubelet[1415]: I0317 18:24:38.173611 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-hostproc\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173771 kubelet[1415]: I0317 18:24:38.173625 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89ftl\" (UniqueName: \"kubernetes.io/projected/e3d1b83b-bd38-4d09-aa53-ff308e788006-kube-api-access-89ftl\") pod \"kube-proxy-6hk69\" (UID: \"e3d1b83b-bd38-4d09-aa53-ff308e788006\") " pod="kube-system/kube-proxy-6hk69" Mar 17 18:24:38.173771 kubelet[1415]: I0317 18:24:38.173639 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-net\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173771 kubelet[1415]: I0317 18:24:38.173654 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-kernel\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173771 kubelet[1415]: I0317 18:24:38.173669 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-hubble-tls\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173771 kubelet[1415]: I0317 18:24:38.173688 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqr8\" (UniqueName: \"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-kube-api-access-jxqr8\") pod \"cilium-4h7cg\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " pod="kube-system/cilium-4h7cg" Mar 17 18:24:38.173865 kubelet[1415]: I0317 18:24:38.173713 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3d1b83b-bd38-4d09-aa53-ff308e788006-kube-proxy\") pod \"kube-proxy-6hk69\" (UID: \"e3d1b83b-bd38-4d09-aa53-ff308e788006\") " pod="kube-system/kube-proxy-6hk69" Mar 17 18:24:38.173865 kubelet[1415]: I0317 18:24:38.173727 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3d1b83b-bd38-4d09-aa53-ff308e788006-xtables-lock\") pod \"kube-proxy-6hk69\" (UID: \"e3d1b83b-bd38-4d09-aa53-ff308e788006\") " pod="kube-system/kube-proxy-6hk69" Mar 17 18:24:38.178078 systemd[1]: Created slice kubepods-burstable-pod6b472b8c_07f0_4e48_b039_a6b958371b88.slice. Mar 17 18:24:38.274766 kubelet[1415]: I0317 18:24:38.274725 1415 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:24:38.477861 kubelet[1415]: E0317 18:24:38.477752 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:38.479185 env[1212]: time="2025-03-17T18:24:38.479114420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hk69,Uid:e3d1b83b-bd38-4d09-aa53-ff308e788006,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:38.489021 kubelet[1415]: E0317 18:24:38.488971 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:38.489629 env[1212]: time="2025-03-17T18:24:38.489459940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h7cg,Uid:6b472b8c-07f0-4e48-b039-a6b958371b88,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:39.072475 env[1212]: time="2025-03-17T18:24:39.072420540Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.075646 env[1212]: time="2025-03-17T18:24:39.075612500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.076456 env[1212]: time="2025-03-17T18:24:39.076422300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.078083 env[1212]: time="2025-03-17T18:24:39.078043020Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.079508 env[1212]: time="2025-03-17T18:24:39.079471820Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.081101 env[1212]: time="2025-03-17T18:24:39.081077660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.083620 env[1212]: time="2025-03-17T18:24:39.083583380Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.085179 env[1212]: time="2025-03-17T18:24:39.085104460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:39.124764 env[1212]: time="2025-03-17T18:24:39.124468140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:39.124764 env[1212]: time="2025-03-17T18:24:39.124511300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:39.124764 env[1212]: time="2025-03-17T18:24:39.124521820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:39.124947 env[1212]: time="2025-03-17T18:24:39.124789780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07 pid=1479 runtime=io.containerd.runc.v2 Mar 17 18:24:39.125136 env[1212]: time="2025-03-17T18:24:39.125051300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:39.125136 env[1212]: time="2025-03-17T18:24:39.125088380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:39.125136 env[1212]: time="2025-03-17T18:24:39.125099180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:39.125408 env[1212]: time="2025-03-17T18:24:39.125357460Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/864f56985ee05b2fae0b6c4988c2e64f42c8a371680bcae403d713f0328bb34c pid=1478 runtime=io.containerd.runc.v2 Mar 17 18:24:39.149699 kubelet[1415]: E0317 18:24:39.149650 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:39.152794 systemd[1]: Started cri-containerd-ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07.scope. Mar 17 18:24:39.155059 systemd[1]: Started cri-containerd-864f56985ee05b2fae0b6c4988c2e64f42c8a371680bcae403d713f0328bb34c.scope. 
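The long run of VerifyControllerAttachedVolume entries above is the kubelet's volume reconciler attaching the host-path, configmap, secret and projected volumes declared on the cilium-4h7cg and kube-proxy-6hk69 pods. A sketch, under the same assumptions as the previous snippet (kubernetes Python client plus kubeconfig), that reads those volume declarations back from the API server for comparison:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for name in ("cilium-4h7cg", "kube-proxy-6hk69"):
    pod = v1.read_namespaced_pod(name, "kube-system")
    print(pod.metadata.name, pod.metadata.uid)
    for vol in pod.spec.volumes:
        # Each branch matches one volume plugin seen in the reconciler log above.
        if vol.host_path:
            kind = "hostPath  " + vol.host_path.path
        elif vol.config_map:
            kind = "configMap " + vol.config_map.name
        elif vol.secret:
            kind = "secret    " + vol.secret.secret_name
        elif vol.projected:
            kind = "projected (kube-api-access token)"
        else:
            kind = "other"
        print("  %-24s %s" % (vol.name, kind))
```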
Mar 17 18:24:39.203221 env[1212]: time="2025-03-17T18:24:39.203172620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4h7cg,Uid:6b472b8c-07f0-4e48-b039-a6b958371b88,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\"" Mar 17 18:24:39.205817 kubelet[1415]: E0317 18:24:39.205787 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.207307 env[1212]: time="2025-03-17T18:24:39.207269860Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:24:39.217858 env[1212]: time="2025-03-17T18:24:39.217805340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6hk69,Uid:e3d1b83b-bd38-4d09-aa53-ff308e788006,Namespace:kube-system,Attempt:0,} returns sandbox id \"864f56985ee05b2fae0b6c4988c2e64f42c8a371680bcae403d713f0328bb34c\"" Mar 17 18:24:39.218468 kubelet[1415]: E0317 18:24:39.218434 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.280430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485764174.mount: Deactivated successfully. Mar 17 18:24:40.157516 kubelet[1415]: E0317 18:24:40.149853 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:41.150577 kubelet[1415]: E0317 18:24:41.150530 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:42.150908 kubelet[1415]: E0317 18:24:42.150875 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:42.401918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85825065.mount: Deactivated successfully. 
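The recurring dns.go "Nameserver limits exceeded" warnings above mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; it keeps the first entries, here 1.1.1.1, 1.0.0.1 and 8.8.8.8. A standalone sketch of that trimming check; the file path and the limit of three are stated assumptions based on the warning text, not values read from this system:

```python
MAX_NAMESERVERS = 3  # assumed limit implied by the three-server "applied nameserver line" above

def check_resolv_conf(path="/etc/resolv.conf"):
    nameservers = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "nameserver":
                nameservers.append(parts[1])
    kept, dropped = nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]
    if dropped:
        print(f"{len(nameservers)} nameservers found; keeping {kept}, omitting {dropped}")
    else:
        print(f"{len(nameservers)} nameservers found, within the limit: {kept}")

check_resolv_conf()
```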
Mar 17 18:24:43.151528 kubelet[1415]: E0317 18:24:43.151491 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:44.151917 kubelet[1415]: E0317 18:24:44.151837 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:44.563372 env[1212]: time="2025-03-17T18:24:44.563262140Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:44.567115 env[1212]: time="2025-03-17T18:24:44.567084540Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:44.569191 env[1212]: time="2025-03-17T18:24:44.569151260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:44.569892 env[1212]: time="2025-03-17T18:24:44.569865700Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:24:44.572034 env[1212]: time="2025-03-17T18:24:44.572001100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:24:44.572640 env[1212]: time="2025-03-17T18:24:44.572606860Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:24:44.585834 env[1212]: time="2025-03-17T18:24:44.585800620Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\"" Mar 17 18:24:44.586494 env[1212]: time="2025-03-17T18:24:44.586453980Z" level=info msg="StartContainer for \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\"" Mar 17 18:24:44.604908 systemd[1]: Started cri-containerd-89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee.scope. Mar 17 18:24:44.644226 env[1212]: time="2025-03-17T18:24:44.644093060Z" level=info msg="StartContainer for \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\" returns successfully" Mar 17 18:24:44.675270 systemd[1]: cri-containerd-89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee.scope: Deactivated successfully. 
Mar 17 18:24:44.792185 env[1212]: time="2025-03-17T18:24:44.792145340Z" level=info msg="shim disconnected" id=89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee Mar 17 18:24:44.792383 env[1212]: time="2025-03-17T18:24:44.792365540Z" level=warning msg="cleaning up after shim disconnected" id=89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee namespace=k8s.io Mar 17 18:24:44.792454 env[1212]: time="2025-03-17T18:24:44.792441460Z" level=info msg="cleaning up dead shim" Mar 17 18:24:44.799752 env[1212]: time="2025-03-17T18:24:44.799720340Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1597 runtime=io.containerd.runc.v2\n" Mar 17 18:24:45.151975 kubelet[1415]: E0317 18:24:45.151919 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:45.331315 kubelet[1415]: E0317 18:24:45.331024 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:45.333168 env[1212]: time="2025-03-17T18:24:45.333107500Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:24:45.399046 env[1212]: time="2025-03-17T18:24:45.398988660Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\"" Mar 17 18:24:45.399804 env[1212]: time="2025-03-17T18:24:45.399777940Z" level=info msg="StartContainer for \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\"" Mar 17 18:24:45.412927 systemd[1]: Started cri-containerd-d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349.scope. Mar 17 18:24:45.445490 env[1212]: time="2025-03-17T18:24:45.445436380Z" level=info msg="StartContainer for \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\" returns successfully" Mar 17 18:24:45.464478 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:24:45.464668 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:24:45.464843 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:24:45.466342 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:24:45.469686 systemd[1]: cri-containerd-d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349.scope: Deactivated successfully. Mar 17 18:24:45.472954 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:24:45.501805 env[1212]: time="2025-03-17T18:24:45.501763100Z" level=info msg="shim disconnected" id=d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349 Mar 17 18:24:45.502005 env[1212]: time="2025-03-17T18:24:45.501987380Z" level=warning msg="cleaning up after shim disconnected" id=d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349 namespace=k8s.io Mar 17 18:24:45.502060 env[1212]: time="2025-03-17T18:24:45.502047660Z" level=info msg="cleaning up dead shim" Mar 17 18:24:45.508171 env[1212]: time="2025-03-17T18:24:45.508135220Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1659 runtime=io.containerd.runc.v2\n" Mar 17 18:24:45.581623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee-rootfs.mount: Deactivated successfully. Mar 17 18:24:45.725529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562705705.mount: Deactivated successfully. Mar 17 18:24:46.152347 kubelet[1415]: E0317 18:24:46.152292 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:46.172466 env[1212]: time="2025-03-17T18:24:46.172423580Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:46.173776 env[1212]: time="2025-03-17T18:24:46.173743860Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:46.175248 env[1212]: time="2025-03-17T18:24:46.175219540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:46.176687 env[1212]: time="2025-03-17T18:24:46.176662420Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:46.177118 env[1212]: time="2025-03-17T18:24:46.177091380Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:24:46.179125 env[1212]: time="2025-03-17T18:24:46.179097780Z" level=info msg="CreateContainer within sandbox \"864f56985ee05b2fae0b6c4988c2e64f42c8a371680bcae403d713f0328bb34c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:24:46.191995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3038383316.mount: Deactivated successfully. 
Mar 17 18:24:46.195951 env[1212]: time="2025-03-17T18:24:46.195903300Z" level=info msg="CreateContainer within sandbox \"864f56985ee05b2fae0b6c4988c2e64f42c8a371680bcae403d713f0328bb34c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2a491054d11caaa6c5f929992eddb280bcd39f104a109d1780708a919c6a9c80\"" Mar 17 18:24:46.196716 env[1212]: time="2025-03-17T18:24:46.196672140Z" level=info msg="StartContainer for \"2a491054d11caaa6c5f929992eddb280bcd39f104a109d1780708a919c6a9c80\"" Mar 17 18:24:46.214608 systemd[1]: Started cri-containerd-2a491054d11caaa6c5f929992eddb280bcd39f104a109d1780708a919c6a9c80.scope. Mar 17 18:24:46.254854 env[1212]: time="2025-03-17T18:24:46.254810780Z" level=info msg="StartContainer for \"2a491054d11caaa6c5f929992eddb280bcd39f104a109d1780708a919c6a9c80\" returns successfully" Mar 17 18:24:46.335528 kubelet[1415]: E0317 18:24:46.335495 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:46.338062 env[1212]: time="2025-03-17T18:24:46.337751740Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:24:46.338168 kubelet[1415]: E0317 18:24:46.337989 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:46.358088 env[1212]: time="2025-03-17T18:24:46.358045620Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\"" Mar 17 18:24:46.359127 env[1212]: time="2025-03-17T18:24:46.359101980Z" level=info msg="StartContainer for \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\"" Mar 17 18:24:46.377104 systemd[1]: Started cri-containerd-361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d.scope. Mar 17 18:24:46.424808 env[1212]: time="2025-03-17T18:24:46.424351340Z" level=info msg="StartContainer for \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\" returns successfully" Mar 17 18:24:46.429641 systemd[1]: cri-containerd-361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d.scope: Deactivated successfully. 
Mar 17 18:24:46.554345 env[1212]: time="2025-03-17T18:24:46.554293660Z" level=info msg="shim disconnected" id=361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d Mar 17 18:24:46.554581 env[1212]: time="2025-03-17T18:24:46.554562980Z" level=warning msg="cleaning up after shim disconnected" id=361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d namespace=k8s.io Mar 17 18:24:46.554652 env[1212]: time="2025-03-17T18:24:46.554638900Z" level=info msg="cleaning up dead shim" Mar 17 18:24:46.561116 env[1212]: time="2025-03-17T18:24:46.561077180Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1797 runtime=io.containerd.runc.v2\n" Mar 17 18:24:47.152487 kubelet[1415]: E0317 18:24:47.152447 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:47.341576 kubelet[1415]: E0317 18:24:47.341528 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:47.341842 kubelet[1415]: E0317 18:24:47.341824 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:47.343716 env[1212]: time="2025-03-17T18:24:47.343654740Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:24:47.356272 env[1212]: time="2025-03-17T18:24:47.356222900Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\"" Mar 17 18:24:47.356782 env[1212]: time="2025-03-17T18:24:47.356754060Z" level=info msg="StartContainer for \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\"" Mar 17 18:24:47.360814 kubelet[1415]: I0317 18:24:47.360756 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6hk69" podStartSLOduration=3.40169778 podStartE2EDuration="10.3607409s" podCreationTimestamp="2025-03-17 18:24:37 +0000 UTC" firstStartedPulling="2025-03-17 18:24:39.21887286 +0000 UTC m=+2.659250241" lastFinishedPulling="2025-03-17 18:24:46.17791594 +0000 UTC m=+9.618293361" observedRunningTime="2025-03-17 18:24:46.36184958 +0000 UTC m=+9.802227001" watchObservedRunningTime="2025-03-17 18:24:47.3607409 +0000 UTC m=+10.801118321" Mar 17 18:24:47.374483 systemd[1]: Started cri-containerd-f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5.scope. Mar 17 18:24:47.409303 systemd[1]: cri-containerd-f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5.scope: Deactivated successfully. 
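The pod_startup_latency_tracker entry above for kube-proxy-6hk69 reports both podStartE2EDuration and a smaller podStartSLOduration; the numbers are consistent with the SLO figure being the end-to-end duration minus the image-pull window, computed from the monotonic m=+ offsets in the same entry. A small check in plain Python, using only values copied from that log line:

```python
first_started_pulling = 2.659250241   # m=+ offset of firstStartedPulling
last_finished_pulling = 9.618293361   # m=+ offset of lastFinishedPulling
e2e_duration          = 10.3607409    # podStartE2EDuration in seconds

pull_window  = last_finished_pulling - first_started_pulling
slo_duration = e2e_duration - pull_window
print(f"image pull window:   {pull_window:.8f}s")    # ~6.95904312s
print(f"podStartSLOduration: {slo_duration:.8f}s")   # ~3.40169778s, matching the log
```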
Mar 17 18:24:47.415162 env[1212]: time="2025-03-17T18:24:47.415116900Z" level=info msg="StartContainer for \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\" returns successfully" Mar 17 18:24:47.415987 env[1212]: time="2025-03-17T18:24:47.415792540Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b472b8c_07f0_4e48_b039_a6b958371b88.slice/cri-containerd-f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5.scope/memory.events\": no such file or directory" Mar 17 18:24:47.435705 env[1212]: time="2025-03-17T18:24:47.435656820Z" level=info msg="shim disconnected" id=f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5 Mar 17 18:24:47.435705 env[1212]: time="2025-03-17T18:24:47.435709580Z" level=warning msg="cleaning up after shim disconnected" id=f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5 namespace=k8s.io Mar 17 18:24:47.435893 env[1212]: time="2025-03-17T18:24:47.435718380Z" level=info msg="cleaning up dead shim" Mar 17 18:24:47.442462 env[1212]: time="2025-03-17T18:24:47.442429180Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1942 runtime=io.containerd.runc.v2\n" Mar 17 18:24:47.581006 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5-rootfs.mount: Deactivated successfully. Mar 17 18:24:48.153048 kubelet[1415]: E0317 18:24:48.153002 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:48.345110 kubelet[1415]: E0317 18:24:48.344964 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:48.346799 env[1212]: time="2025-03-17T18:24:48.346763460Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:24:48.364283 env[1212]: time="2025-03-17T18:24:48.362557340Z" level=info msg="CreateContainer within sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\"" Mar 17 18:24:48.364283 env[1212]: time="2025-03-17T18:24:48.362995780Z" level=info msg="StartContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\"" Mar 17 18:24:48.375835 systemd[1]: Started cri-containerd-1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68.scope. Mar 17 18:24:48.411667 env[1212]: time="2025-03-17T18:24:48.411271020Z" level=info msg="StartContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" returns successfully" Mar 17 18:24:48.511366 kubelet[1415]: I0317 18:24:48.511339 1415 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:24:48.581132 systemd[1]: run-containerd-runc-k8s.io-1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68-runc.oQHNy1.mount: Deactivated successfully. Mar 17 18:24:48.660727 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
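The mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state containers created above are Cilium init containers, so each short-lived "scope: Deactivated successfully" / "shim disconnected" pair is one init step exiting normally before cilium-agent starts. A sketch, under the same client and kubeconfig assumptions as the earlier snippets, that reads their recorded exit codes back from the pod status:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod("cilium-4h7cg", "kube-system")
for st in pod.status.init_container_statuses or []:
    term = st.state.terminated
    if term:
        print(f"{st.name}: exit {term.exit_code} at {term.finished_at}")
    else:
        print(f"{st.name}: not terminated yet")
```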
Mar 17 18:24:48.897772 kernel: Initializing XFRM netlink socket Mar 17 18:24:48.899761 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:24:49.153608 kubelet[1415]: E0317 18:24:49.153273 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:49.348988 kubelet[1415]: E0317 18:24:49.348892 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:49.362747 kubelet[1415]: I0317 18:24:49.362679 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4h7cg" podStartSLOduration=6.99829754 podStartE2EDuration="12.3626633s" podCreationTimestamp="2025-03-17 18:24:37 +0000 UTC" firstStartedPulling="2025-03-17 18:24:39.20684526 +0000 UTC m=+2.647222641" lastFinishedPulling="2025-03-17 18:24:44.57121098 +0000 UTC m=+8.011588401" observedRunningTime="2025-03-17 18:24:49.3624533 +0000 UTC m=+12.802830761" watchObservedRunningTime="2025-03-17 18:24:49.3626633 +0000 UTC m=+12.803040721" Mar 17 18:24:50.153876 kubelet[1415]: E0317 18:24:50.153844 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:50.350607 kubelet[1415]: E0317 18:24:50.350572 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:50.507507 systemd-networkd[1043]: cilium_host: Link UP Mar 17 18:24:50.508014 systemd-networkd[1043]: cilium_net: Link UP Mar 17 18:24:50.508530 systemd-networkd[1043]: cilium_net: Gained carrier Mar 17 18:24:50.509535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:24:50.509590 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:24:50.509095 systemd-networkd[1043]: cilium_host: Gained carrier Mar 17 18:24:50.584210 systemd-networkd[1043]: cilium_vxlan: Link UP Mar 17 18:24:50.584217 systemd-networkd[1043]: cilium_vxlan: Gained carrier Mar 17 18:24:50.628880 systemd-networkd[1043]: cilium_host: Gained IPv6LL Mar 17 18:24:50.872803 kernel: NET: Registered PF_ALG protocol family Mar 17 18:24:51.154662 kubelet[1415]: E0317 18:24:51.154552 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:51.340835 systemd-networkd[1043]: cilium_net: Gained IPv6LL Mar 17 18:24:51.352335 kubelet[1415]: E0317 18:24:51.352300 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:51.463109 systemd-networkd[1043]: lxc_health: Link UP Mar 17 18:24:51.471906 systemd-networkd[1043]: lxc_health: Gained carrier Mar 17 18:24:51.472720 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:24:52.154811 kubelet[1415]: E0317 18:24:52.154781 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:52.353642 kubelet[1415]: E0317 18:24:52.353616 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:52.492865 
systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL Mar 17 18:24:53.155269 kubelet[1415]: E0317 18:24:53.155216 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:53.196859 systemd-networkd[1043]: lxc_health: Gained IPv6LL Mar 17 18:24:53.354512 kubelet[1415]: E0317 18:24:53.354478 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:53.552811 systemd[1]: Created slice kubepods-besteffort-pod59d98008_cb69_4ef9_a155_d0bfe2fd9e00.slice. Mar 17 18:24:53.558879 kubelet[1415]: I0317 18:24:53.558840 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7n8\" (UniqueName: \"kubernetes.io/projected/59d98008-cb69-4ef9-a155-d0bfe2fd9e00-kube-api-access-7f7n8\") pod \"nginx-deployment-8587fbcb89-w4zcm\" (UID: \"59d98008-cb69-4ef9-a155-d0bfe2fd9e00\") " pod="default/nginx-deployment-8587fbcb89-w4zcm" Mar 17 18:24:53.856895 env[1212]: time="2025-03-17T18:24:53.856545700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w4zcm,Uid:59d98008-cb69-4ef9-a155-d0bfe2fd9e00,Namespace:default,Attempt:0,}" Mar 17 18:24:53.899134 systemd-networkd[1043]: lxc39e1231bb712: Link UP Mar 17 18:24:53.908720 kernel: eth0: renamed from tmpf6afa Mar 17 18:24:53.916739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:24:53.916795 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc39e1231bb712: link becomes ready Mar 17 18:24:53.916837 systemd-networkd[1043]: lxc39e1231bb712: Gained carrier Mar 17 18:24:54.155623 kubelet[1415]: E0317 18:24:54.155448 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:54.355729 kubelet[1415]: E0317 18:24:54.355646 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:55.155799 kubelet[1415]: E0317 18:24:55.155762 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:55.308843 systemd-networkd[1043]: lxc39e1231bb712: Gained IPv6LL Mar 17 18:24:55.897716 env[1212]: time="2025-03-17T18:24:55.897635100Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:55.897716 env[1212]: time="2025-03-17T18:24:55.897674060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:55.897716 env[1212]: time="2025-03-17T18:24:55.897684820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:55.898050 env[1212]: time="2025-03-17T18:24:55.897827060Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331 pid=2497 runtime=io.containerd.runc.v2 Mar 17 18:24:55.912357 systemd[1]: run-containerd-runc-k8s.io-f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331-runc.2CszQw.mount: Deactivated successfully. 
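The systemd-networkd entries above bring up Cilium's datapath links (cilium_host, cilium_net, cilium_vxlan, lxc_health) and, later, a per-pod lxc* device for each sandbox. On the node these are ordinary network interfaces; a standard-library sketch, run on the node itself, that lists them so the names from the log can be spotted:

```python
import socket

# socket.if_nameindex() returns (index, name) pairs for the kernel's current interfaces.
for index, name in socket.if_nameindex():
    tag = "  <- cilium / per-pod link" if name.startswith(("cilium_", "lxc")) else ""
    print(f"{index:3d} {name}{tag}")
```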
Mar 17 18:24:55.915071 systemd[1]: Started cri-containerd-f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331.scope. Mar 17 18:24:55.992410 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:24:56.013311 env[1212]: time="2025-03-17T18:24:56.013264500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-w4zcm,Uid:59d98008-cb69-4ef9-a155-d0bfe2fd9e00,Namespace:default,Attempt:0,} returns sandbox id \"f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331\"" Mar 17 18:24:56.015200 env[1212]: time="2025-03-17T18:24:56.015164020Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:24:56.157184 kubelet[1415]: E0317 18:24:56.157072 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:57.149306 kubelet[1415]: E0317 18:24:57.149252 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:57.157593 kubelet[1415]: E0317 18:24:57.157548 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:58.082262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944337879.mount: Deactivated successfully. Mar 17 18:24:58.158378 kubelet[1415]: E0317 18:24:58.158328 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:59.159387 kubelet[1415]: E0317 18:24:59.159339 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:24:59.295715 env[1212]: time="2025-03-17T18:24:59.295653460Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:59.297081 env[1212]: time="2025-03-17T18:24:59.297040500Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:59.299465 env[1212]: time="2025-03-17T18:24:59.299432660Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:59.301152 env[1212]: time="2025-03-17T18:24:59.301122900Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:59.301967 env[1212]: time="2025-03-17T18:24:59.301940860Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:24:59.304383 env[1212]: time="2025-03-17T18:24:59.304351460Z" level=info msg="CreateContainer within sandbox \"f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:24:59.313483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159813237.mount: Deactivated successfully. 
Mar 17 18:24:59.317858 env[1212]: time="2025-03-17T18:24:59.317810780Z" level=info msg="CreateContainer within sandbox \"f6afa1039f15991fe6a2555607405f56da73e6f16d853330fc1cdd21eaa9f331\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2f1996fd4218c2cd6fad94acbc7b95f8f2ad0b49555a006ab5f773823f8e03de\"" Mar 17 18:24:59.318395 env[1212]: time="2025-03-17T18:24:59.318351900Z" level=info msg="StartContainer for \"2f1996fd4218c2cd6fad94acbc7b95f8f2ad0b49555a006ab5f773823f8e03de\"" Mar 17 18:24:59.332623 systemd[1]: Started cri-containerd-2f1996fd4218c2cd6fad94acbc7b95f8f2ad0b49555a006ab5f773823f8e03de.scope. Mar 17 18:24:59.368377 env[1212]: time="2025-03-17T18:24:59.368319660Z" level=info msg="StartContainer for \"2f1996fd4218c2cd6fad94acbc7b95f8f2ad0b49555a006ab5f773823f8e03de\" returns successfully" Mar 17 18:25:00.160178 kubelet[1415]: E0317 18:25:00.160131 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:01.160838 kubelet[1415]: E0317 18:25:01.160793 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:02.161431 kubelet[1415]: E0317 18:25:02.161383 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:03.162220 kubelet[1415]: E0317 18:25:03.162180 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:04.162941 kubelet[1415]: E0317 18:25:04.162901 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:05.164306 kubelet[1415]: E0317 18:25:05.164247 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:05.587383 kubelet[1415]: I0317 18:25:05.586893 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-w4zcm" podStartSLOduration=9.2984797 podStartE2EDuration="12.58687734s" podCreationTimestamp="2025-03-17 18:24:53 +0000 UTC" firstStartedPulling="2025-03-17 18:24:56.01486842 +0000 UTC m=+19.455245841" lastFinishedPulling="2025-03-17 18:24:59.30326606 +0000 UTC m=+22.743643481" observedRunningTime="2025-03-17 18:25:00.38063322 +0000 UTC m=+23.821010641" watchObservedRunningTime="2025-03-17 18:25:05.58687734 +0000 UTC m=+29.027254761" Mar 17 18:25:05.591945 systemd[1]: Created slice kubepods-besteffort-pod9698497f_ae80_4a1e_aad1_fd40b9761bb5.slice. 
Mar 17 18:25:05.721313 kubelet[1415]: I0317 18:25:05.721270 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9698497f-ae80-4a1e-aad1-fd40b9761bb5-data\") pod \"nfs-server-provisioner-0\" (UID: \"9698497f-ae80-4a1e-aad1-fd40b9761bb5\") " pod="default/nfs-server-provisioner-0" Mar 17 18:25:05.721313 kubelet[1415]: I0317 18:25:05.721317 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdjdz\" (UniqueName: \"kubernetes.io/projected/9698497f-ae80-4a1e-aad1-fd40b9761bb5-kube-api-access-kdjdz\") pod \"nfs-server-provisioner-0\" (UID: \"9698497f-ae80-4a1e-aad1-fd40b9761bb5\") " pod="default/nfs-server-provisioner-0" Mar 17 18:25:05.896275 env[1212]: time="2025-03-17T18:25:05.896141664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9698497f-ae80-4a1e-aad1-fd40b9761bb5,Namespace:default,Attempt:0,}" Mar 17 18:25:06.071281 systemd-networkd[1043]: lxc919f26effc80: Link UP Mar 17 18:25:06.086786 kernel: eth0: renamed from tmpade12 Mar 17 18:25:06.098786 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:25:06.098888 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc919f26effc80: link becomes ready Mar 17 18:25:06.098912 systemd-networkd[1043]: lxc919f26effc80: Gained carrier Mar 17 18:25:06.165459 kubelet[1415]: E0317 18:25:06.165353 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:06.319494 env[1212]: time="2025-03-17T18:25:06.319411071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:25:06.319494 env[1212]: time="2025-03-17T18:25:06.319456832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:25:06.319704 env[1212]: time="2025-03-17T18:25:06.319468592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:25:06.319704 env[1212]: time="2025-03-17T18:25:06.319614754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ade12f7a0b89457b4a33eea6a83db9d4c334ca63121f37978c52888df2d92fda pid=2627 runtime=io.containerd.runc.v2 Mar 17 18:25:06.337436 systemd[1]: Started cri-containerd-ade12f7a0b89457b4a33eea6a83db9d4c334ca63121f37978c52888df2d92fda.scope. 
Mar 17 18:25:06.375946 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:25:06.397837 env[1212]: time="2025-03-17T18:25:06.397793278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9698497f-ae80-4a1e-aad1-fd40b9761bb5,Namespace:default,Attempt:0,} returns sandbox id \"ade12f7a0b89457b4a33eea6a83db9d4c334ca63121f37978c52888df2d92fda\"" Mar 17 18:25:06.399601 env[1212]: time="2025-03-17T18:25:06.399567590Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:25:07.167745 kubelet[1415]: E0317 18:25:07.166246 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:07.596874 systemd-networkd[1043]: lxc919f26effc80: Gained IPv6LL Mar 17 18:25:08.166766 kubelet[1415]: E0317 18:25:08.166713 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:08.498808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1365850698.mount: Deactivated successfully. Mar 17 18:25:09.167451 kubelet[1415]: E0317 18:25:09.167401 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:10.168006 kubelet[1415]: E0317 18:25:10.167969 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:10.284535 env[1212]: time="2025-03-17T18:25:10.284486060Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:10.286036 env[1212]: time="2025-03-17T18:25:10.285999281Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:10.287824 env[1212]: time="2025-03-17T18:25:10.287796666Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:10.290186 env[1212]: time="2025-03-17T18:25:10.290154979Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:10.290952 env[1212]: time="2025-03-17T18:25:10.290915989Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 18:25:10.293450 env[1212]: time="2025-03-17T18:25:10.293413224Z" level=info msg="CreateContainer within sandbox \"ade12f7a0b89457b4a33eea6a83db9d4c334ca63121f37978c52888df2d92fda\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 18:25:10.305316 env[1212]: time="2025-03-17T18:25:10.305282188Z" level=info msg="CreateContainer within sandbox \"ade12f7a0b89457b4a33eea6a83db9d4c334ca63121f37978c52888df2d92fda\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a3ae22c57ec09297e24cad7142406f17e4d014f6eee044b075806a9022731784\"" Mar 17 18:25:10.306019 env[1212]: 
time="2025-03-17T18:25:10.305990918Z" level=info msg="StartContainer for \"a3ae22c57ec09297e24cad7142406f17e4d014f6eee044b075806a9022731784\"" Mar 17 18:25:10.320766 systemd[1]: Started cri-containerd-a3ae22c57ec09297e24cad7142406f17e4d014f6eee044b075806a9022731784.scope. Mar 17 18:25:10.363530 env[1212]: time="2025-03-17T18:25:10.363483996Z" level=info msg="StartContainer for \"a3ae22c57ec09297e24cad7142406f17e4d014f6eee044b075806a9022731784\" returns successfully" Mar 17 18:25:10.400995 kubelet[1415]: I0317 18:25:10.400749 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.507692687 podStartE2EDuration="5.400730952s" podCreationTimestamp="2025-03-17 18:25:05 +0000 UTC" firstStartedPulling="2025-03-17 18:25:06.399070981 +0000 UTC m=+29.839448402" lastFinishedPulling="2025-03-17 18:25:10.292109286 +0000 UTC m=+33.732486667" observedRunningTime="2025-03-17 18:25:10.39980646 +0000 UTC m=+33.840183881" watchObservedRunningTime="2025-03-17 18:25:10.400730952 +0000 UTC m=+33.841108373" Mar 17 18:25:11.170264 kubelet[1415]: E0317 18:25:11.168412 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:12.168912 kubelet[1415]: E0317 18:25:12.168872 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:13.169820 kubelet[1415]: E0317 18:25:13.169773 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:14.170161 kubelet[1415]: E0317 18:25:14.170112 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:14.490871 update_engine[1205]: I0317 18:25:14.490563 1205 update_attempter.cc:509] Updating boot flags... Mar 17 18:25:15.171234 kubelet[1415]: E0317 18:25:15.171183 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:16.172144 kubelet[1415]: E0317 18:25:16.172101 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:17.148587 kubelet[1415]: E0317 18:25:17.148546 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:17.173024 kubelet[1415]: E0317 18:25:17.172976 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:18.173741 kubelet[1415]: E0317 18:25:18.173712 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:19.174378 kubelet[1415]: E0317 18:25:19.174333 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:19.830742 systemd[1]: Created slice kubepods-besteffort-pod41869368_0fb1_4a2b_a440_e5c3517f5061.slice. 
Mar 17 18:25:20.001502 kubelet[1415]: I0317 18:25:20.001454 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-982034e2-f26a-466b-b4f5-c7c84c204b45\" (UniqueName: \"kubernetes.io/nfs/41869368-0fb1-4a2b-a440-e5c3517f5061-pvc-982034e2-f26a-466b-b4f5-c7c84c204b45\") pod \"test-pod-1\" (UID: \"41869368-0fb1-4a2b-a440-e5c3517f5061\") " pod="default/test-pod-1" Mar 17 18:25:20.001643 kubelet[1415]: I0317 18:25:20.001513 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfhl\" (UniqueName: \"kubernetes.io/projected/41869368-0fb1-4a2b-a440-e5c3517f5061-kube-api-access-qpfhl\") pod \"test-pod-1\" (UID: \"41869368-0fb1-4a2b-a440-e5c3517f5061\") " pod="default/test-pod-1" Mar 17 18:25:20.136715 kernel: FS-Cache: Loaded Mar 17 18:25:20.165792 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:25:20.165868 kernel: RPC: Registered udp transport module. Mar 17 18:25:20.165888 kernel: RPC: Registered tcp transport module. Mar 17 18:25:20.165905 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Mar 17 18:25:20.175002 kubelet[1415]: E0317 18:25:20.174961 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:20.211737 kernel: FS-Cache: Netfs 'nfs' registered for caching Mar 17 18:25:20.348751 kernel: NFS: Registering the id_resolver key type Mar 17 18:25:20.348864 kernel: Key type id_resolver registered Mar 17 18:25:20.348885 kernel: Key type id_legacy registered Mar 17 18:25:20.386832 nfsidmap[2761]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:25:20.390118 nfsidmap[2764]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:25:20.433790 env[1212]: time="2025-03-17T18:25:20.433713510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41869368-0fb1-4a2b-a440-e5c3517f5061,Namespace:default,Attempt:0,}" Mar 17 18:25:20.461123 systemd-networkd[1043]: lxcbed8067a58d0: Link UP Mar 17 18:25:20.477730 kernel: eth0: renamed from tmpc8732 Mar 17 18:25:20.487727 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:25:20.487802 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbed8067a58d0: link becomes ready Mar 17 18:25:20.488044 systemd-networkd[1043]: lxcbed8067a58d0: Gained carrier Mar 17 18:25:20.643406 env[1212]: time="2025-03-17T18:25:20.642906712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:25:20.643406 env[1212]: time="2025-03-17T18:25:20.642957033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:25:20.643406 env[1212]: time="2025-03-17T18:25:20.642967433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:25:20.643406 env[1212]: time="2025-03-17T18:25:20.643107994Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c873215aea5015c0154c4cc017a4ca18ae751d71caa2d32c279bec432770037c pid=2801 runtime=io.containerd.runc.v2 Mar 17 18:25:20.669020 systemd[1]: Started cri-containerd-c873215aea5015c0154c4cc017a4ca18ae751d71caa2d32c279bec432770037c.scope. 
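test-pod-1 above mounts the NFS-backed volume named pvc-982034e2-f26a-466b-b4f5-c7c84c204b45, which is why the RPC/NFS kernel modules load and nfsidmap complains about the 'localdomain' idmapping domain. A sketch, assuming the kubernetes Python client and that the PersistentVolume carries the pvc-... name shown in the reconciler entry (an assumption, not confirmed by the log), that reads the NFS server and export path back:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# PV name taken from the VerifyControllerAttachedVolume entry for test-pod-1 above.
pv = v1.read_persistent_volume("pvc-982034e2-f26a-466b-b4f5-c7c84c204b45")
if pv.spec.nfs:
    print("NFS server:", pv.spec.nfs.server)
    print("NFS path:  ", pv.spec.nfs.path)
if pv.spec.claim_ref:
    print("bound claim:", pv.spec.claim_ref.namespace + "/" + pv.spec.claim_ref.name)
```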
Mar 17 18:25:20.709019 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:25:20.723290 env[1212]: time="2025-03-17T18:25:20.723135216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:41869368-0fb1-4a2b-a440-e5c3517f5061,Namespace:default,Attempt:0,} returns sandbox id \"c873215aea5015c0154c4cc017a4ca18ae751d71caa2d32c279bec432770037c\"" Mar 17 18:25:20.724535 env[1212]: time="2025-03-17T18:25:20.724300984Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:25:21.021065 env[1212]: time="2025-03-17T18:25:21.020967773Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:21.022438 env[1212]: time="2025-03-17T18:25:21.022385623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:21.025494 env[1212]: time="2025-03-17T18:25:21.025456164Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:21.027052 env[1212]: time="2025-03-17T18:25:21.027018535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:21.027943 env[1212]: time="2025-03-17T18:25:21.027908501Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:25:21.030514 env[1212]: time="2025-03-17T18:25:21.030471038Z" level=info msg="CreateContainer within sandbox \"c873215aea5015c0154c4cc017a4ca18ae751d71caa2d32c279bec432770037c\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:25:21.048909 env[1212]: time="2025-03-17T18:25:21.048851403Z" level=info msg="CreateContainer within sandbox \"c873215aea5015c0154c4cc017a4ca18ae751d71caa2d32c279bec432770037c\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"d02e474ed8913fcdb94e31de9a2e088a7cf06944b2164991201fa7b80d95f06f\"" Mar 17 18:25:21.049491 env[1212]: time="2025-03-17T18:25:21.049453968Z" level=info msg="StartContainer for \"d02e474ed8913fcdb94e31de9a2e088a7cf06944b2164991201fa7b80d95f06f\"" Mar 17 18:25:21.063391 systemd[1]: Started cri-containerd-d02e474ed8913fcdb94e31de9a2e088a7cf06944b2164991201fa7b80d95f06f.scope. 
Mar 17 18:25:21.102582 env[1212]: time="2025-03-17T18:25:21.102536650Z" level=info msg="StartContainer for \"d02e474ed8913fcdb94e31de9a2e088a7cf06944b2164991201fa7b80d95f06f\" returns successfully" Mar 17 18:25:21.175767 kubelet[1415]: E0317 18:25:21.175724 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:21.416662 kubelet[1415]: I0317 18:25:21.416600 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.111768266 podStartE2EDuration="16.416584831s" podCreationTimestamp="2025-03-17 18:25:05 +0000 UTC" firstStartedPulling="2025-03-17 18:25:20.724106063 +0000 UTC m=+44.164483484" lastFinishedPulling="2025-03-17 18:25:21.028922628 +0000 UTC m=+44.469300049" observedRunningTime="2025-03-17 18:25:21.416255429 +0000 UTC m=+44.856632810" watchObservedRunningTime="2025-03-17 18:25:21.416584831 +0000 UTC m=+44.856962252" Mar 17 18:25:21.740879 systemd-networkd[1043]: lxcbed8067a58d0: Gained IPv6LL Mar 17 18:25:22.176393 kubelet[1415]: E0317 18:25:22.176340 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:23.176743 kubelet[1415]: E0317 18:25:23.176688 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:24.155455 env[1212]: time="2025-03-17T18:25:24.155394760Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:25:24.160650 env[1212]: time="2025-03-17T18:25:24.160610749Z" level=info msg="StopContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" with timeout 2 (s)" Mar 17 18:25:24.160905 env[1212]: time="2025-03-17T18:25:24.160880390Z" level=info msg="Stop container \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" with signal terminated" Mar 17 18:25:24.167802 systemd-networkd[1043]: lxc_health: Link DOWN Mar 17 18:25:24.167808 systemd-networkd[1043]: lxc_health: Lost carrier Mar 17 18:25:24.177759 kubelet[1415]: E0317 18:25:24.177728 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:24.201031 systemd[1]: cri-containerd-1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68.scope: Deactivated successfully. Mar 17 18:25:24.201358 systemd[1]: cri-containerd-1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68.scope: Consumed 6.323s CPU time. Mar 17 18:25:24.216523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68-rootfs.mount: Deactivated successfully. 
Mar 17 18:25:24.345683 env[1212]: time="2025-03-17T18:25:24.345636029Z" level=info msg="shim disconnected" id=1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68 Mar 17 18:25:24.345683 env[1212]: time="2025-03-17T18:25:24.345681869Z" level=warning msg="cleaning up after shim disconnected" id=1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68 namespace=k8s.io Mar 17 18:25:24.345900 env[1212]: time="2025-03-17T18:25:24.345707189Z" level=info msg="cleaning up dead shim" Mar 17 18:25:24.352296 env[1212]: time="2025-03-17T18:25:24.352256786Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2937 runtime=io.containerd.runc.v2\n" Mar 17 18:25:24.355114 env[1212]: time="2025-03-17T18:25:24.355077362Z" level=info msg="StopContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" returns successfully" Mar 17 18:25:24.355659 env[1212]: time="2025-03-17T18:25:24.355634245Z" level=info msg="StopPodSandbox for \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\"" Mar 17 18:25:24.355731 env[1212]: time="2025-03-17T18:25:24.355703765Z" level=info msg="Container to stop \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:25:24.355731 env[1212]: time="2025-03-17T18:25:24.355720845Z" level=info msg="Container to stop \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:25:24.355782 env[1212]: time="2025-03-17T18:25:24.355732965Z" level=info msg="Container to stop \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:25:24.355782 env[1212]: time="2025-03-17T18:25:24.355744286Z" level=info msg="Container to stop \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:25:24.355782 env[1212]: time="2025-03-17T18:25:24.355755526Z" level=info msg="Container to stop \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:25:24.357343 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07-shm.mount: Deactivated successfully. Mar 17 18:25:24.361590 systemd[1]: cri-containerd-ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07.scope: Deactivated successfully. Mar 17 18:25:24.381601 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07-rootfs.mount: Deactivated successfully. 
Mar 17 18:25:24.385842 env[1212]: time="2025-03-17T18:25:24.385784694Z" level=info msg="shim disconnected" id=ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07 Mar 17 18:25:24.386018 env[1212]: time="2025-03-17T18:25:24.386001136Z" level=warning msg="cleaning up after shim disconnected" id=ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07 namespace=k8s.io Mar 17 18:25:24.386079 env[1212]: time="2025-03-17T18:25:24.386067016Z" level=info msg="cleaning up dead shim" Mar 17 18:25:24.392574 env[1212]: time="2025-03-17T18:25:24.392546252Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2968 runtime=io.containerd.runc.v2\n" Mar 17 18:25:24.392959 env[1212]: time="2025-03-17T18:25:24.392932015Z" level=info msg="TearDown network for sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" successfully" Mar 17 18:25:24.393062 env[1212]: time="2025-03-17T18:25:24.393043815Z" level=info msg="StopPodSandbox for \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" returns successfully" Mar 17 18:25:24.416200 kubelet[1415]: I0317 18:25:24.415415 1415 scope.go:117] "RemoveContainer" containerID="1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68" Mar 17 18:25:24.417003 env[1212]: time="2025-03-17T18:25:24.416961390Z" level=info msg="RemoveContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\"" Mar 17 18:25:24.419915 env[1212]: time="2025-03-17T18:25:24.419878526Z" level=info msg="RemoveContainer for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" returns successfully" Mar 17 18:25:24.420112 kubelet[1415]: I0317 18:25:24.420082 1415 scope.go:117] "RemoveContainer" containerID="f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5" Mar 17 18:25:24.420926 env[1212]: time="2025-03-17T18:25:24.420897852Z" level=info msg="RemoveContainer for \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\"" Mar 17 18:25:24.423079 env[1212]: time="2025-03-17T18:25:24.422994263Z" level=info msg="RemoveContainer for \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\" returns successfully" Mar 17 18:25:24.423230 kubelet[1415]: I0317 18:25:24.423192 1415 scope.go:117] "RemoveContainer" containerID="361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d" Mar 17 18:25:24.423974 env[1212]: time="2025-03-17T18:25:24.423942949Z" level=info msg="RemoveContainer for \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\"" Mar 17 18:25:24.426093 env[1212]: time="2025-03-17T18:25:24.426059521Z" level=info msg="RemoveContainer for \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\" returns successfully" Mar 17 18:25:24.426241 kubelet[1415]: I0317 18:25:24.426201 1415 scope.go:117] "RemoveContainer" containerID="d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349" Mar 17 18:25:24.427436 env[1212]: time="2025-03-17T18:25:24.427409248Z" level=info msg="RemoveContainer for \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\"" Mar 17 18:25:24.429626 env[1212]: time="2025-03-17T18:25:24.429477340Z" level=info msg="RemoveContainer for \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\" returns successfully" Mar 17 18:25:24.429812 kubelet[1415]: I0317 18:25:24.429778 1415 scope.go:117] "RemoveContainer" containerID="89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee" Mar 17 18:25:24.430753 
env[1212]: time="2025-03-17T18:25:24.430724667Z" level=info msg="RemoveContainer for \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\"" Mar 17 18:25:24.432862 env[1212]: time="2025-03-17T18:25:24.432830519Z" level=info msg="RemoveContainer for \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\" returns successfully" Mar 17 18:25:24.433090 kubelet[1415]: I0317 18:25:24.433066 1415 scope.go:117] "RemoveContainer" containerID="1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68" Mar 17 18:25:24.433388 env[1212]: time="2025-03-17T18:25:24.433318721Z" level=error msg="ContainerStatus for \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\": not found" Mar 17 18:25:24.433534 kubelet[1415]: E0317 18:25:24.433501 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\": not found" containerID="1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68" Mar 17 18:25:24.433630 kubelet[1415]: I0317 18:25:24.433543 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68"} err="failed to get container status \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\": rpc error: code = NotFound desc = an error occurred when try to find container \"1235a13e1ce2afdd72975cb16ec1d1feb82dea84a42edcf50adbe4c65a34ae68\": not found" Mar 17 18:25:24.433662 kubelet[1415]: I0317 18:25:24.433631 1415 scope.go:117] "RemoveContainer" containerID="f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5" Mar 17 18:25:24.433851 env[1212]: time="2025-03-17T18:25:24.433803004Z" level=error msg="ContainerStatus for \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\": not found" Mar 17 18:25:24.434058 kubelet[1415]: E0317 18:25:24.434034 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\": not found" containerID="f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5" Mar 17 18:25:24.434103 kubelet[1415]: I0317 18:25:24.434057 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5"} err="failed to get container status \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\": rpc error: code = NotFound desc = an error occurred when try to find container \"f68348f2b14cb3c6885faec8d892746ef4875280394fffa0d044c473e0f5f8b5\": not found" Mar 17 18:25:24.434103 kubelet[1415]: I0317 18:25:24.434087 1415 scope.go:117] "RemoveContainer" containerID="361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d" Mar 17 18:25:24.434325 env[1212]: time="2025-03-17T18:25:24.434281807Z" level=error msg="ContainerStatus for \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\" failed" error="rpc error: code = NotFound desc = an error 
occurred when try to find container \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\": not found" Mar 17 18:25:24.434438 kubelet[1415]: E0317 18:25:24.434420 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\": not found" containerID="361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d" Mar 17 18:25:24.434475 kubelet[1415]: I0317 18:25:24.434441 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d"} err="failed to get container status \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\": rpc error: code = NotFound desc = an error occurred when try to find container \"361f0e7f076315f3b03bca2a7fc51dc81335258290c710fc559dc112e854387d\": not found" Mar 17 18:25:24.434475 kubelet[1415]: I0317 18:25:24.434471 1415 scope.go:117] "RemoveContainer" containerID="d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349" Mar 17 18:25:24.434705 env[1212]: time="2025-03-17T18:25:24.434643129Z" level=error msg="ContainerStatus for \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\": not found" Mar 17 18:25:24.434919 kubelet[1415]: E0317 18:25:24.434893 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\": not found" containerID="d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349" Mar 17 18:25:24.434966 kubelet[1415]: I0317 18:25:24.434924 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349"} err="failed to get container status \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\": rpc error: code = NotFound desc = an error occurred when try to find container \"d60bf8042117b1f53188a783f9bfb2ea6febd7331fdaaf26f29516170d9d6349\": not found" Mar 17 18:25:24.434966 kubelet[1415]: I0317 18:25:24.434944 1415 scope.go:117] "RemoveContainer" containerID="89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee" Mar 17 18:25:24.435152 env[1212]: time="2025-03-17T18:25:24.435106532Z" level=error msg="ContainerStatus for \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\": not found" Mar 17 18:25:24.435248 kubelet[1415]: E0317 18:25:24.435230 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\": not found" containerID="89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee" Mar 17 18:25:24.435290 kubelet[1415]: I0317 18:25:24.435251 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee"} err="failed to get container status 
\"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\": rpc error: code = NotFound desc = an error occurred when try to find container \"89cef7f4b1b2b6106bd72c44ad635595f49001618fa2f087ae89316715c091ee\": not found" Mar 17 18:25:24.529840 kubelet[1415]: I0317 18:25:24.529781 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-cgroup\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529840 kubelet[1415]: I0317 18:25:24.529835 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-kernel\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529860 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-hubble-tls\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529887 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-xtables-lock\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529906 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b472b8c-07f0-4e48-b039-a6b958371b88-clustermesh-secrets\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529925 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-config-path\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529941 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cni-path\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.529986 kubelet[1415]: I0317 18:25:24.529965 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxqr8\" (UniqueName: \"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-kube-api-access-jxqr8\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.529981 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-run\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.529995 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-bpf-maps\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.530012 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-hostproc\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.530037 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-etc-cni-netd\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.530054 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-net\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530118 kubelet[1415]: I0317 18:25:24.530069 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-lib-modules\") pod \"6b472b8c-07f0-4e48-b039-a6b958371b88\" (UID: \"6b472b8c-07f0-4e48-b039-a6b958371b88\") " Mar 17 18:25:24.530255 kubelet[1415]: I0317 18:25:24.529917 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.530255 kubelet[1415]: I0317 18:25:24.529920 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.530255 kubelet[1415]: I0317 18:25:24.529941 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.530255 kubelet[1415]: I0317 18:25:24.530126 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.530255 kubelet[1415]: I0317 18:25:24.530201 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cni-path" (OuterVolumeSpecName: "cni-path") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.531096 kubelet[1415]: I0317 18:25:24.530360 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-hostproc" (OuterVolumeSpecName: "hostproc") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.531096 kubelet[1415]: I0317 18:25:24.530398 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.531096 kubelet[1415]: I0317 18:25:24.530454 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.531096 kubelet[1415]: I0317 18:25:24.530727 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.531096 kubelet[1415]: I0317 18:25:24.530775 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:24.532041 kubelet[1415]: I0317 18:25:24.531982 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:25:24.534246 systemd[1]: var-lib-kubelet-pods-6b472b8c\x2d07f0\x2d4e48\x2db039\x2da6b958371b88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:25:24.534914 kubelet[1415]: I0317 18:25:24.534302 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b472b8c-07f0-4e48-b039-a6b958371b88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:25:24.535153 kubelet[1415]: I0317 18:25:24.535107 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-kube-api-access-jxqr8" (OuterVolumeSpecName: "kube-api-access-jxqr8") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "kube-api-access-jxqr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:25:24.535385 kubelet[1415]: I0317 18:25:24.535362 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6b472b8c-07f0-4e48-b039-a6b958371b88" (UID: "6b472b8c-07f0-4e48-b039-a6b958371b88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:25:24.630833 kubelet[1415]: I0317 18:25:24.630801 1415 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jxqr8\" (UniqueName: \"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-kube-api-access-jxqr8\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.630992 kubelet[1415]: I0317 18:25:24.630981 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-run\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631052 kubelet[1415]: I0317 18:25:24.631043 1415 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-bpf-maps\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631107 kubelet[1415]: I0317 18:25:24.631098 1415 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-hostproc\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631240 kubelet[1415]: I0317 18:25:24.631227 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-config-path\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631297 kubelet[1415]: I0317 18:25:24.631288 1415 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cni-path\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631357 kubelet[1415]: I0317 18:25:24.631348 1415 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-etc-cni-netd\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631413 kubelet[1415]: I0317 18:25:24.631403 1415 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-net\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631462 kubelet[1415]: I0317 18:25:24.631454 1415 
reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-lib-modules\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631515 kubelet[1415]: I0317 18:25:24.631506 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-cilium-cgroup\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631564 kubelet[1415]: I0317 18:25:24.631555 1415 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-host-proc-sys-kernel\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631613 kubelet[1415]: I0317 18:25:24.631604 1415 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6b472b8c-07f0-4e48-b039-a6b958371b88-hubble-tls\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631662 kubelet[1415]: I0317 18:25:24.631654 1415 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b472b8c-07f0-4e48-b039-a6b958371b88-xtables-lock\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.631746 kubelet[1415]: I0317 18:25:24.631736 1415 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6b472b8c-07f0-4e48-b039-a6b958371b88-clustermesh-secrets\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:24.719573 systemd[1]: Removed slice kubepods-burstable-pod6b472b8c_07f0_4e48_b039_a6b958371b88.slice. Mar 17 18:25:24.719654 systemd[1]: kubepods-burstable-pod6b472b8c_07f0_4e48_b039_a6b958371b88.slice: Consumed 6.520s CPU time. Mar 17 18:25:25.097857 systemd[1]: var-lib-kubelet-pods-6b472b8c\x2d07f0\x2d4e48\x2db039\x2da6b958371b88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djxqr8.mount: Deactivated successfully. Mar 17 18:25:25.097951 systemd[1]: var-lib-kubelet-pods-6b472b8c\x2d07f0\x2d4e48\x2db039\x2da6b958371b88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
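The systemd mount-unit names above (for example var-lib-kubelet-pods-…-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount) use systemd's unit-name escaping, which turns "/" into "-" and hex-escapes characters such as "-" (\x2d) and "~" (\x7e). A rough Go approximation of that escaping, as a sketch only (real systemd-escape(1) has additional rules, e.g. for a leading dot):

// Illustrative sketch only — an approximation of systemd unit-name escaping.
package main

import (
	"fmt"
	"strings"
)

func systemdEscapePath(path string) string {
	var b strings.Builder
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c) // allowed characters pass through unchanged
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else becomes \xNN
		}
	}
	return b.String()
}

func main() {
	fmt.Println(systemdEscapePath("var/lib/kubelet/pods/6b472b8c-07f0/volumes/kubernetes.io~projected/hubble-tls"))
}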
Mar 17 18:25:25.178241 kubelet[1415]: E0317 18:25:25.178206 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:25.317344 kubelet[1415]: I0317 18:25:25.317305 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" path="/var/lib/kubelet/pods/6b472b8c-07f0-4e48-b039-a6b958371b88/volumes" Mar 17 18:25:26.178857 kubelet[1415]: E0317 18:25:26.178814 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:27.178971 kubelet[1415]: E0317 18:25:27.178919 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:27.296444 kubelet[1415]: E0317 18:25:27.296409 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="apply-sysctl-overwrites" Mar 17 18:25:27.296608 kubelet[1415]: E0317 18:25:27.296596 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="mount-bpf-fs" Mar 17 18:25:27.296684 kubelet[1415]: E0317 18:25:27.296673 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="cilium-agent" Mar 17 18:25:27.296780 kubelet[1415]: E0317 18:25:27.296770 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="mount-cgroup" Mar 17 18:25:27.296833 kubelet[1415]: E0317 18:25:27.296823 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="clean-cilium-state" Mar 17 18:25:27.296905 kubelet[1415]: I0317 18:25:27.296894 1415 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b472b8c-07f0-4e48-b039-a6b958371b88" containerName="cilium-agent" Mar 17 18:25:27.297772 kubelet[1415]: E0317 18:25:27.297742 1415 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:25:27.301614 systemd[1]: Created slice kubepods-besteffort-podc31cc409_155f_41a5_935c_bf3bdb24adbb.slice. Mar 17 18:25:27.324950 systemd[1]: Created slice kubepods-burstable-poddada4ec7_3c23_41fe_831a_817a34770dfa.slice. 
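The reconciler entries that follow attach the volume set for the new cilium-5qhgs pod: a mix of hostPath mounts (bpf-maps, cilium-run, cilium-cgroup, hostproc, cni-path, …), secrets, a configMap, and a projected service-account token. As a rough illustration of how two of those hostPath volumes would be declared (the host paths follow the upstream Cilium daemonset and are assumptions, not values read from this log):

// Illustrative sketch only — two hostPath volumes of the kind attached below.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

var hostPathDirOrCreate = corev1.HostPathDirectoryOrCreate

var ciliumHostPathVolumes = []corev1.Volume{
	{
		Name: "bpf-maps",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/sys/fs/bpf", Type: &hostPathDirOrCreate}, // assumed path
		},
	},
	{
		Name: "cilium-run",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/var/run/cilium", Type: &hostPathDirOrCreate}, // assumed path
		},
	},
}

func main() {
	for _, v := range ciliumHostPathVolumes {
		fmt.Println(v.Name, "->", v.HostPath.Path)
	}
}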
Mar 17 18:25:27.447276 kubelet[1415]: I0317 18:25:27.447165 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwldc\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-kube-api-access-pwldc\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447276 kubelet[1415]: I0317 18:25:27.447219 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c31cc409-155f-41a5-935c-bf3bdb24adbb-cilium-config-path\") pod \"cilium-operator-5d85765b45-ctclk\" (UID: \"c31cc409-155f-41a5-935c-bf3bdb24adbb\") " pod="kube-system/cilium-operator-5d85765b45-ctclk" Mar 17 18:25:27.447276 kubelet[1415]: I0317 18:25:27.447250 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-hostproc\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447276 kubelet[1415]: I0317 18:25:27.447268 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-lib-modules\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447465 kubelet[1415]: I0317 18:25:27.447283 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-net\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447465 kubelet[1415]: I0317 18:25:27.447301 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-kernel\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447465 kubelet[1415]: I0317 18:25:27.447318 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-run\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447465 kubelet[1415]: I0317 18:25:27.447332 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-clustermesh-secrets\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447465 kubelet[1415]: I0317 18:25:27.447347 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-hubble-tls\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447566 kubelet[1415]: I0317 18:25:27.447362 1415 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w2rv\" (UniqueName: \"kubernetes.io/projected/c31cc409-155f-41a5-935c-bf3bdb24adbb-kube-api-access-8w2rv\") pod \"cilium-operator-5d85765b45-ctclk\" (UID: \"c31cc409-155f-41a5-935c-bf3bdb24adbb\") " pod="kube-system/cilium-operator-5d85765b45-ctclk" Mar 17 18:25:27.447566 kubelet[1415]: I0317 18:25:27.447377 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-xtables-lock\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447566 kubelet[1415]: I0317 18:25:27.447391 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cni-path\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447566 kubelet[1415]: I0317 18:25:27.447405 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-etc-cni-netd\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447566 kubelet[1415]: I0317 18:25:27.447421 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-bpf-maps\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447667 kubelet[1415]: I0317 18:25:27.447436 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-cgroup\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447667 kubelet[1415]: I0317 18:25:27.447452 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-config-path\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.447667 kubelet[1415]: I0317 18:25:27.447467 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-ipsec-secrets\") pod \"cilium-5qhgs\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " pod="kube-system/cilium-5qhgs" Mar 17 18:25:27.465742 kubelet[1415]: E0317 18:25:27.465678 1415 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-pwldc lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5qhgs" podUID="dada4ec7-3c23-41fe-831a-817a34770dfa" Mar 17 18:25:27.603442 kubelet[1415]: E0317 18:25:27.603409 1415 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:27.603916 env[1212]: time="2025-03-17T18:25:27.603861681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ctclk,Uid:c31cc409-155f-41a5-935c-bf3bdb24adbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:25:27.615263 env[1212]: time="2025-03-17T18:25:27.615202014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:25:27.615263 env[1212]: time="2025-03-17T18:25:27.615239814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:25:27.615263 env[1212]: time="2025-03-17T18:25:27.615249534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:25:27.615580 env[1212]: time="2025-03-17T18:25:27.615364015Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8f38313a864c817cb2c5e38e4978d10400a712c0952802046c984ab23748d159 pid=2998 runtime=io.containerd.runc.v2 Mar 17 18:25:27.629086 systemd[1]: Started cri-containerd-8f38313a864c817cb2c5e38e4978d10400a712c0952802046c984ab23748d159.scope. Mar 17 18:25:27.673675 env[1212]: time="2025-03-17T18:25:27.673627644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ctclk,Uid:c31cc409-155f-41a5-935c-bf3bdb24adbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f38313a864c817cb2c5e38e4978d10400a712c0952802046c984ab23748d159\"" Mar 17 18:25:27.674376 kubelet[1415]: E0317 18:25:27.674353 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:27.675176 env[1212]: time="2025-03-17T18:25:27.675148171Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:25:28.179959 kubelet[1415]: E0317 18:25:28.179917 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:28.470612 kubelet[1415]: I0317 18:25:28.470274 1415 setters.go:600] "Node became not ready" node="10.0.0.98" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:25:28Z","lastTransitionTime":"2025-03-17T18:25:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:25:28.557420 kubelet[1415]: I0317 18:25:28.557369 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwldc\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-kube-api-access-pwldc\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557420 kubelet[1415]: I0317 18:25:28.557411 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-ipsec-secrets\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 
18:25:28.557577 kubelet[1415]: I0317 18:25:28.557430 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-kernel\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557577 kubelet[1415]: I0317 18:25:28.557445 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-run\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557577 kubelet[1415]: I0317 18:25:28.557467 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-clustermesh-secrets\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557577 kubelet[1415]: I0317 18:25:28.557484 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-net\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557577 kubelet[1415]: I0317 18:25:28.557500 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-hubble-tls\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557577 kubelet[1415]: I0317 18:25:28.557515 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-xtables-lock\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557533 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-hostproc\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557546 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-lib-modules\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557559 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cni-path\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557574 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-etc-cni-netd\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557588 1415 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-bpf-maps\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557728 kubelet[1415]: I0317 18:25:28.557606 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-cgroup\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.557856 kubelet[1415]: I0317 18:25:28.557623 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-config-path\") pod \"dada4ec7-3c23-41fe-831a-817a34770dfa\" (UID: \"dada4ec7-3c23-41fe-831a-817a34770dfa\") " Mar 17 18:25:28.559739 kubelet[1415]: I0317 18:25:28.557917 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559739 kubelet[1415]: I0317 18:25:28.557962 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559739 kubelet[1415]: I0317 18:25:28.557964 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cni-path" (OuterVolumeSpecName: "cni-path") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559739 kubelet[1415]: I0317 18:25:28.557978 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559739 kubelet[1415]: I0317 18:25:28.557997 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-hostproc" (OuterVolumeSpecName: "hostproc") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559925 kubelet[1415]: I0317 18:25:28.558026 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559925 kubelet[1415]: I0317 18:25:28.558045 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559925 kubelet[1415]: I0317 18:25:28.558063 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559925 kubelet[1415]: I0317 18:25:28.558078 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.559925 kubelet[1415]: I0317 18:25:28.558323 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:25:28.562484 kubelet[1415]: I0317 18:25:28.559330 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:25:28.561379 systemd[1]: var-lib-kubelet-pods-dada4ec7\x2d3c23\x2d41fe\x2d831a\x2d817a34770dfa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwldc.mount: Deactivated successfully. Mar 17 18:25:28.561460 systemd[1]: var-lib-kubelet-pods-dada4ec7\x2d3c23\x2d41fe\x2d831a\x2d817a34770dfa-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:25:28.561509 systemd[1]: var-lib-kubelet-pods-dada4ec7\x2d3c23\x2d41fe\x2d831a\x2d817a34770dfa-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:25:28.563602 systemd[1]: var-lib-kubelet-pods-dada4ec7\x2d3c23\x2d41fe\x2d831a\x2d817a34770dfa-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:25:28.564213 kubelet[1415]: I0317 18:25:28.564180 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:25:28.564472 kubelet[1415]: I0317 18:25:28.564447 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:25:28.564711 kubelet[1415]: I0317 18:25:28.564653 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:25:28.564935 kubelet[1415]: I0317 18:25:28.564914 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-kube-api-access-pwldc" (OuterVolumeSpecName: "kube-api-access-pwldc") pod "dada4ec7-3c23-41fe-831a-817a34770dfa" (UID: "dada4ec7-3c23-41fe-831a-817a34770dfa"). InnerVolumeSpecName "kube-api-access-pwldc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:25:28.658773 kubelet[1415]: I0317 18:25:28.658719 1415 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-net\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.658901 kubelet[1415]: I0317 18:25:28.658891 1415 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-hubble-tls\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.658954 kubelet[1415]: I0317 18:25:28.658945 1415 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-xtables-lock\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659030 kubelet[1415]: I0317 18:25:28.659020 1415 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-hostproc\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659096 kubelet[1415]: I0317 18:25:28.659077 1415 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-lib-modules\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659163 kubelet[1415]: I0317 18:25:28.659151 1415 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cni-path\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659216 kubelet[1415]: I0317 18:25:28.659207 1415 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-etc-cni-netd\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659267 kubelet[1415]: I0317 18:25:28.659257 1415 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-bpf-maps\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659328 kubelet[1415]: I0317 
18:25:28.659318 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-config-path\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659378 kubelet[1415]: I0317 18:25:28.659369 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-cgroup\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659427 kubelet[1415]: I0317 18:25:28.659418 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-ipsec-secrets\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659481 kubelet[1415]: I0317 18:25:28.659470 1415 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-host-proc-sys-kernel\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659537 kubelet[1415]: I0317 18:25:28.659528 1415 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dada4ec7-3c23-41fe-831a-817a34770dfa-cilium-run\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659590 kubelet[1415]: I0317 18:25:28.659581 1415 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dada4ec7-3c23-41fe-831a-817a34770dfa-clustermesh-secrets\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.659646 kubelet[1415]: I0317 18:25:28.659637 1415 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pwldc\" (UniqueName: \"kubernetes.io/projected/dada4ec7-3c23-41fe-831a-817a34770dfa-kube-api-access-pwldc\") on node \"10.0.0.98\" DevicePath \"\"" Mar 17 18:25:28.794366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3846766939.mount: Deactivated successfully. 
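
The mount units systemd deactivates a few lines above (var-lib-kubelet-pods-dada4ec7\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwldc.mount and friends) are simply the kubelet volume paths under /var/lib/kubelet/pods/<uid>/volumes/ run through systemd's unit-name escaping: '/' becomes '-', and characters such as '-' and '~' are written as hex escapes (\x2d, \x7e). A minimal Python sketch of that mapping, sufficient for the paths seen in these entries (the real systemd-escape handles further edge cases such as a leading dot):

    def systemd_escape_path(path: str) -> str:
        # Keep alphanumerics, '_' and '.'; hex-escape everything else; '/' becomes '-'.
        def esc(ch: str) -> str:
            if ch.isalnum() or ch in "_.":
                return ch
            return "".join("\\x%02x" % b for b in ch.encode())
        segments = path.strip("/").split("/")
        return "-".join("".join(esc(c) for c in seg) for seg in segments)

    uid = "dada4ec7-3c23-41fe-831a-817a34770dfa"
    vol = "/var/lib/kubelet/pods/" + uid + "/volumes/kubernetes.io~projected/kube-api-access-pwldc"
    print(systemd_escape_path(vol) + ".mount")
    # -> matches the kube-api-access-pwldc mount unit deactivated above
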
Mar 17 18:25:29.180280 kubelet[1415]: E0317 18:25:29.180227 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:29.309026 env[1212]: time="2025-03-17T18:25:29.308976196Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:29.310417 env[1212]: time="2025-03-17T18:25:29.310381522Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:29.311960 env[1212]: time="2025-03-17T18:25:29.311925568Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:25:29.312467 env[1212]: time="2025-03-17T18:25:29.312442291Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:25:29.314848 env[1212]: time="2025-03-17T18:25:29.314819380Z" level=info msg="CreateContainer within sandbox \"8f38313a864c817cb2c5e38e4978d10400a712c0952802046c984ab23748d159\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:25:29.320072 systemd[1]: Removed slice kubepods-burstable-poddada4ec7_3c23_41fe_831a_817a34770dfa.slice. Mar 17 18:25:29.326962 env[1212]: time="2025-03-17T18:25:29.326911629Z" level=info msg="CreateContainer within sandbox \"8f38313a864c817cb2c5e38e4978d10400a712c0952802046c984ab23748d159\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"52db3708e5da729badbbfcf153f3d5bef01e8ede46445f76fc22d5781c8ee2e5\"" Mar 17 18:25:29.327433 env[1212]: time="2025-03-17T18:25:29.327342071Z" level=info msg="StartContainer for \"52db3708e5da729badbbfcf153f3d5bef01e8ede46445f76fc22d5781c8ee2e5\"" Mar 17 18:25:29.340061 systemd[1]: Started cri-containerd-52db3708e5da729badbbfcf153f3d5bef01e8ede46445f76fc22d5781c8ee2e5.scope. 
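
"Removed slice kubepods-burstable-poddada4ec7_3c23_41fe_831a_817a34770dfa.slice" is the systemd cgroup driver tearing down the old pod's cgroup: the slice name is built from the pod's QoS class plus its UID with dashes replaced by underscores, and the replacement pod's slice (kubepods-burstable-podf2594d03_...) is created the same way a moment later. A small sketch of that naming, assuming the systemd cgroup driver and a Burstable pod as in this log:

    def pod_slice_name(qos: str, pod_uid: str) -> str:
        # systemd cgroup driver: kubepods-<qos>-pod<uid with '-' replaced by '_'>.slice
        return "kubepods-" + qos + "-pod" + pod_uid.replace("-", "_") + ".slice"

    print(pod_slice_name("burstable", "dada4ec7-3c23-41fe-831a-817a34770dfa"))
    # -> kubepods-burstable-poddada4ec7_3c23_41fe_831a_817a34770dfa.slice
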
Mar 17 18:25:29.380881 env[1212]: time="2025-03-17T18:25:29.380830009Z" level=info msg="StartContainer for \"52db3708e5da729badbbfcf153f3d5bef01e8ede46445f76fc22d5781c8ee2e5\" returns successfully" Mar 17 18:25:29.425831 kubelet[1415]: E0317 18:25:29.425799 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:29.435239 kubelet[1415]: I0317 18:25:29.434781 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ctclk" podStartSLOduration=0.795999183 podStartE2EDuration="2.434653388s" podCreationTimestamp="2025-03-17 18:25:27 +0000 UTC" firstStartedPulling="2025-03-17 18:25:27.67491445 +0000 UTC m=+51.115291871" lastFinishedPulling="2025-03-17 18:25:29.313568695 +0000 UTC m=+52.753946076" observedRunningTime="2025-03-17 18:25:29.434310946 +0000 UTC m=+52.874688367" watchObservedRunningTime="2025-03-17 18:25:29.434653388 +0000 UTC m=+52.875030809" Mar 17 18:25:29.472138 systemd[1]: Created slice kubepods-burstable-podf2594d03_8a3e_4e47_bafa_c7cfb69f1b52.slice. Mar 17 18:25:29.563450 kubelet[1415]: I0317 18:25:29.563369 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-etc-cni-netd\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563450 kubelet[1415]: I0317 18:25:29.563416 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-cilium-config-path\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563626 kubelet[1415]: I0317 18:25:29.563520 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-cilium-ipsec-secrets\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563653 kubelet[1415]: I0317 18:25:29.563634 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-host-proc-sys-kernel\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563684 kubelet[1415]: I0317 18:25:29.563662 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-hubble-tls\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563734 kubelet[1415]: I0317 18:25:29.563682 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-hostproc\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563734 kubelet[1415]: I0317 18:25:29.563719 1415 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-lib-modules\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563785 kubelet[1415]: I0317 18:25:29.563735 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-xtables-lock\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563785 kubelet[1415]: I0317 18:25:29.563758 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-clustermesh-secrets\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563785 kubelet[1415]: I0317 18:25:29.563775 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-host-proc-sys-net\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563847 kubelet[1415]: I0317 18:25:29.563794 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnt4h\" (UniqueName: \"kubernetes.io/projected/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-kube-api-access-mnt4h\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563847 kubelet[1415]: I0317 18:25:29.563821 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-bpf-maps\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563847 kubelet[1415]: I0317 18:25:29.563843 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-cilium-cgroup\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563912 kubelet[1415]: I0317 18:25:29.563859 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-cni-path\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.563939 kubelet[1415]: I0317 18:25:29.563912 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2594d03-8a3e-4e47-bafa-c7cfb69f1b52-cilium-run\") pod \"cilium-2wcmp\" (UID: \"f2594d03-8a3e-4e47-bafa-c7cfb69f1b52\") " pod="kube-system/cilium-2wcmp" Mar 17 18:25:29.784600 kubelet[1415]: E0317 18:25:29.784064 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:29.784732 env[1212]: 
time="2025-03-17T18:25:29.784535332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wcmp,Uid:f2594d03-8a3e-4e47-bafa-c7cfb69f1b52,Namespace:kube-system,Attempt:0,}" Mar 17 18:25:29.795657 env[1212]: time="2025-03-17T18:25:29.795590057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:25:29.795766 env[1212]: time="2025-03-17T18:25:29.795670337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:25:29.795766 env[1212]: time="2025-03-17T18:25:29.795705697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:25:29.795921 env[1212]: time="2025-03-17T18:25:29.795877818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6 pid=3083 runtime=io.containerd.runc.v2 Mar 17 18:25:29.810009 systemd[1]: Started cri-containerd-f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6.scope. Mar 17 18:25:29.848095 env[1212]: time="2025-03-17T18:25:29.848052350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2wcmp,Uid:f2594d03-8a3e-4e47-bafa-c7cfb69f1b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\"" Mar 17 18:25:29.848742 kubelet[1415]: E0317 18:25:29.848719 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:29.850823 env[1212]: time="2025-03-17T18:25:29.850771841Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:25:29.876200 env[1212]: time="2025-03-17T18:25:29.876144665Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2\"" Mar 17 18:25:29.876867 env[1212]: time="2025-03-17T18:25:29.876843307Z" level=info msg="StartContainer for \"0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2\"" Mar 17 18:25:29.889578 systemd[1]: Started cri-containerd-0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2.scope. Mar 17 18:25:29.921172 env[1212]: time="2025-03-17T18:25:29.921127288Z" level=info msg="StartContainer for \"0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2\" returns successfully" Mar 17 18:25:29.934256 systemd[1]: cri-containerd-0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2.scope: Deactivated successfully. 
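
The pod_startup_latency_tracker entry for cilium-operator-5d85765b45-ctclk above reports podStartE2EDuration="2.434653388s" and podStartSLOduration=0.795999183, and both appear to be derived from the timestamps printed alongside them: the E2E duration is the observed running time minus the pod creation time, while the SLO duration additionally subtracts the image-pull window taken from the monotonic m=+ offsets (which is why it differs slightly from a wall-clock subtraction). A short check of that arithmetic using the values from the entry:

    # Values copied from the pod_startup_latency_tracker entry above.
    creation   = 27.0            # podCreationTimestamp 18:25:27, seconds past 18:25
    observed   = 29.434653388    # watchObservedRunningTime, seconds past 18:25
    pull_start = 51.115291871    # firstStartedPulling, monotonic offset (m=+...)
    pull_end   = 52.753946076    # lastFinishedPulling, monotonic offset (m=+...)

    e2e = observed - creation              # 2.434653388 s == podStartE2EDuration
    slo = e2e - (pull_end - pull_start)    # 0.795999183 s == podStartSLOduration
    print(f"e2e={e2e:.9f}s slo={slo:.9f}s")
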
Mar 17 18:25:29.989104 env[1212]: time="2025-03-17T18:25:29.989059244Z" level=info msg="shim disconnected" id=0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2 Mar 17 18:25:29.989104 env[1212]: time="2025-03-17T18:25:29.989096724Z" level=warning msg="cleaning up after shim disconnected" id=0803d1a78ab7ead182e336f8cc4213aff5db1f78385f648c182cfa9ecd0ff3f2 namespace=k8s.io Mar 17 18:25:29.989104 env[1212]: time="2025-03-17T18:25:29.989106164Z" level=info msg="cleaning up dead shim" Mar 17 18:25:29.995256 env[1212]: time="2025-03-17T18:25:29.995222429Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3164 runtime=io.containerd.runc.v2\n" Mar 17 18:25:30.181125 kubelet[1415]: E0317 18:25:30.181071 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:30.428813 kubelet[1415]: E0317 18:25:30.428477 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:30.428813 kubelet[1415]: E0317 18:25:30.428592 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:30.430390 env[1212]: time="2025-03-17T18:25:30.430352731Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:25:30.440687 env[1212]: time="2025-03-17T18:25:30.440444689Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd\"" Mar 17 18:25:30.441303 env[1212]: time="2025-03-17T18:25:30.441273212Z" level=info msg="StartContainer for \"9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd\"" Mar 17 18:25:30.454644 systemd[1]: Started cri-containerd-9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd.scope. Mar 17 18:25:30.479996 env[1212]: time="2025-03-17T18:25:30.479951120Z" level=info msg="StartContainer for \"9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd\" returns successfully" Mar 17 18:25:30.496259 systemd[1]: cri-containerd-9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd.scope: Deactivated successfully. 
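
The recurring "Nameserver limits exceeded" events come from the kubelet's DNS configuration: the node's resolv.conf lists more nameservers than the kubelet's limit of three, so only the first three (1.1.1.1, 1.0.0.1 and 8.8.8.8 here) are applied and the rest are omitted. A minimal sketch of that truncation, with a hypothetical fourth nameserver standing in for whatever was actually dropped on this node (the real kubelet also caps search domains and search-path length):

    MAX_NAMESERVERS = 3  # limit the kubelet applies to resolv.conf nameservers

    def applied_nameservers(resolv_conf_text: str) -> list[str]:
        servers = [parts[1]
                   for parts in (line.split() for line in resolv_conf_text.splitlines())
                   if len(parts) > 1 and parts[0] == "nameserver"]
        return servers[:MAX_NAMESERVERS]

    sample = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    print(applied_nameservers(sample))   # ['1.1.1.1', '1.0.0.1', '8.8.8.8'], 9.9.9.9 omitted
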
Mar 17 18:25:30.514732 env[1212]: time="2025-03-17T18:25:30.514670093Z" level=info msg="shim disconnected" id=9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd Mar 17 18:25:30.514732 env[1212]: time="2025-03-17T18:25:30.514732333Z" level=warning msg="cleaning up after shim disconnected" id=9893d29df00e29d058d374edd6a2bbbcea47b17c589790ac51b50ca0781d7bfd namespace=k8s.io Mar 17 18:25:30.514918 env[1212]: time="2025-03-17T18:25:30.514742853Z" level=info msg="cleaning up dead shim" Mar 17 18:25:30.521579 env[1212]: time="2025-03-17T18:25:30.521538399Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3227 runtime=io.containerd.runc.v2\n" Mar 17 18:25:31.181659 kubelet[1415]: E0317 18:25:31.181611 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:31.318323 kubelet[1415]: I0317 18:25:31.318286 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dada4ec7-3c23-41fe-831a-817a34770dfa" path="/var/lib/kubelet/pods/dada4ec7-3c23-41fe-831a-817a34770dfa/volumes" Mar 17 18:25:31.432146 kubelet[1415]: E0317 18:25:31.431668 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:31.433486 env[1212]: time="2025-03-17T18:25:31.433450459Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:25:31.449863 env[1212]: time="2025-03-17T18:25:31.449824078Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366\"" Mar 17 18:25:31.450362 env[1212]: time="2025-03-17T18:25:31.450340160Z" level=info msg="StartContainer for \"92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366\"" Mar 17 18:25:31.467178 systemd[1]: Started cri-containerd-92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366.scope. Mar 17 18:25:31.498902 env[1212]: time="2025-03-17T18:25:31.498842373Z" level=info msg="StartContainer for \"92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366\" returns successfully" Mar 17 18:25:31.501802 systemd[1]: cri-containerd-92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366.scope: Deactivated successfully. Mar 17 18:25:31.520973 env[1212]: time="2025-03-17T18:25:31.520928012Z" level=info msg="shim disconnected" id=92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366 Mar 17 18:25:31.521220 env[1212]: time="2025-03-17T18:25:31.521200293Z" level=warning msg="cleaning up after shim disconnected" id=92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366 namespace=k8s.io Mar 17 18:25:31.521293 env[1212]: time="2025-03-17T18:25:31.521277653Z" level=info msg="cleaning up dead shim" Mar 17 18:25:31.527652 env[1212]: time="2025-03-17T18:25:31.527616596Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3285 runtime=io.containerd.runc.v2\n" Mar 17 18:25:31.553219 systemd[1]: run-containerd-runc-k8s.io-92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366-runc.HCZSuL.mount: Deactivated successfully. 
Mar 17 18:25:31.553315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-92ac12fd9960078afbb40227c397bf4c5b974760f6515d29215b7034efc59366-rootfs.mount: Deactivated successfully. Mar 17 18:25:32.182241 kubelet[1415]: E0317 18:25:32.182197 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:32.299437 kubelet[1415]: E0317 18:25:32.299399 1415 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:25:32.436506 kubelet[1415]: E0317 18:25:32.436207 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:32.447512 env[1212]: time="2025-03-17T18:25:32.445747021Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:25:32.467670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273588052.mount: Deactivated successfully. Mar 17 18:25:32.469754 env[1212]: time="2025-03-17T18:25:32.469704782Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3\"" Mar 17 18:25:32.470550 env[1212]: time="2025-03-17T18:25:32.470513464Z" level=info msg="StartContainer for \"e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3\"" Mar 17 18:25:32.485982 systemd[1]: Started cri-containerd-e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3.scope. Mar 17 18:25:32.516624 env[1212]: time="2025-03-17T18:25:32.516575779Z" level=info msg="StartContainer for \"e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3\" returns successfully" Mar 17 18:25:32.516830 systemd[1]: cri-containerd-e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3.scope: Deactivated successfully. Mar 17 18:25:32.536780 env[1212]: time="2025-03-17T18:25:32.536730206Z" level=info msg="shim disconnected" id=e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3 Mar 17 18:25:32.536780 env[1212]: time="2025-03-17T18:25:32.536778806Z" level=warning msg="cleaning up after shim disconnected" id=e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3 namespace=k8s.io Mar 17 18:25:32.536780 env[1212]: time="2025-03-17T18:25:32.536787527Z" level=info msg="cleaning up dead shim" Mar 17 18:25:32.542554 env[1212]: time="2025-03-17T18:25:32.542519346Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3340 runtime=io.containerd.runc.v2\n" Mar 17 18:25:32.553392 systemd[1]: run-containerd-runc-k8s.io-e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3-runc.5MwI5D.mount: Deactivated successfully. Mar 17 18:25:32.553485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e47ef570a9da8fe3a9cbb1fd143583b2e214e66ed2004fa1f92d71eeb5046aa3-rootfs.mount: Deactivated successfully. 
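
Each Cilium init step inside sandbox f2c8c4461cc3... follows the same pattern in the entries above: containerd logs a CreateContainer for &ContainerMetadata{Name:<step>,...} that returns a container id, a StartContainer for that id returns successfully, and then the short-lived scope is deactivated and its shim cleaned up; so far the sequence is mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state. A rough Python sketch for pulling that name-to-id sequence out of a journal excerpt like this one (the regexes only target the message shapes visible here, where the quotes inside msg="..." appear backslash-escaped):

    import re

    CREATE_RE = re.compile(r'ContainerMetadata\{Name:([A-Za-z0-9_-]+),Attempt:\d+,\}'
                           r' returns container id \\"([0-9a-f]+)\\"')
    START_RE = re.compile(r'StartContainer for \\"([0-9a-f]+)\\" returns successfully')

    def container_sequence(journal_text: str):
        # Yield (name, container_id, started) in creation order for the excerpt.
        started = set(START_RE.findall(journal_text))
        for name, cid in CREATE_RE.findall(journal_text):
            yield name, cid, cid in started

    # Over this excerpt the generator yields cilium-operator plus the init steps
    # mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state,
    # each with started == True.
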
Mar 17 18:25:33.182641 kubelet[1415]: E0317 18:25:33.182595 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:33.440369 kubelet[1415]: E0317 18:25:33.439739 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:33.442064 env[1212]: time="2025-03-17T18:25:33.442024790Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:25:33.454066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2478054906.mount: Deactivated successfully. Mar 17 18:25:33.458193 env[1212]: time="2025-03-17T18:25:33.458145521Z" level=info msg="CreateContainer within sandbox \"f2c8c4461cc30875f544b28bb2072f848b06c8cd1f2eac956a1a1ca0f4235fd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e\"" Mar 17 18:25:33.458651 env[1212]: time="2025-03-17T18:25:33.458626202Z" level=info msg="StartContainer for \"03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e\"" Mar 17 18:25:33.473080 systemd[1]: Started cri-containerd-03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e.scope. Mar 17 18:25:33.528914 env[1212]: time="2025-03-17T18:25:33.528867503Z" level=info msg="StartContainer for \"03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e\" returns successfully" Mar 17 18:25:33.764717 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:25:34.183011 kubelet[1415]: E0317 18:25:34.182968 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:34.443615 kubelet[1415]: E0317 18:25:34.443137 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:34.462147 kubelet[1415]: I0317 18:25:34.462094 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2wcmp" podStartSLOduration=5.462072267 podStartE2EDuration="5.462072267s" podCreationTimestamp="2025-03-17 18:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:25:34.461794866 +0000 UTC m=+57.902172327" watchObservedRunningTime="2025-03-17 18:25:34.462072267 +0000 UTC m=+57.902449648" Mar 17 18:25:35.183706 kubelet[1415]: E0317 18:25:35.183659 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:35.785595 kubelet[1415]: E0317 18:25:35.785551 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:35.844571 systemd[1]: run-containerd-runc-k8s.io-03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e-runc.x6QVW7.mount: Deactivated successfully. 
Mar 17 18:25:36.184745 kubelet[1415]: E0317 18:25:36.184688 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:36.552976 systemd-networkd[1043]: lxc_health: Link UP Mar 17 18:25:36.560549 systemd-networkd[1043]: lxc_health: Gained carrier Mar 17 18:25:36.560703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:25:37.148455 kubelet[1415]: E0317 18:25:37.148392 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:37.171781 env[1212]: time="2025-03-17T18:25:37.171742424Z" level=info msg="StopPodSandbox for \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\"" Mar 17 18:25:37.172101 env[1212]: time="2025-03-17T18:25:37.171830664Z" level=info msg="TearDown network for sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" successfully" Mar 17 18:25:37.172101 env[1212]: time="2025-03-17T18:25:37.171864544Z" level=info msg="StopPodSandbox for \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" returns successfully" Mar 17 18:25:37.172375 env[1212]: time="2025-03-17T18:25:37.172347905Z" level=info msg="RemovePodSandbox for \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\"" Mar 17 18:25:37.172510 env[1212]: time="2025-03-17T18:25:37.172472546Z" level=info msg="Forcibly stopping sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\"" Mar 17 18:25:37.172630 env[1212]: time="2025-03-17T18:25:37.172609146Z" level=info msg="TearDown network for sandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" successfully" Mar 17 18:25:37.176477 env[1212]: time="2025-03-17T18:25:37.176439275Z" level=info msg="RemovePodSandbox \"ef4eb56cd14325231553f0c10093d6a11341522fa877108e77b1a68e0f6aee07\" returns successfully" Mar 17 18:25:37.185579 kubelet[1415]: E0317 18:25:37.185537 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:37.786319 kubelet[1415]: E0317 18:25:37.786276 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:37.804832 systemd-networkd[1043]: lxc_health: Gained IPv6LL Mar 17 18:25:37.991332 systemd[1]: run-containerd-runc-k8s.io-03efd7625ba16136deb94ff0f73c4330a0db4651f11a520ce89737cdbd81ac1e-runc.9UOCUl.mount: Deactivated successfully. 
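
systemd-networkd reports the lxc_health interface coming up and gaining an IPv6 link-local address; this is the veth endpoint Cilium sets up for its own health-check endpoint, so its appearance lines up with the cilium-agent container that started a few seconds earlier. One way to confirm the link state from the node is to read the standard sysfs attributes, as in this sketch (interface name taken from the log; attributes that are missing or unreadable, e.g. carrier while the link is down, come back as None):

    from pathlib import Path

    def link_state(ifname: str) -> dict:
        base = Path("/sys/class/net") / ifname
        def read(attr: str):
            try:
                return (base / attr).read_text().strip()
            except OSError:
                return None
        return {"operstate": read("operstate"), "carrier": read("carrier")}

    print(link_state("lxc_health"))   # e.g. {'operstate': 'up', 'carrier': '1'}
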
Mar 17 18:25:38.186271 kubelet[1415]: E0317 18:25:38.186201 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:38.449577 kubelet[1415]: E0317 18:25:38.449319 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:39.186729 kubelet[1415]: E0317 18:25:39.186681 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:39.450803 kubelet[1415]: E0317 18:25:39.450481 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:25:40.187334 kubelet[1415]: E0317 18:25:40.187267 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:41.188414 kubelet[1415]: E0317 18:25:41.188363 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:42.191149 kubelet[1415]: E0317 18:25:42.191103 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:43.191785 kubelet[1415]: E0317 18:25:43.191742 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:25:44.192105 kubelet[1415]: E0317 18:25:44.192047 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"