Mar 17 18:31:26.747573 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:31:26.747591 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:31:26.747599 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:31:26.747605 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Mar 17 18:31:26.747609 kernel: random: crng init done
Mar 17 18:31:26.747615 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:31:26.747621 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Mar 17 18:31:26.747628 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:31:26.747633 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747639 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747644 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747650 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747655 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747661 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747668 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747674 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747680 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:31:26.747686 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 18:31:26.747691 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:31:26.747697 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:31:26.747702 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Mar 17 18:31:26.747708 kernel: Zone ranges:
Mar 17 18:31:26.747714 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:31:26.747720 kernel: DMA32 empty
Mar 17 18:31:26.747726 kernel: Normal empty
Mar 17 18:31:26.747731 kernel: Movable zone start for each node
Mar 17 18:31:26.747737 kernel: Early memory node ranges
Mar 17 18:31:26.747742 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Mar 17 18:31:26.747748 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Mar 17 18:31:26.747754 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Mar 17 18:31:26.747760 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Mar 17 18:31:26.747765 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Mar 17 18:31:26.747771 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Mar 17 18:31:26.747776 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Mar 17 18:31:26.747782 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:31:26.747789 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 18:31:26.747794 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:31:26.747800 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:31:26.747806 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:31:26.747811 kernel: psci: Trusted OS migration not required
Mar 17 18:31:26.747819 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:31:26.747825 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 18:31:26.747833 kernel: ACPI: SRAT not present
Mar 17 18:31:26.747839 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:31:26.747845 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:31:26.747852 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 18:31:26.747858 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:31:26.747864 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:31:26.747870 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:31:26.747876 kernel: CPU features: detected: Spectre-v4
Mar 17 18:31:26.747882 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:31:26.747889 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:31:26.747895 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:31:26.747901 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:31:26.747907 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:31:26.747913 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 18:31:26.747919 kernel: Policy zone: DMA
Mar 17 18:31:26.747926 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:31:26.747932 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:31:26.747938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:31:26.747944 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:31:26.747950 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:31:26.747958 kernel: Memory: 2457400K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114888K reserved, 0K cma-reserved)
Mar 17 18:31:26.747964 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:31:26.747970 kernel: trace event string verifier disabled
Mar 17 18:31:26.747976 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:31:26.747983 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:31:26.747989 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:31:26.747995 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:31:26.748001 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:31:26.748008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:31:26.748014 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:31:26.748020 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:31:26.748027 kernel: GICv3: 256 SPIs implemented
Mar 17 18:31:26.748033 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:31:26.748039 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:31:26.748045 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:31:26.748051 kernel: GICv3: 16 PPIs implemented
Mar 17 18:31:26.748057 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 18:31:26.748063 kernel: ACPI: SRAT not present
Mar 17 18:31:26.748069 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 18:31:26.748075 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:31:26.748081 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:31:26.748087 kernel: GICv3: using LPI property table @0x00000000400d0000
Mar 17 18:31:26.748093 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Mar 17 18:31:26.748100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:31:26.748115 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:31:26.748130 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:31:26.748136 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:31:26.748142 kernel: arm-pv: using stolen time PV
Mar 17 18:31:26.748149 kernel: Console: colour dummy device 80x25
Mar 17 18:31:26.748155 kernel: ACPI: Core revision 20210730
Mar 17 18:31:26.748161 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:31:26.748168 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:31:26.748174 kernel: LSM: Security Framework initializing
Mar 17 18:31:26.748182 kernel: SELinux: Initializing.
Mar 17 18:31:26.748188 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:31:26.748194 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:31:26.748200 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:31:26.748206 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 18:31:26.748213 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 18:31:26.748219 kernel: Remapping and enabling EFI services.
Mar 17 18:31:26.748225 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:31:26.748231 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:31:26.748238 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 18:31:26.748245 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Mar 17 18:31:26.748251 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:31:26.748257 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:31:26.748263 kernel: Detected PIPT I-cache on CPU2
Mar 17 18:31:26.748270 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 18:31:26.748276 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Mar 17 18:31:26.748282 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:31:26.748288 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 18:31:26.748295 kernel: Detected PIPT I-cache on CPU3
Mar 17 18:31:26.748302 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 18:31:26.748308 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Mar 17 18:31:26.748314 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:31:26.748321 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 18:31:26.748331 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:31:26.748338 kernel: SMP: Total of 4 processors activated.
Mar 17 18:31:26.748345 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:31:26.748351 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:31:26.748422 kernel: CPU features: detected: Common not Private translations
Mar 17 18:31:26.748429 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:31:26.748435 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:31:26.748442 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:31:26.748452 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:31:26.748458 kernel: CPU features: detected: RAS Extension Support
Mar 17 18:31:26.748465 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 18:31:26.748471 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:31:26.748478 kernel: alternatives: patching kernel code
Mar 17 18:31:26.748485 kernel: devtmpfs: initialized
Mar 17 18:31:26.748492 kernel: KASLR enabled
Mar 17 18:31:26.748499 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:31:26.748505 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:31:26.748512 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:31:26.748518 kernel: SMBIOS 3.0.0 present.
Mar 17 18:31:26.748525 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Mar 17 18:31:26.748531 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:31:26.748538 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:31:26.748546 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:31:26.748552 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:31:26.748559 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:31:26.748566 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
Mar 17 18:31:26.748572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:31:26.748578 kernel: cpuidle: using governor menu
Mar 17 18:31:26.748585 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:31:26.748592 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:31:26.748598 kernel: ACPI: bus type PCI registered
Mar 17 18:31:26.748606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:31:26.748613 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:31:26.748619 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:31:26.748626 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:31:26.748632 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:31:26.748639 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:31:26.748645 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:31:26.748652 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:31:26.748658 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:31:26.748666 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:31:26.748672 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:31:26.748679 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:31:26.748685 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:31:26.748692 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:31:26.748699 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:31:26.748705 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:31:26.748712 kernel: ACPI: Interpreter enabled
Mar 17 18:31:26.748718 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:31:26.748726 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:31:26.748733 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:31:26.748739 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:31:26.748746 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:31:26.748878 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:31:26.748941 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:31:26.749019 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:31:26.749083 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 18:31:26.749161 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 18:31:26.749171 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 18:31:26.749178 kernel: PCI host bridge to bus 0000:00
Mar 17 18:31:26.749252 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 18:31:26.749350 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:31:26.749407 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 18:31:26.749457 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:31:26.749528 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 18:31:26.749597 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:31:26.749668 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 18:31:26.749731 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 18:31:26.749790 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:31:26.749847 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:31:26.749908 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 18:31:26.749966 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 18:31:26.750019 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 18:31:26.750070 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:31:26.750148 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 18:31:26.750159 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:31:26.750166 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:31:26.750172 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:31:26.750181 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:31:26.750188 kernel: iommu: Default domain type: Translated
Mar 17 18:31:26.750194 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:31:26.750201 kernel: vgaarb: loaded
Mar 17 18:31:26.750208 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:31:26.750214 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:31:26.750221 kernel: PTP clock support registered
Mar 17 18:31:26.750227 kernel: Registered efivars operations
Mar 17 18:31:26.750234 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:31:26.750242 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:31:26.750248 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:31:26.750255 kernel: pnp: PnP ACPI init
Mar 17 18:31:26.750322 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 18:31:26.750332 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:31:26.750338 kernel: NET: Registered PF_INET protocol family
Mar 17 18:31:26.750345 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:31:26.750352 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:31:26.750360 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:31:26.750366 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:31:26.750373 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:31:26.750380 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:31:26.750386 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:31:26.750393 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:31:26.750399 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:31:26.750406 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:31:26.750413 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 18:31:26.750421 kernel: kvm [1]: HYP mode not available
Mar 17 18:31:26.750427 kernel: Initialise system trusted keyrings
Mar 17 18:31:26.750434 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:31:26.750440 kernel: Key type asymmetric registered
Mar 17 18:31:26.750447 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:31:26.750453 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:31:26.750460 kernel: io scheduler mq-deadline registered
Mar 17 18:31:26.750466 kernel: io scheduler kyber registered
Mar 17 18:31:26.750473 kernel: io scheduler bfq registered
Mar 17 18:31:26.750481 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:31:26.750487 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:31:26.750494 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:31:26.750553 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 18:31:26.750562 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:31:26.750568 kernel: thunder_xcv, ver 1.0
Mar 17 18:31:26.750575 kernel: thunder_bgx, ver 1.0
Mar 17 18:31:26.750581 kernel: nicpf, ver 1.0
Mar 17 18:31:26.750587 kernel: nicvf, ver 1.0
Mar 17 18:31:26.750653 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:31:26.750706 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:31:26 UTC (1742236286)
Mar 17 18:31:26.750715 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:31:26.750722 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:31:26.750729 kernel: Segment Routing with IPv6
Mar 17 18:31:26.750736 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:31:26.750742 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:31:26.750749 kernel: Key type dns_resolver registered
Mar 17 18:31:26.750756 kernel: registered taskstats version 1
Mar 17 18:31:26.750763 kernel: Loading compiled-in X.509 certificates
Mar 17 18:31:26.750769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:31:26.750776 kernel: Key type .fscrypt registered
Mar 17 18:31:26.750783 kernel: Key type fscrypt-provisioning registered
Mar 17 18:31:26.750789 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:31:26.750796 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:31:26.750802 kernel: ima: No architecture policies found
Mar 17 18:31:26.750809 kernel: clk: Disabling unused clocks
Mar 17 18:31:26.750816 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:31:26.750823 kernel: Run /init as init process
Mar 17 18:31:26.750829 kernel: with arguments:
Mar 17 18:31:26.750836 kernel: /init
Mar 17 18:31:26.750842 kernel: with environment:
Mar 17 18:31:26.750848 kernel: HOME=/
Mar 17 18:31:26.750855 kernel: TERM=linux
Mar 17 18:31:26.750861 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:31:26.750869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:31:26.750879 systemd[1]: Detected virtualization kvm.
Mar 17 18:31:26.750886 systemd[1]: Detected architecture arm64.
Mar 17 18:31:26.750893 systemd[1]: Running in initrd.
Mar 17 18:31:26.750900 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:31:26.750906 systemd[1]: Hostname set to .
Mar 17 18:31:26.750914 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:31:26.750921 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:31:26.750929 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:31:26.750936 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:31:26.750943 systemd[1]: Reached target paths.target.
Mar 17 18:31:26.750949 systemd[1]: Reached target slices.target.
Mar 17 18:31:26.750956 systemd[1]: Reached target swap.target.
Mar 17 18:31:26.750963 systemd[1]: Reached target timers.target.
Mar 17 18:31:26.750970 systemd[1]: Listening on iscsid.socket.
Mar 17 18:31:26.750979 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:31:26.750986 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:31:26.750993 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:31:26.751000 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:31:26.751007 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:31:26.751013 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:31:26.751021 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:31:26.751027 systemd[1]: Reached target sockets.target.
Mar 17 18:31:26.751034 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:31:26.751042 systemd[1]: Finished network-cleanup.service.
Mar 17 18:31:26.751049 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:31:26.751056 systemd[1]: Starting systemd-journald.service...
Mar 17 18:31:26.751063 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:31:26.751070 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:31:26.751077 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:31:26.751084 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:31:26.751091 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:31:26.751098 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:31:26.751115 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:31:26.751131 kernel: audit: type=1130 audit(1742236286.746:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.751138 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:31:26.751149 systemd-journald[291]: Journal started
Mar 17 18:31:26.751190 systemd-journald[291]: Runtime Journal (/run/log/journal/cf651fec52f84591bc6d86d776bb22ba) is 6.0M, max 48.7M, 42.6M free.
Mar 17 18:31:26.755750 systemd[1]: Started systemd-journald.service.
Mar 17 18:31:26.755786 kernel: audit: type=1130 audit(1742236286.752:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.744081 systemd-modules-load[292]: Inserted module 'overlay'
Mar 17 18:31:26.756982 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:31:26.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.761166 kernel: audit: type=1130 audit(1742236286.757:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.775699 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:31:26.781740 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:31:26.781759 kernel: audit: type=1130 audit(1742236286.776:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.781770 kernel: Bridge firewalling registered
Mar 17 18:31:26.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.777502 systemd-resolved[293]: Positive Trust Anchors:
Mar 17 18:31:26.777509 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:31:26.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.777535 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:31:26.793249 kernel: audit: type=1130 audit(1742236286.784:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.779669 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:31:26.781072 systemd-modules-load[292]: Inserted module 'br_netfilter'
Mar 17 18:31:26.795922 kernel: SCSI subsystem initialized
Mar 17 18:31:26.781605 systemd-resolved[293]: Defaulting to hostname 'linux'.
Mar 17 18:31:26.783195 systemd[1]: Started systemd-resolved.service.
Mar 17 18:31:26.797499 dracut-cmdline[309]: dracut-dracut-053
Mar 17 18:31:26.788473 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:31:26.799076 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:31:26.806763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:31:26.806787 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:31:26.806797 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:31:26.809592 systemd-modules-load[292]: Inserted module 'dm_multipath'
Mar 17 18:31:26.810376 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:31:26.812039 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:31:26.816427 kernel: audit: type=1130 audit(1742236286.811:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.821504 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:31:26.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.826147 kernel: audit: type=1130 audit(1742236286.822:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.864137 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:31:26.876156 kernel: iscsi: registered transport (tcp)
Mar 17 18:31:26.893134 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:31:26.893146 kernel: QLogic iSCSI HBA Driver
Mar 17 18:31:26.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.926078 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:31:26.930639 kernel: audit: type=1130 audit(1742236286.926:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:26.927745 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:31:26.974145 kernel: raid6: neonx8 gen() 13743 MB/s
Mar 17 18:31:26.991138 kernel: raid6: neonx8 xor() 10835 MB/s
Mar 17 18:31:27.008133 kernel: raid6: neonx4 gen() 13529 MB/s
Mar 17 18:31:27.025140 kernel: raid6: neonx4 xor() 10977 MB/s
Mar 17 18:31:27.042139 kernel: raid6: neonx2 gen() 12958 MB/s
Mar 17 18:31:27.059137 kernel: raid6: neonx2 xor() 10297 MB/s
Mar 17 18:31:27.076131 kernel: raid6: neonx1 gen() 10508 MB/s
Mar 17 18:31:27.093137 kernel: raid6: neonx1 xor() 8783 MB/s
Mar 17 18:31:27.110137 kernel: raid6: int64x8 gen() 6262 MB/s
Mar 17 18:31:27.127143 kernel: raid6: int64x8 xor() 3539 MB/s
Mar 17 18:31:27.144138 kernel: raid6: int64x4 gen() 7195 MB/s
Mar 17 18:31:27.161146 kernel: raid6: int64x4 xor() 3845 MB/s
Mar 17 18:31:27.178138 kernel: raid6: int64x2 gen() 6143 MB/s
Mar 17 18:31:27.195138 kernel: raid6: int64x2 xor() 3317 MB/s
Mar 17 18:31:27.212145 kernel: raid6: int64x1 gen() 5041 MB/s
Mar 17 18:31:27.229233 kernel: raid6: int64x1 xor() 2644 MB/s
Mar 17 18:31:27.229244 kernel: raid6: using algorithm neonx8 gen() 13743 MB/s
Mar 17 18:31:27.229252 kernel: raid6: .... xor() 10835 MB/s, rmw enabled
Mar 17 18:31:27.230311 kernel: raid6: using neon recovery algorithm
Mar 17 18:31:27.240140 kernel: xor: measuring software checksum speed
Mar 17 18:31:27.241448 kernel: 8regs : 14794 MB/sec
Mar 17 18:31:27.241469 kernel: 32regs : 20712 MB/sec
Mar 17 18:31:27.242674 kernel: arm64_neon : 27775 MB/sec
Mar 17 18:31:27.242687 kernel: xor: using function: arm64_neon (27775 MB/sec)
Mar 17 18:31:27.297142 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:31:27.306790 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:31:27.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:27.310000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:31:27.310000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:31:27.311149 kernel: audit: type=1130 audit(1742236287.307:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:27.311361 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:31:27.325236 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Mar 17 18:31:27.328653 systemd[1]: Started systemd-udevd.service.
Mar 17 18:31:27.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:27.334612 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:31:27.345232 dracut-pre-trigger[510]: rd.md=0: removing MD RAID activation
Mar 17 18:31:27.372218 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:31:27.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:27.373774 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:31:27.410992 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:31:27.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:27.439176 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:31:27.444392 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:31:27.444405 kernel: GPT:9289727 != 19775487
Mar 17 18:31:27.444413 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:31:27.444422 kernel: GPT:9289727 != 19775487
Mar 17 18:31:27.444430 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:31:27.444438 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:31:27.456153 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (552)
Mar 17 18:31:27.459871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:31:27.467932 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:31:27.474749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:31:27.478533 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:31:27.479995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:31:27.483440 systemd[1]: Starting disk-uuid.service...
Mar 17 18:31:27.489380 disk-uuid[566]: Primary Header is updated.
Mar 17 18:31:27.489380 disk-uuid[566]: Secondary Entries is updated.
Mar 17 18:31:27.489380 disk-uuid[566]: Secondary Header is updated.
Mar 17 18:31:27.495140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:31:28.512063 disk-uuid[567]: The operation has completed successfully.
Mar 17 18:31:28.513277 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:31:28.540306 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:31:28.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.540397 systemd[1]: Finished disk-uuid.service. Mar 17 18:31:28.541988 systemd[1]: Starting verity-setup.service... Mar 17 18:31:28.556150 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:31:28.582465 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:31:28.584031 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:31:28.584911 systemd[1]: Finished verity-setup.service. Mar 17 18:31:28.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.634138 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:31:28.634610 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:31:28.635487 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:31:28.636239 systemd[1]: Starting ignition-setup.service... Mar 17 18:31:28.638468 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:31:28.645719 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:31:28.645757 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:31:28.645772 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:31:28.653434 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Mar 17 18:31:28.660219 systemd[1]: Finished ignition-setup.service. Mar 17 18:31:28.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.661821 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:31:28.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.726000 audit: BPF prog-id=9 op=LOAD Mar 17 18:31:28.724737 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:31:28.727156 systemd[1]: Starting systemd-networkd.service... Mar 17 18:31:28.753322 systemd-networkd[743]: lo: Link UP Mar 17 18:31:28.754875 systemd-networkd[743]: lo: Gained carrier Mar 17 18:31:28.755909 ignition[651]: Ignition 2.14.0 Mar 17 18:31:28.755920 ignition[651]: Stage: fetch-offline Mar 17 18:31:28.755958 ignition[651]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:28.755967 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:28.756107 ignition[651]: parsed url from cmdline: "" Mar 17 18:31:28.756110 ignition[651]: no config URL provided Mar 17 18:31:28.756115 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:31:28.756133 ignition[651]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:31:28.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.762409 systemd-networkd[743]: Enumeration completed Mar 17 18:31:28.756153 ignition[651]: op(1): [started] loading QEMU firmware config module Mar 17 18:31:28.762533 systemd[1]: Started systemd-networkd.service. 
Mar 17 18:31:28.756159 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:31:28.762610 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:31:28.760223 ignition[651]: op(1): [finished] loading QEMU firmware config module Mar 17 18:31:28.763635 systemd-networkd[743]: eth0: Link UP Mar 17 18:31:28.763639 systemd-networkd[743]: eth0: Gained carrier Mar 17 18:31:28.763744 systemd[1]: Reached target network.target. Mar 17 18:31:28.766183 systemd[1]: Starting iscsiuio.service... Mar 17 18:31:28.775008 systemd[1]: Started iscsiuio.service. Mar 17 18:31:28.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.776824 systemd[1]: Starting iscsid.service... Mar 17 18:31:28.777176 ignition[651]: parsing config with SHA512: 6b8d2274b6d7d464eb4f80daee82b6271b42ff3fb69ce4d7c05faa0a950fae1765be5701ca2cda7f57c203ed64e563183fcf67cd13fd96d63f0a3b989a504147 Mar 17 18:31:28.780771 iscsid[750]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:31:28.780771 iscsid[750]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:31:28.780771 iscsid[750]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:31:28.780771 iscsid[750]: If using hardware iscsi like qla4xxx this message can be ignored. 
Mar 17 18:31:28.780771 iscsid[750]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:31:28.780771 iscsid[750]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:31:28.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.783412 systemd[1]: Started iscsid.service. Mar 17 18:31:28.785062 ignition[651]: fetch-offline: fetch-offline passed Mar 17 18:31:28.784703 unknown[651]: fetched base config from "system" Mar 17 18:31:28.785153 ignition[651]: Ignition finished successfully Mar 17 18:31:28.784710 unknown[651]: fetched user config from "qemu" Mar 17 18:31:28.787472 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:31:28.788974 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:31:28.791216 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:31:28.791970 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:31:28.792714 systemd[1]: Starting ignition-kargs.service... Mar 17 18:31:28.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.803246 ignition[752]: Ignition 2.14.0 Mar 17 18:31:28.803849 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:31:28.803252 ignition[752]: Stage: kargs Mar 17 18:31:28.804744 systemd[1]: Reached target remote-fs-pre.target. 
Mar 17 18:31:28.803345 ignition[752]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:28.806720 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:31:28.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.803353 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:28.807958 systemd[1]: Reached target remote-fs.target. Mar 17 18:31:28.804241 ignition[752]: kargs: kargs passed Mar 17 18:31:28.810039 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:31:28.804285 ignition[752]: Ignition finished successfully Mar 17 18:31:28.811389 systemd[1]: Finished ignition-kargs.service. Mar 17 18:31:28.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.813488 systemd[1]: Starting ignition-disks.service... Mar 17 18:31:28.817820 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:31:28.819956 ignition[766]: Ignition 2.14.0 Mar 17 18:31:28.819962 ignition[766]: Stage: disks Mar 17 18:31:28.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.821574 systemd[1]: Finished ignition-disks.service. Mar 17 18:31:28.820057 ignition[766]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:28.822430 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:31:28.820065 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:28.823670 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:31:28.820789 ignition[766]: disks: disks passed Mar 17 18:31:28.824983 systemd[1]: Reached target local-fs.target. 
Mar 17 18:31:28.820829 ignition[766]: Ignition finished successfully Mar 17 18:31:28.826384 systemd[1]: Reached target sysinit.target. Mar 17 18:31:28.827762 systemd[1]: Reached target basic.target. Mar 17 18:31:28.830185 systemd[1]: Starting systemd-fsck-root.service... Mar 17 18:31:28.839883 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.128 Mar 17 18:31:28.839896 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. Mar 17 18:31:28.842344 systemd-fsck[778]: ROOT: clean, 623/553520 files, 56021/553472 blocks Mar 17 18:31:28.954621 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:31:28.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:28.956598 systemd[1]: Mounting sysroot.mount... Mar 17 18:31:28.969837 systemd[1]: Mounted sysroot.mount. Mar 17 18:31:28.971059 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:31:28.970627 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:31:28.972836 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:31:28.973706 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:31:28.973743 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:31:28.973764 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:31:28.975595 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:31:28.977529 systemd[1]: Starting initrd-setup-root.service... 
Mar 17 18:31:28.981708 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:31:28.985334 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:31:28.988215 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:31:28.992093 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:31:29.024460 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:31:29.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:29.026028 systemd[1]: Starting ignition-mount.service... Mar 17 18:31:29.027413 systemd[1]: Starting sysroot-boot.service... Mar 17 18:31:29.031608 bash[829]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:31:29.040511 ignition[831]: INFO : Ignition 2.14.0 Mar 17 18:31:29.040511 ignition[831]: INFO : Stage: mount Mar 17 18:31:29.042165 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:29.042165 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:29.042165 ignition[831]: INFO : mount: mount passed Mar 17 18:31:29.042165 ignition[831]: INFO : Ignition finished successfully Mar 17 18:31:29.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:29.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:29.041952 systemd[1]: Finished ignition-mount.service. Mar 17 18:31:29.045091 systemd[1]: Finished sysroot-boot.service. Mar 17 18:31:29.592573 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Mar 17 18:31:29.598133 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839) Mar 17 18:31:29.600718 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:31:29.600733 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:31:29.600742 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:31:29.603640 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:31:29.605967 systemd[1]: Starting ignition-files.service... Mar 17 18:31:29.620247 ignition[859]: INFO : Ignition 2.14.0 Mar 17 18:31:29.620247 ignition[859]: INFO : Stage: files Mar 17 18:31:29.621838 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:29.621838 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:29.621838 ignition[859]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:31:29.625315 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:31:29.625315 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:31:29.628002 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:31:29.628002 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:31:29.630715 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:31:29.630715 ignition[859]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:31:29.630715 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 Mar 17 18:31:29.628283 unknown[859]: wrote ssh authorized keys file for user: core Mar 17 18:31:29.828267 systemd-networkd[743]: eth0: Gained IPv6LL Mar 17 18:31:30.095468 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Mar 17 18:31:30.466918 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" Mar 17 18:31:30.466918 ignition[859]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" Mar 17 18:31:30.470913 ignition[859]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:31:30.470913 ignition[859]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:31:30.470913 ignition[859]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" Mar 17 18:31:30.470913 ignition[859]: 
INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:31:30.470913 ignition[859]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:31:30.500948 ignition[859]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:31:30.503337 ignition[859]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:31:30.503337 ignition[859]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:31:30.503337 ignition[859]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:31:30.503337 ignition[859]: INFO : files: files passed Mar 17 18:31:30.503337 ignition[859]: INFO : Ignition finished successfully Mar 17 18:31:30.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.503500 systemd[1]: Finished ignition-files.service. Mar 17 18:31:30.506265 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:31:30.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:30.513871 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:31:30.507681 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:31:30.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.517766 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:31:30.508336 systemd[1]: Starting ignition-quench.service... Mar 17 18:31:30.511817 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:31:30.512111 systemd[1]: Finished ignition-quench.service. Mar 17 18:31:30.514868 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:31:30.516194 systemd[1]: Reached target ignition-complete.target. Mar 17 18:31:30.519038 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:31:30.530652 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:31:30.530742 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:31:30.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.532413 systemd[1]: Reached target initrd-fs.target. Mar 17 18:31:30.533620 systemd[1]: Reached target initrd.target. Mar 17 18:31:30.534882 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Mar 17 18:31:30.535592 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:31:30.545352 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:31:30.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.546832 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:31:30.554220 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:31:30.555047 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:31:30.556472 systemd[1]: Stopped target timers.target. Mar 17 18:31:30.557822 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:31:30.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.557925 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:31:30.559224 systemd[1]: Stopped target initrd.target. Mar 17 18:31:30.560554 systemd[1]: Stopped target basic.target. Mar 17 18:31:30.561826 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:31:30.563205 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:31:30.564505 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:31:30.565947 systemd[1]: Stopped target remote-fs.target. Mar 17 18:31:30.567339 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:31:30.568763 systemd[1]: Stopped target sysinit.target. Mar 17 18:31:30.570000 systemd[1]: Stopped target local-fs.target. Mar 17 18:31:30.571302 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:31:30.572584 systemd[1]: Stopped target swap.target. Mar 17 18:31:30.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:30.573771 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:31:30.573878 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:31:30.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.575224 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:31:30.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.576360 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:31:30.576459 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:31:30.577909 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:31:30.578003 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:31:30.579298 systemd[1]: Stopped target paths.target. Mar 17 18:31:30.580451 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:31:30.585144 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:31:30.586260 systemd[1]: Stopped target slices.target. Mar 17 18:31:30.587628 systemd[1]: Stopped target sockets.target. Mar 17 18:31:30.588869 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:31:30.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.588981 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:31:30.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:30.590344 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:31:30.590435 systemd[1]: Stopped ignition-files.service. Mar 17 18:31:30.595435 iscsid[750]: iscsid shutting down. Mar 17 18:31:30.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.592735 systemd[1]: Stopping ignition-mount.service... Mar 17 18:31:30.594251 systemd[1]: Stopping iscsid.service... Mar 17 18:31:30.594829 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:31:30.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.600294 ignition[899]: INFO : Ignition 2.14.0 Mar 17 18:31:30.600294 ignition[899]: INFO : Stage: umount Mar 17 18:31:30.600294 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:31:30.600294 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:31:30.600294 ignition[899]: INFO : umount: umount passed Mar 17 18:31:30.600294 ignition[899]: INFO : Ignition finished successfully Mar 17 18:31:30.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:30.594946 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:31:30.596875 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:31:30.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.598021 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:31:30.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.598177 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:31:30.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.599525 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:31:30.599612 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:31:30.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.602347 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:31:30.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.602432 systemd[1]: Stopped iscsid.service. Mar 17 18:31:30.604224 systemd[1]: ignition-mount.service: Deactivated successfully. 
Mar 17 18:31:30.604294 systemd[1]: Stopped ignition-mount.service. Mar 17 18:31:30.606612 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:31:30.607108 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:31:30.607192 systemd[1]: Closed iscsid.socket. Mar 17 18:31:30.608055 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:31:30.608110 systemd[1]: Stopped ignition-disks.service. Mar 17 18:31:30.609440 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:31:30.609482 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:31:30.610837 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:31:30.610879 systemd[1]: Stopped ignition-setup.service. Mar 17 18:31:30.612432 systemd[1]: Stopping iscsiuio.service... Mar 17 18:31:30.614689 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:31:30.614777 systemd[1]: Stopped iscsiuio.service. Mar 17 18:31:30.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.616214 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:31:30.616289 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:31:30.618172 systemd[1]: Stopped target network.target. Mar 17 18:31:30.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.618916 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:31:30.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.618953 systemd[1]: Closed iscsiuio.socket. 
Mar 17 18:31:30.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.620173 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:31:30.621720 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:31:30.629226 systemd-networkd[743]: eth0: DHCPv6 lease lost Mar 17 18:31:30.643000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:31:30.630935 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:31:30.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.631025 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:31:30.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.632673 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:31:30.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.632701 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:31:30.634339 systemd[1]: Stopping network-cleanup.service... Mar 17 18:31:30.650000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:31:30.635049 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:31:30.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.635115 systemd[1]: Stopped parse-ip-for-networkd.service. 
Mar 17 18:31:30.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.636589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:31:30.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.636630 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:31:30.656000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.638663 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:31:30.638698 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:31:30.641898 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:31:30.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.643541 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:31:30.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.643976 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:31:30.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:30.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.644076 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:31:30.645401 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:31:30.645506 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:31:30.646803 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:31:30.646882 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:31:30.648567 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:31:30.648606 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:31:30.649464 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:31:30.649493 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:31:30.650855 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:31:30.650897 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:31:30.652229 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:31:30.652268 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:31:30.653757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:31:30.653795 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:31:30.655287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:31:30.655325 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:31:30.657386 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:31:30.658962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:31:30.659016 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:31:30.660780 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:31:30.660869 systemd[1]: Stopped network-cleanup.service. 
Mar 17 18:31:30.686297 systemd-journald[291]: Received SIGTERM from PID 1 (n/a). Mar 17 18:31:30.662327 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:31:30.662400 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:31:30.664045 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:31:30.666445 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:31:30.672780 systemd[1]: Switching root. Mar 17 18:31:30.690313 systemd-journald[291]: Journal stopped Mar 17 18:31:32.662142 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:31:32.662190 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:31:32.662206 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:31:32.662221 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:31:32.662230 kernel: SELinux: policy capability open_perms=1 Mar 17 18:31:32.662241 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:31:32.662250 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:31:32.662260 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:31:32.662273 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:31:32.662282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:31:32.662292 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:31:32.662303 systemd[1]: Successfully loaded SELinux policy in 34.561ms. Mar 17 18:31:32.662323 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.904ms. 
Mar 17 18:31:32.662334 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:31:32.662344 systemd[1]: Detected virtualization kvm. Mar 17 18:31:32.662354 systemd[1]: Detected architecture arm64. Mar 17 18:31:32.662364 systemd[1]: Detected first boot. Mar 17 18:31:32.662374 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:31:32.662384 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:31:32.662394 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:31:32.662406 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:31:32.662417 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:31:32.662429 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:31:32.662440 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:31:32.662450 systemd[1]: Stopped initrd-switch-root.service. 
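The systemd banner above packs its compile-time feature flags into a single `+`/`-` string (`+` compiled in, `-` compiled out). A minimal sketch in plain Python, with the flag string copied verbatim from the log, that splits it into enabled and disabled features (the trailing `default-hierarchy=unified` token carries no sign and is ignored):

```python
# Feature string as reported by systemd 252 in the log line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT "
            "default-hierarchy=unified")

# "+FOO" -> compiled in, "-FOO" -> compiled out; anything else is ignored.
enabled = [f[1:] for f in features.split() if f.startswith("+")]
disabled = [f[1:] for f in features.split() if f.startswith("-")]

print(f"{len(enabled)} enabled, {len(disabled)} disabled")
```

This confirms at a glance, for example, that this build has SELinux support but no AppArmor or TPM2 support, matching the SELinux policy-load messages that follow.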
Mar 17 18:31:32.662460 kernel: kauditd_printk_skb: 80 callbacks suppressed Mar 17 18:31:32.662470 kernel: audit: type=1334 audit(1742236292.515:84): prog-id=12 op=LOAD Mar 17 18:31:32.662479 kernel: audit: type=1334 audit(1742236292.515:85): prog-id=3 op=UNLOAD Mar 17 18:31:32.662489 kernel: audit: type=1334 audit(1742236292.515:86): prog-id=13 op=LOAD Mar 17 18:31:32.662499 kernel: audit: type=1334 audit(1742236292.515:87): prog-id=14 op=LOAD Mar 17 18:31:32.662508 kernel: audit: type=1334 audit(1742236292.515:88): prog-id=4 op=UNLOAD Mar 17 18:31:32.662518 kernel: audit: type=1334 audit(1742236292.515:89): prog-id=5 op=UNLOAD Mar 17 18:31:32.662530 kernel: audit: type=1131 audit(1742236292.516:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.662540 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:31:32.662550 kernel: audit: type=1130 audit(1742236292.524:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.662560 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:31:32.662572 kernel: audit: type=1131 audit(1742236292.524:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.662581 kernel: audit: type=1334 audit(1742236292.534:93): prog-id=12 op=UNLOAD Mar 17 18:31:32.662594 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:31:32.662604 systemd[1]: Created slice system-getty.slice. Mar 17 18:31:32.662614 systemd[1]: Created slice system-modprobe.slice. 
Mar 17 18:31:32.662624 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:31:32.662634 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:31:32.662644 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:31:32.662655 systemd[1]: Created slice user.slice. Mar 17 18:31:32.662668 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:31:32.662678 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:31:32.662687 systemd[1]: Set up automount boot.automount. Mar 17 18:31:32.662697 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:31:32.662709 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:31:32.662720 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:31:32.662730 systemd[1]: Stopped target initrd-root-fs.target. Mar 17 18:31:32.662739 systemd[1]: Reached target integritysetup.target. Mar 17 18:31:32.662749 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:31:32.662759 systemd[1]: Reached target remote-fs.target. Mar 17 18:31:32.662769 systemd[1]: Reached target slices.target. Mar 17 18:31:32.662779 systemd[1]: Reached target swap.target. Mar 17 18:31:32.662789 systemd[1]: Reached target torcx.target. Mar 17 18:31:32.662799 systemd[1]: Reached target veritysetup.target. Mar 17 18:31:32.662810 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:31:32.662820 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:31:32.662830 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:31:32.662840 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:31:32.662851 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:31:32.662861 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:31:32.662871 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:31:32.662881 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:31:32.662891 systemd[1]: Mounting media.mount... 
Mar 17 18:31:32.662902 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:31:32.662912 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:31:32.662921 systemd[1]: Mounting tmp.mount... Mar 17 18:31:32.662931 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:31:32.662942 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:31:32.662952 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:31:32.662962 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:31:32.662972 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:31:32.662981 systemd[1]: Starting modprobe@drm.service... Mar 17 18:31:32.662993 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:31:32.663002 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:31:32.663012 systemd[1]: Starting modprobe@loop.service... Mar 17 18:31:32.663023 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:31:32.663033 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:31:32.663043 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:31:32.663060 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:31:32.663070 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:31:32.663080 systemd[1]: Stopped systemd-journald.service. Mar 17 18:31:32.663091 kernel: loop: module loaded Mar 17 18:31:32.663101 systemd[1]: Starting systemd-journald.service... Mar 17 18:31:32.663112 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:31:32.663129 kernel: fuse: init (API version 7.34) Mar 17 18:31:32.663139 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:31:32.663149 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:31:32.663159 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:31:32.663170 systemd[1]: verity-setup.service: Deactivated successfully. 
Mar 17 18:31:32.663180 systemd[1]: Stopped verity-setup.service. Mar 17 18:31:32.663192 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:31:32.663202 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:31:32.663212 systemd[1]: Mounted media.mount. Mar 17 18:31:32.663222 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:31:32.663233 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:31:32.663243 systemd[1]: Mounted tmp.mount. Mar 17 18:31:32.663256 systemd-journald[998]: Journal started Mar 17 18:31:32.663295 systemd-journald[998]: Runtime Journal (/run/log/journal/cf651fec52f84591bc6d86d776bb22ba) is 6.0M, max 48.7M, 42.6M free. Mar 17 18:31:32.663326 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:31:30.750000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:31:30.820000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:31:30.820000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:31:30.820000 audit: BPF prog-id=10 op=LOAD Mar 17 18:31:30.820000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:31:30.820000 audit: BPF prog-id=11 op=LOAD Mar 17 18:31:30.820000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:31:30.856000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:31:30.856000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:30.856000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:31:30.857000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:31:30.857000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:30.857000 audit: CWD cwd="/" Mar 17 18:31:30.857000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:31:30.857000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:31:30.857000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:31:32.515000 audit: BPF prog-id=12 op=LOAD Mar 17 18:31:32.515000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:31:32.515000 audit: BPF prog-id=13 op=LOAD Mar 17 
18:31:32.515000 audit: BPF prog-id=14 op=LOAD Mar 17 18:31:32.515000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:31:32.515000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:31:32.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.534000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:31:32.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:32.635000 audit: BPF prog-id=15 op=LOAD Mar 17 18:31:32.635000 audit: BPF prog-id=16 op=LOAD Mar 17 18:31:32.635000 audit: BPF prog-id=17 op=LOAD Mar 17 18:31:32.635000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:31:32.635000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:31:32.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.660000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:31:32.660000 audit[998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffeddaec20 a2=4000 a3=1 items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:32.660000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:31:32.513399 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:31:30.855032 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:31:32.513410 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:31:30.855288 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:31:32.515886 systemd[1]: systemd-journald.service: Deactivated successfully. 
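The `PROCTITLE` audit records above carry the torcx-generator's command line as hex-encoded bytes with NUL separators between argv elements; the kernel truncates long proctitles, which is why the last argument ends in a `.la` fragment. A minimal decode, using the payload copied verbatim from the audit record (assembled here from its argv pieces for readability):

```python
# PROCTITLE payload from the audit record above, argv elements joined by "00" (NUL).
proctitle_hex = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
    "2F746F7263782D67656E657261746F72"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F72"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F722E6561726C79"
    "00"
    "2F72756E2F73797374656D642F67656E657261746F722E6C61"
)

# Split on NUL to recover argv; the final element stays truncated because the
# kernel cut the proctitle off, so it is left as-is rather than guessed at.
argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
for arg in argv:
    print(arg)
```

The decoded argv shows the generator being invoked with the standard early/normal/late generator output directories, matching the torcx-generator debug messages further down.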
Mar 17 18:31:30.855306 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:31:30.855333 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:31:30.855342 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:31:32.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:30.855368 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:31:30.855379 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:31:30.855561 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:31:30.855592 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:31:30.855603 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:31:30.855993 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker 
path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:31:30.856028 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:31:30.856045 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:31:30.856058 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:31:30.856084 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:31:30.856098 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:30Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:31:32.281476 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:31:32.281741 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:31:32.281835 
/usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:31:32.282001 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:31:32.282050 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:31:32.282114 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:31:32Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:31:32.666248 systemd[1]: Started systemd-journald.service. Mar 17 18:31:32.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.666940 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:31:32.667093 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:31:32.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:32.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.668294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:31:32.668502 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:31:32.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.669582 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:31:32.669720 systemd[1]: Finished modprobe@drm.service. Mar 17 18:31:32.670781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:31:32.670924 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:31:32.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:32.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.672083 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:31:32.672275 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:31:32.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.673407 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:31:32.673578 systemd[1]: Finished modprobe@loop.service. Mar 17 18:31:32.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.674677 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:31:32.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.676045 systemd[1]: Finished systemd-network-generator.service. 
Mar 17 18:31:32.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.677367 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:31:32.678862 systemd[1]: Reached target network-pre.target. Mar 17 18:31:32.681216 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:31:32.683305 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:31:32.684020 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:31:32.685510 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:31:32.687510 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:31:32.688418 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:31:32.689490 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:31:32.690405 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:31:32.691491 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:31:32.694757 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:31:32.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.695951 systemd-journald[998]: Time spent on flushing to /var/log/journal/cf651fec52f84591bc6d86d776bb22ba is 19.741ms for 972 entries. 
Mar 17 18:31:32.695951 systemd-journald[998]: System Journal (/var/log/journal/cf651fec52f84591bc6d86d776bb22ba) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:31:32.726491 systemd-journald[998]: Received client request to flush runtime journal. Mar 17 18:31:32.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:32.695964 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:31:32.698301 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:31:32.727091 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:31:32.700674 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:31:32.703032 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:31:32.704025 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:31:32.705201 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:31:32.707106 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:31:32.714066 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:31:32.723666 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:31:32.728797 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:31:32.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.067022 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:31:33.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.068000 audit: BPF prog-id=18 op=LOAD Mar 17 18:31:33.068000 audit: BPF prog-id=19 op=LOAD Mar 17 18:31:33.068000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:31:33.068000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:31:33.069347 systemd[1]: Starting systemd-udevd.service... Mar 17 18:31:33.085113 systemd-udevd[1036]: Using default interface naming scheme 'v252'. Mar 17 18:31:33.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.103000 audit: BPF prog-id=20 op=LOAD Mar 17 18:31:33.101971 systemd[1]: Started systemd-udevd.service. Mar 17 18:31:33.104822 systemd[1]: Starting systemd-networkd.service... Mar 17 18:31:33.108000 audit: BPF prog-id=21 op=LOAD Mar 17 18:31:33.108000 audit: BPF prog-id=22 op=LOAD Mar 17 18:31:33.108000 audit: BPF prog-id=23 op=LOAD Mar 17 18:31:33.109609 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:31:33.124217 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:31:33.141770 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:31:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.174266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:31:33.197515 systemd-networkd[1045]: lo: Link UP Mar 17 18:31:33.197527 systemd-networkd[1045]: lo: Gained carrier Mar 17 18:31:33.197856 systemd-networkd[1045]: Enumeration completed Mar 17 18:31:33.197962 systemd[1]: Started systemd-networkd.service. Mar 17 18:31:33.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.199425 systemd-networkd[1045]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:31:33.202287 systemd-networkd[1045]: eth0: Link UP Mar 17 18:31:33.202297 systemd-networkd[1045]: eth0: Gained carrier Mar 17 18:31:33.210563 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:31:33.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.212771 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:31:33.229296 systemd-networkd[1045]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:31:33.231586 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:31:33.254005 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:31:33.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:33.255156 systemd[1]: Reached target cryptsetup.target. Mar 17 18:31:33.257234 systemd[1]: Starting lvm2-activation.service... Mar 17 18:31:33.261310 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:31:33.288028 systemd[1]: Finished lvm2-activation.service. Mar 17 18:31:33.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.289073 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:31:33.289980 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:31:33.290017 systemd[1]: Reached target local-fs.target. Mar 17 18:31:33.290874 systemd[1]: Reached target machines.target. Mar 17 18:31:33.293009 systemd[1]: Starting ldconfig.service... Mar 17 18:31:33.294088 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.294163 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:31:33.295364 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:31:33.297642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:31:33.299902 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:31:33.303083 systemd[1]: Starting systemd-sysext.service... Mar 17 18:31:33.304347 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl) Mar 17 18:31:33.307894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:31:33.312346 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Mar 17 18:31:33.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.317442 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:31:33.334788 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:31:33.334987 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:31:33.378149 kernel: loop0: detected capacity change from 0 to 201592 Mar 17 18:31:33.378821 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:31:33.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.394191 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:31:33.394316 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) Mar 17 18:31:33.394316 systemd-fsck[1080]: /dev/vda1: 236 files, 117179/258078 clusters Mar 17 18:31:33.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.396501 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:31:33.417145 kernel: loop1: detected capacity change from 0 to 201592 Mar 17 18:31:33.422375 (sd-sysext)[1084]: Using extensions 'kubernetes'. Mar 17 18:31:33.423062 (sd-sysext)[1084]: Merged extensions into '/usr'. Mar 17 18:31:33.440999 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.442509 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:31:33.444735 systemd[1]: Starting modprobe@efi_pstore.service... 
Mar 17 18:31:33.447094 systemd[1]: Starting modprobe@loop.service... Mar 17 18:31:33.448088 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.448268 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:31:33.449157 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:31:33.449301 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:31:33.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.450555 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:31:33.450710 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:31:33.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.451985 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:31:33.452597 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:31:33.452720 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:31:33.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.454198 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.498434 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:31:33.503203 systemd[1]: Finished ldconfig.service. Mar 17 18:31:33.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.653209 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:31:33.654908 systemd[1]: Mounting boot.mount... Mar 17 18:31:33.656748 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:31:33.662882 systemd[1]: Mounted boot.mount. Mar 17 18:31:33.663848 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:31:33.666260 systemd[1]: Finished systemd-sysext.service. Mar 17 18:31:33.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.668625 systemd[1]: Starting ensure-sysext.service... Mar 17 18:31:33.670842 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:31:33.673893 systemd[1]: Finished systemd-boot-update.service. 
Mar 17 18:31:33.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.676350 systemd[1]: Reloading. Mar 17 18:31:33.684738 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:31:33.686518 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:31:33.692762 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 17 18:31:33.715110 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-03-17T18:31:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:31:33.715476 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-03-17T18:31:33Z" level=info msg="torcx already run" Mar 17 18:31:33.769028 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:31:33.769058 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:31:33.784400 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:31:33.825000 audit: BPF prog-id=24 op=LOAD Mar 17 18:31:33.825000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:31:33.825000 audit: BPF prog-id=25 op=LOAD Mar 17 18:31:33.825000 audit: BPF prog-id=26 op=LOAD Mar 17 18:31:33.826000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:31:33.826000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:31:33.828000 audit: BPF prog-id=27 op=LOAD Mar 17 18:31:33.828000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:31:33.828000 audit: BPF prog-id=28 op=LOAD Mar 17 18:31:33.828000 audit: BPF prog-id=29 op=LOAD Mar 17 18:31:33.828000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:31:33.828000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:31:33.829000 audit: BPF prog-id=30 op=LOAD Mar 17 18:31:33.829000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:31:33.829000 audit: BPF prog-id=31 op=LOAD Mar 17 18:31:33.829000 audit: BPF prog-id=32 op=LOAD Mar 17 18:31:33.829000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:31:33.829000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:31:33.831288 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:31:33.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.836064 systemd[1]: Starting audit-rules.service... Mar 17 18:31:33.838411 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:31:33.841064 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:31:33.844000 audit: BPF prog-id=33 op=LOAD Mar 17 18:31:33.845099 systemd[1]: Starting systemd-resolved.service... Mar 17 18:31:33.847000 audit: BPF prog-id=34 op=LOAD Mar 17 18:31:33.848652 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:31:33.851037 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:31:33.854192 systemd[1]: Finished clean-ca-certificates.service. 
Mar 17 18:31:33.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.855000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.859018 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.860608 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:31:33.863825 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:31:33.866936 systemd[1]: Starting modprobe@loop.service... Mar 17 18:31:33.867834 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.868016 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:31:33.868201 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:31:33.869487 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:31:33.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.871063 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:31:33.871233 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:31:33.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.872600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:31:33.872707 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:31:33.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.874017 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:31:33.874301 systemd[1]: Finished modprobe@loop.service. Mar 17 18:31:33.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.875593 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 17 18:31:33.875721 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.877097 systemd[1]: Starting systemd-update-done.service... Mar 17 18:31:33.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.878607 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:31:33.882562 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.883778 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:31:33.885690 systemd[1]: Starting modprobe@drm.service... Mar 17 18:31:33.887532 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:31:33.889659 systemd[1]: Starting modprobe@loop.service... Mar 17 18:31:33.890676 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.890817 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:31:33.892908 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:31:33.894029 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:31:33.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.895215 systemd[1]: Finished systemd-update-done.service. Mar 17 18:31:33.896480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:31:33.896637 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:31:33.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.897919 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:31:33.898036 systemd[1]: Finished modprobe@drm.service. Mar 17 18:31:33.899301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:31:33.899409 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:31:33.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.900863 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:31:33.901027 systemd[1]: Finished modprobe@loop.service. 
Mar 17 18:31:33.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.902607 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:31:33.902700 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:31:33.903749 systemd[1]: Finished ensure-sysext.service. Mar 17 18:31:33.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:33.905502 systemd-resolved[1155]: Positive Trust Anchors: Mar 17 18:31:33.907280 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:31:33.907314 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:31:33.914000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:31:33.914000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff49f27a0 a2=420 a3=0 items=0 ppid=1151 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:33.914000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:31:33.915284 augenrules[1181]: No rules Mar 17 18:31:33.916014 systemd[1]: Finished audit-rules.service. Mar 17 18:31:33.917882 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:31:33.919071 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:31:33.919149 systemd-timesyncd[1161]: Initial clock synchronization to Mon 2025-03-17 18:31:34.230554 UTC. Mar 17 18:31:33.919168 systemd[1]: Reached target time-set.target. Mar 17 18:31:33.921749 systemd-resolved[1155]: Defaulting to hostname 'linux'. Mar 17 18:31:33.923151 systemd[1]: Started systemd-resolved.service. Mar 17 18:31:33.923974 systemd[1]: Reached target network.target. Mar 17 18:31:33.924810 systemd[1]: Reached target nss-lookup.target. 
Mar 17 18:31:33.925628 systemd[1]: Reached target sysinit.target.
Mar 17 18:31:33.926473 systemd[1]: Started motdgen.path.
Mar 17 18:31:33.927197 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:31:33.928459 systemd[1]: Started logrotate.timer.
Mar 17 18:31:33.929294 systemd[1]: Started mdadm.timer.
Mar 17 18:31:33.929950 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:31:33.930804 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:31:33.930833 systemd[1]: Reached target paths.target.
Mar 17 18:31:33.931573 systemd[1]: Reached target timers.target.
Mar 17 18:31:33.932647 systemd[1]: Listening on dbus.socket.
Mar 17 18:31:33.934392 systemd[1]: Starting docker.socket...
Mar 17 18:31:33.937538 systemd[1]: Listening on sshd.socket.
Mar 17 18:31:33.938394 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:31:33.938839 systemd[1]: Listening on docker.socket.
Mar 17 18:31:33.939701 systemd[1]: Reached target sockets.target.
Mar 17 18:31:33.940517 systemd[1]: Reached target basic.target.
Mar 17 18:31:33.941302 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:31:33.941329 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:31:33.942299 systemd[1]: Starting containerd.service...
Mar 17 18:31:33.943955 systemd[1]: Starting dbus.service...
Mar 17 18:31:33.945811 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:31:33.947931 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:31:33.948877 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:31:33.950293 systemd[1]: Starting motdgen.service...
Mar 17 18:31:33.952300 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:31:33.954323 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:31:33.956059 jq[1191]: false
Mar 17 18:31:33.958244 systemd[1]: Starting systemd-logind.service...
Mar 17 18:31:33.958983 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:31:33.959097 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:31:33.959619 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 18:31:33.960381 systemd[1]: Starting update-engine.service...
Mar 17 18:31:33.963141 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:31:33.966011 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:31:33.966295 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:31:33.966667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:31:33.966872 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:31:33.969337 jq[1204]: true
Mar 17 18:31:33.978881 dbus-daemon[1190]: [system] SELinux support is enabled
Mar 17 18:31:33.979075 systemd[1]: Started dbus.service.
Mar 17 18:31:33.980220 extend-filesystems[1192]: Found loop1
Mar 17 18:31:33.981196 extend-filesystems[1192]: Found vda
Mar 17 18:31:33.981715 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:31:33.981742 systemd[1]: Reached target system-config.target.
Mar 17 18:31:33.982680 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:31:33.982775 extend-filesystems[1192]: Found vda1
Mar 17 18:31:33.982705 systemd[1]: Reached target user-config.target.
Mar 17 18:31:33.984205 jq[1212]: true
Mar 17 18:31:33.984832 extend-filesystems[1192]: Found vda2
Mar 17 18:31:33.985609 extend-filesystems[1192]: Found vda3
Mar 17 18:31:33.986393 extend-filesystems[1192]: Found usr
Mar 17 18:31:33.987295 extend-filesystems[1192]: Found vda4
Mar 17 18:31:33.988052 extend-filesystems[1192]: Found vda6
Mar 17 18:31:33.989048 extend-filesystems[1192]: Found vda7
Mar 17 18:31:33.989048 extend-filesystems[1192]: Found vda9
Mar 17 18:31:33.989048 extend-filesystems[1192]: Checking size of /dev/vda9
Mar 17 18:31:33.989607 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:31:33.989880 systemd[1]: Finished motdgen.service.
Mar 17 18:31:34.017095 extend-filesystems[1192]: Resized partition /dev/vda9
Mar 17 18:31:34.034686 systemd-logind[1199]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 18:31:34.035112 bash[1233]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:31:34.035373 systemd-logind[1199]: New seat seat0.
Mar 17 18:31:34.035947 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:31:34.038281 systemd[1]: Started systemd-logind.service.
Mar 17 18:31:34.041111 extend-filesystems[1236]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:31:34.050193 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 18:31:34.073904 update_engine[1203]: I0317 18:31:34.073619 1203 main.cc:92] Flatcar Update Engine starting
Mar 17 18:31:34.076934 update_engine[1203]: I0317 18:31:34.076898 1203 update_check_scheduler.cc:74] Next update check in 11m6s
Mar 17 18:31:34.076922 systemd[1]: Started update-engine.service.
Mar 17 18:31:34.080008 systemd[1]: Started locksmithd.service.
Mar 17 18:31:34.084254 env[1208]: time="2025-03-17T18:31:34.084175990Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:31:34.089521 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 18:31:34.104202 extend-filesystems[1236]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:31:34.104202 extend-filesystems[1236]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:31:34.104202 extend-filesystems[1236]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 18:31:34.104151 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.105318195Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.105488291Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107043304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107082161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107308361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107326397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107339529Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107350583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107419279Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.108872 env[1208]: time="2025-03-17T18:31:34.107696927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:31:34.109073 extend-filesystems[1192]: Resized filesystem in /dev/vda9
Mar 17 18:31:34.104349 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:31:34.110082 env[1208]: time="2025-03-17T18:31:34.107809881Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:31:34.110082 env[1208]: time="2025-03-17T18:31:34.107825133Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:31:34.110082 env[1208]: time="2025-03-17T18:31:34.107873465Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:31:34.110082 env[1208]: time="2025-03-17T18:31:34.107885974Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:31:34.114470 env[1208]: time="2025-03-17T18:31:34.114419997Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:31:34.114470 env[1208]: time="2025-03-17T18:31:34.114463342Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:31:34.114561 env[1208]: time="2025-03-17T18:31:34.114477887Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:31:34.114561 env[1208]: time="2025-03-17T18:31:34.114514001Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.114561 env[1208]: time="2025-03-17T18:31:34.114530291Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.114561 env[1208]: time="2025-03-17T18:31:34.114545585Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.114561 env[1208]: time="2025-03-17T18:31:34.114559133Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115032 env[1208]: time="2025-03-17T18:31:34.114989423Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115032 env[1208]: time="2025-03-17T18:31:34.115024539Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115086 env[1208]: time="2025-03-17T18:31:34.115063354Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115108 env[1208]: time="2025-03-17T18:31:34.115086045Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115108 env[1208]: time="2025-03-17T18:31:34.115101255Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:31:34.115323 env[1208]: time="2025-03-17T18:31:34.115290842Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:31:34.115439 env[1208]: time="2025-03-17T18:31:34.115412025Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:31:34.115674 env[1208]: time="2025-03-17T18:31:34.115651316Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 18:31:34.115712 env[1208]: time="2025-03-17T18:31:34.115684853Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115712 env[1208]: time="2025-03-17T18:31:34.115700769Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 18:31:34.115823 env[1208]: time="2025-03-17T18:31:34.115808820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115869 env[1208]: time="2025-03-17T18:31:34.115826191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115869 env[1208]: time="2025-03-17T18:31:34.115840279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115869 env[1208]: time="2025-03-17T18:31:34.115852580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115869 env[1208]: time="2025-03-17T18:31:34.115864798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115954 env[1208]: time="2025-03-17T18:31:34.115876892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115954 env[1208]: time="2025-03-17T18:31:34.115888445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115954 env[1208]: time="2025-03-17T18:31:34.115899873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.115954 env[1208]: time="2025-03-17T18:31:34.115913754Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 18:31:34.116059 env[1208]: time="2025-03-17T18:31:34.116041877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.116090 env[1208]: time="2025-03-17T18:31:34.116064692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.116090 env[1208]: time="2025-03-17T18:31:34.116078032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.116146 env[1208]: time="2025-03-17T18:31:34.116090790Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 18:31:34.116190 env[1208]: time="2025-03-17T18:31:34.116107081Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Mar 17 18:31:34.116222 env[1208]: time="2025-03-17T18:31:34.116188327Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 18:31:34.116222 env[1208]: time="2025-03-17T18:31:34.116210893Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Mar 17 18:31:34.116275 env[1208]: time="2025-03-17T18:31:34.116246799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 18:31:34.116577 env[1208]: time="2025-03-17T18:31:34.116493403Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 17 18:31:34.116577 env[1208]: time="2025-03-17T18:31:34.116566005Z" level=info msg="Connect containerd service"
Mar 17 18:31:34.117509 env[1208]: time="2025-03-17T18:31:34.116626388Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 17 18:31:34.117892 env[1208]: time="2025-03-17T18:31:34.117861489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:31:34.118454 env[1208]: time="2025-03-17T18:31:34.118432079Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 17 18:31:34.118513 env[1208]: time="2025-03-17T18:31:34.118495247Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 17 18:31:34.118630 systemd[1]: Started containerd.service.
Mar 17 18:31:34.119768 env[1208]: time="2025-03-17T18:31:34.119740114Z" level=info msg="containerd successfully booted in 0.036645s"
Mar 17 18:31:34.125841 env[1208]: time="2025-03-17T18:31:34.125774735Z" level=info msg="Start subscribing containerd event"
Mar 17 18:31:34.125925 env[1208]: time="2025-03-17T18:31:34.125856147Z" level=info msg="Start recovering state"
Mar 17 18:31:34.125982 env[1208]: time="2025-03-17T18:31:34.125948032Z" level=info msg="Start event monitor"
Mar 17 18:31:34.125982 env[1208]: time="2025-03-17T18:31:34.125965943Z" level=info msg="Start snapshots syncer"
Mar 17 18:31:34.126030 env[1208]: time="2025-03-17T18:31:34.125983688Z" level=info msg="Start cni network conf syncer for default"
Mar 17 18:31:34.126030 env[1208]: time="2025-03-17T18:31:34.125994992Z" level=info msg="Start streaming server"
Mar 17 18:31:34.143286 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 18:31:34.884307 systemd-networkd[1045]: eth0: Gained IPv6LL
Mar 17 18:31:34.886359 systemd[1]: Finished systemd-networkd-wait-online.service.
Mar 17 18:31:34.887711 systemd[1]: Reached target network-online.target.
Mar 17 18:31:34.890271 systemd[1]: Starting kubelet.service...
Mar 17 18:31:35.500330 systemd[1]: Started kubelet.service.
Mar 17 18:31:35.590877 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 18:31:35.608670 systemd[1]: Finished sshd-keygen.service.
Mar 17 18:31:35.611258 systemd[1]: Starting issuegen.service...
Mar 17 18:31:35.615870 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 18:31:35.616114 systemd[1]: Finished issuegen.service.
Mar 17 18:31:35.618471 systemd[1]: Starting systemd-user-sessions.service...
Mar 17 18:31:35.625271 systemd[1]: Finished systemd-user-sessions.service.
Mar 17 18:31:35.627824 systemd[1]: Started getty@tty1.service.
Mar 17 18:31:35.630090 systemd[1]: Started serial-getty@ttyAMA0.service.
Mar 17 18:31:35.631372 systemd[1]: Reached target getty.target.
Mar 17 18:31:35.632319 systemd[1]: Reached target multi-user.target.
Mar 17 18:31:35.634549 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Mar 17 18:31:35.641319 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar 17 18:31:35.641471 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Mar 17 18:31:35.642657 systemd[1]: Startup finished in 616ms (kernel) + 4.141s (initrd) + 4.928s (userspace) = 9.686s.
Mar 17 18:31:35.968008 kubelet[1256]: E0317 18:31:35.967887 1256 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 18:31:35.969577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 18:31:35.969703 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 18:31:39.629502 systemd[1]: Created slice system-sshd.slice.
Mar 17 18:31:39.630640 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:53058.service.
Mar 17 18:31:39.684382 sshd[1278]: Accepted publickey for core from 10.0.0.1 port 53058 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:39.686921 sshd[1278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:39.696335 systemd[1]: Created slice user-500.slice.
Mar 17 18:31:39.697465 systemd[1]: Starting user-runtime-dir@500.service...
Mar 17 18:31:39.699156 systemd-logind[1199]: New session 1 of user core.
Mar 17 18:31:39.705500 systemd[1]: Finished user-runtime-dir@500.service.
Mar 17 18:31:39.706727 systemd[1]: Starting user@500.service...
Mar 17 18:31:39.709520 (systemd)[1281]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:39.773032 systemd[1281]: Queued start job for default target default.target.
Mar 17 18:31:39.773538 systemd[1281]: Reached target paths.target.
Mar 17 18:31:39.773557 systemd[1281]: Reached target sockets.target.
Mar 17 18:31:39.773569 systemd[1281]: Reached target timers.target.
Mar 17 18:31:39.773579 systemd[1281]: Reached target basic.target.
Mar 17 18:31:39.773628 systemd[1281]: Reached target default.target.
Mar 17 18:31:39.773655 systemd[1281]: Startup finished in 58ms.
Mar 17 18:31:39.773851 systemd[1]: Started user@500.service.
Mar 17 18:31:39.774987 systemd[1]: Started session-1.scope.
Mar 17 18:31:39.827378 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:53064.service.
Mar 17 18:31:39.883404 sshd[1290]: Accepted publickey for core from 10.0.0.1 port 53064 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:39.884978 sshd[1290]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:39.889345 systemd-logind[1199]: New session 2 of user core.
Mar 17 18:31:39.889798 systemd[1]: Started session-2.scope.
Mar 17 18:31:39.943877 sshd[1290]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:39.946738 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:53064.service: Deactivated successfully.
Mar 17 18:31:39.947370 systemd[1]: session-2.scope: Deactivated successfully.
Mar 17 18:31:39.947947 systemd-logind[1199]: Session 2 logged out. Waiting for processes to exit.
Mar 17 18:31:39.949098 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:53066.service.
Mar 17 18:31:39.949877 systemd-logind[1199]: Removed session 2.
Mar 17 18:31:39.993780 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 53066 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:39.995025 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:39.998740 systemd-logind[1199]: New session 3 of user core.
Mar 17 18:31:39.999226 systemd[1]: Started session-3.scope.
Mar 17 18:31:40.049490 sshd[1296]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:40.052190 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:53066.service: Deactivated successfully.
Mar 17 18:31:40.052784 systemd[1]: session-3.scope: Deactivated successfully.
Mar 17 18:31:40.053262 systemd-logind[1199]: Session 3 logged out. Waiting for processes to exit.
Mar 17 18:31:40.054334 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:53082.service.
Mar 17 18:31:40.054970 systemd-logind[1199]: Removed session 3.
Mar 17 18:31:40.097966 sshd[1302]: Accepted publickey for core from 10.0.0.1 port 53082 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:40.099529 sshd[1302]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:40.102877 systemd-logind[1199]: New session 4 of user core.
Mar 17 18:31:40.103716 systemd[1]: Started session-4.scope.
Mar 17 18:31:40.157162 sshd[1302]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:40.161651 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:53082.service: Deactivated successfully.
Mar 17 18:31:40.162357 systemd[1]: session-4.scope: Deactivated successfully.
Mar 17 18:31:40.162877 systemd-logind[1199]: Session 4 logged out. Waiting for processes to exit.
Mar 17 18:31:40.164069 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:53092.service.
Mar 17 18:31:40.164761 systemd-logind[1199]: Removed session 4.
Mar 17 18:31:40.208210 sshd[1308]: Accepted publickey for core from 10.0.0.1 port 53092 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:40.209799 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:40.213053 systemd-logind[1199]: New session 5 of user core.
Mar 17 18:31:40.213905 systemd[1]: Started session-5.scope.
Mar 17 18:31:40.276382 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 17 18:31:40.276665 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Mar 17 18:31:40.291334 systemd[1]: Starting coreos-metadata.service...
Mar 17 18:31:40.299118 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 17 18:31:40.299312 systemd[1]: Finished coreos-metadata.service.
Mar 17 18:31:40.771619 systemd[1]: Stopped kubelet.service.
Mar 17 18:31:40.773581 systemd[1]: Starting kubelet.service...
Mar 17 18:31:40.796706 systemd[1]: Reloading.
Mar 17 18:31:40.845096 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-03-17T18:31:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:31:40.845131 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2025-03-17T18:31:40Z" level=info msg="torcx already run"
Mar 17 18:31:41.007899 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:31:41.007921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:31:41.023149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:31:41.086907 systemd[1]: Started kubelet.service.
Mar 17 18:31:41.088161 systemd[1]: Stopping kubelet.service...
Mar 17 18:31:41.088408 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 18:31:41.088587 systemd[1]: Stopped kubelet.service.
Mar 17 18:31:41.090099 systemd[1]: Starting kubelet.service...
Mar 17 18:31:41.173517 systemd[1]: Started kubelet.service.
Mar 17 18:31:41.204820 kubelet[1415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:31:41.204820 kubelet[1415]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 17 18:31:41.204820 kubelet[1415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 18:31:41.205146 kubelet[1415]: I0317 18:31:41.204861 1415 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:31:41.989589 kubelet[1415]: I0317 18:31:41.989549 1415 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 17 18:31:41.989730 kubelet[1415]: I0317 18:31:41.989719 1415 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:31:41.990094 kubelet[1415]: I0317 18:31:41.990074 1415 server.go:954] "Client rotation is on, will bootstrap in background" Mar 17 18:31:42.043631 kubelet[1415]: I0317 18:31:42.043579 1415 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:31:42.052609 kubelet[1415]: E0317 18:31:42.052575 1415 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:31:42.052609 kubelet[1415]: I0317 18:31:42.052610 1415 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:31:42.055146 kubelet[1415]: I0317 18:31:42.055119 1415 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:31:42.055352 kubelet[1415]: I0317 18:31:42.055315 1415 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:31:42.055511 kubelet[1415]: I0317 18:31:42.055346 1415 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.128","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 18:31:42.055592 kubelet[1415]: I0317 18:31:42.055575 1415 topology_manager.go:138] "Creating topology manager with none policy" 
Mar 17 18:31:42.055592 kubelet[1415]: I0317 18:31:42.055584 1415 container_manager_linux.go:304] "Creating device plugin manager" Mar 17 18:31:42.055805 kubelet[1415]: I0317 18:31:42.055774 1415 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:31:42.062839 kubelet[1415]: I0317 18:31:42.062819 1415 kubelet.go:446] "Attempting to sync node with API server" Mar 17 18:31:42.062909 kubelet[1415]: I0317 18:31:42.062844 1415 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:31:42.062909 kubelet[1415]: I0317 18:31:42.062872 1415 kubelet.go:352] "Adding apiserver pod source" Mar 17 18:31:42.062909 kubelet[1415]: I0317 18:31:42.062893 1415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:31:42.063186 kubelet[1415]: E0317 18:31:42.063090 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:42.063186 kubelet[1415]: E0317 18:31:42.063114 1415 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:42.065845 kubelet[1415]: I0317 18:31:42.065826 1415 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:31:42.066541 kubelet[1415]: I0317 18:31:42.066515 1415 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:31:42.066668 kubelet[1415]: W0317 18:31:42.066654 1415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 18:31:42.068299 kubelet[1415]: I0317 18:31:42.068278 1415 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 17 18:31:42.068345 kubelet[1415]: I0317 18:31:42.068314 1415 server.go:1287] "Started kubelet"
Mar 17 18:31:42.068434 kubelet[1415]: I0317 18:31:42.068405 1415 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 18:31:42.069386 kubelet[1415]: I0317 18:31:42.069365 1415 server.go:490] "Adding debug handlers to kubelet server"
Mar 17 18:31:42.070005 kubelet[1415]: I0317 18:31:42.069708 1415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 18:31:42.070120 kubelet[1415]: I0317 18:31:42.070089 1415 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 18:31:42.073073 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Mar 17 18:31:42.073237 kubelet[1415]: W0317 18:31:42.073200 1415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.128" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 17 18:31:42.073308 kubelet[1415]: E0317 18:31:42.073238 1415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.128\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 18:31:42.073334 kubelet[1415]: I0317 18:31:42.073215 1415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 18:31:42.073334 kubelet[1415]: I0317 18:31:42.073313 1415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 18:31:42.075731 kubelet[1415]: E0317 18:31:42.075706 1415 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 18:31:42.076913 kubelet[1415]: W0317 18:31:42.076889 1415 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 17 18:31:42.076990 kubelet[1415]: E0317 18:31:42.076916 1415 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 17 18:31:42.077081 kubelet[1415]: E0317 18:31:42.077058 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.077081 kubelet[1415]: I0317 18:31:42.077080 1415 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 17 18:31:42.077211 kubelet[1415]: I0317 18:31:42.077193 1415 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 18:31:42.077312 kubelet[1415]: I0317 18:31:42.077292 1415 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 18:31:42.077644 kubelet[1415]: I0317 18:31:42.077623 1415 factory.go:221] Registration of the systemd container factory successfully
Mar 17 18:31:42.077734 kubelet[1415]: I0317 18:31:42.077713 1415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 18:31:42.078876 kubelet[1415]: I0317 18:31:42.078859 1415 factory.go:221] Registration of the containerd container factory successfully
Mar 17 18:31:42.089859 kubelet[1415]: I0317 18:31:42.089835 1415 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 17 18:31:42.089859 kubelet[1415]: I0317 18:31:42.089852 1415 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 17 18:31:42.089969 kubelet[1415]: I0317 18:31:42.089871 1415 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 18:31:42.091838 kubelet[1415]: E0317 18:31:42.091784 1415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.128\" not found" node="10.0.0.128"
Mar 17 18:31:42.159923 kubelet[1415]: I0317 18:31:42.159885 1415 policy_none.go:49] "None policy: Start"
Mar 17 18:31:42.159923 kubelet[1415]: I0317 18:31:42.159918 1415 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 17 18:31:42.160059 kubelet[1415]: I0317 18:31:42.159941 1415 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 18:31:42.165366 systemd[1]: Created slice kubepods.slice.
Mar 17 18:31:42.169424 systemd[1]: Created slice kubepods-besteffort.slice.
Mar 17 18:31:42.177514 kubelet[1415]: E0317 18:31:42.177486 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.180159 systemd[1]: Created slice kubepods-burstable.slice.
Mar 17 18:31:42.182453 kubelet[1415]: I0317 18:31:42.182422 1415 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 18:31:42.182687 kubelet[1415]: I0317 18:31:42.182672 1415 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 18:31:42.182809 kubelet[1415]: I0317 18:31:42.182765 1415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 18:31:42.183091 kubelet[1415]: I0317 18:31:42.183075 1415 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 18:31:42.184163 kubelet[1415]: E0317 18:31:42.184112 1415 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 17 18:31:42.184243 kubelet[1415]: E0317 18:31:42.184178 1415 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.128\" not found"
Mar 17 18:31:42.201778 kubelet[1415]: I0317 18:31:42.201745 1415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 18:31:42.202869 kubelet[1415]: I0317 18:31:42.202849 1415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 18:31:42.202969 kubelet[1415]: I0317 18:31:42.202952 1415 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 17 18:31:42.203075 kubelet[1415]: I0317 18:31:42.203064 1415 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 17 18:31:42.203283 kubelet[1415]: I0317 18:31:42.203271 1415 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 17 18:31:42.203814 kubelet[1415]: E0317 18:31:42.203794 1415 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 17 18:31:42.284523 kubelet[1415]: I0317 18:31:42.284416 1415 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.128"
Mar 17 18:31:42.290996 kubelet[1415]: I0317 18:31:42.290969 1415 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.128"
Mar 17 18:31:42.291084 kubelet[1415]: E0317 18:31:42.291005 1415 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.128\": node \"10.0.0.128\" not found"
Mar 17 18:31:42.295892 kubelet[1415]: E0317 18:31:42.295856 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.396054 kubelet[1415]: E0317 18:31:42.395981 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.496883 kubelet[1415]: E0317 18:31:42.496838 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.597051 kubelet[1415]: E0317 18:31:42.596941 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.697815 kubelet[1415]: E0317 18:31:42.697775 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.698173 sudo[1311]: pam_unix(sudo:session): session closed for user root
Mar 17 18:31:42.699971 sshd[1308]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:42.702330 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:53092.service: Deactivated successfully.
Mar 17 18:31:42.703004 systemd[1]: session-5.scope: Deactivated successfully.
Mar 17 18:31:42.703520 systemd-logind[1199]: Session 5 logged out. Waiting for processes to exit.
Mar 17 18:31:42.704332 systemd-logind[1199]: Removed session 5.
Mar 17 18:31:42.797990 kubelet[1415]: E0317 18:31:42.797936 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.898914 kubelet[1415]: E0317 18:31:42.898818 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:42.991736 kubelet[1415]: I0317 18:31:42.991696 1415 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 17 18:31:42.991881 kubelet[1415]: W0317 18:31:42.991854 1415 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 18:31:42.991961 kubelet[1415]: W0317 18:31:42.991866 1415 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Mar 17 18:31:42.999867 kubelet[1415]: E0317 18:31:42.999840 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.063895 kubelet[1415]: E0317 18:31:43.063857 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:43.099948 kubelet[1415]: E0317 18:31:43.099915 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.200668 kubelet[1415]: E0317 18:31:43.200581 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.301650 kubelet[1415]: E0317 18:31:43.301616 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.402363 kubelet[1415]: E0317 18:31:43.402334 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.503460 kubelet[1415]: E0317 18:31:43.503372 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.604211 kubelet[1415]: E0317 18:31:43.604182 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.705256 kubelet[1415]: E0317 18:31:43.705222 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.805601 kubelet[1415]: E0317 18:31:43.805508 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:43.905982 kubelet[1415]: E0317 18:31:43.905959 1415 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.128\" not found"
Mar 17 18:31:44.006746 kubelet[1415]: I0317 18:31:44.006719 1415 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Mar 17 18:31:44.007025 env[1208]: time="2025-03-17T18:31:44.006975612Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 18:31:44.007422 kubelet[1415]: I0317 18:31:44.007401 1415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Mar 17 18:31:44.064386 kubelet[1415]: E0317 18:31:44.064309 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:44.064386 kubelet[1415]: I0317 18:31:44.064325 1415 apiserver.go:52] "Watching apiserver"
Mar 17 18:31:44.073288 systemd[1]: Created slice kubepods-besteffort-pod752a52c9_50b5_4b2b_8a2a_c502a800a652.slice.
Mar 17 18:31:44.078655 kubelet[1415]: I0317 18:31:44.078626 1415 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 18:31:44.088775 systemd[1]: Created slice kubepods-burstable-pod2e200104_0023_42bf_be43_d7ee1ed219e0.slice.
Mar 17 18:31:44.090495 kubelet[1415]: I0317 18:31:44.090461 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6mlx\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-kube-api-access-g6mlx\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.090651 kubelet[1415]: I0317 18:31:44.090634 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-etc-cni-netd\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.090761 kubelet[1415]: I0317 18:31:44.090745 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-xtables-lock\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.090852 kubelet[1415]: I0317 18:31:44.090838 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-config-path\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.090941 kubelet[1415]: I0317 18:31:44.090928 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-net\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091038 kubelet[1415]: I0317 18:31:44.091017 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/752a52c9-50b5-4b2b-8a2a-c502a800a652-xtables-lock\") pod \"kube-proxy-9dwnk\" (UID: \"752a52c9-50b5-4b2b-8a2a-c502a800a652\") " pod="kube-system/kube-proxy-9dwnk"
Mar 17 18:31:44.091153 kubelet[1415]: I0317 18:31:44.091113 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-hostproc\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091253 kubelet[1415]: I0317 18:31:44.091239 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-cgroup\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091344 kubelet[1415]: I0317 18:31:44.091331 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-kernel\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091442 kubelet[1415]: I0317 18:31:44.091428 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/752a52c9-50b5-4b2b-8a2a-c502a800a652-lib-modules\") pod \"kube-proxy-9dwnk\" (UID: \"752a52c9-50b5-4b2b-8a2a-c502a800a652\") " pod="kube-system/kube-proxy-9dwnk"
Mar 17 18:31:44.091537 kubelet[1415]: I0317 18:31:44.091522 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cni-path\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091631 kubelet[1415]: I0317 18:31:44.091616 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-lib-modules\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091784 kubelet[1415]: I0317 18:31:44.091768 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e200104-0023-42bf-be43-d7ee1ed219e0-clustermesh-secrets\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091892 kubelet[1415]: I0317 18:31:44.091878 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-hubble-tls\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.091989 kubelet[1415]: I0317 18:31:44.091976 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/752a52c9-50b5-4b2b-8a2a-c502a800a652-kube-proxy\") pod \"kube-proxy-9dwnk\" (UID: \"752a52c9-50b5-4b2b-8a2a-c502a800a652\") " pod="kube-system/kube-proxy-9dwnk"
Mar 17 18:31:44.092082 kubelet[1415]: I0317 18:31:44.092069 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdq7\" (UniqueName: \"kubernetes.io/projected/752a52c9-50b5-4b2b-8a2a-c502a800a652-kube-api-access-kjdq7\") pod \"kube-proxy-9dwnk\" (UID: \"752a52c9-50b5-4b2b-8a2a-c502a800a652\") " pod="kube-system/kube-proxy-9dwnk"
Mar 17 18:31:44.092176 kubelet[1415]: I0317 18:31:44.092163 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-run\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.092273 kubelet[1415]: I0317 18:31:44.092260 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-bpf-maps\") pod \"cilium-xsw8n\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " pod="kube-system/cilium-xsw8n"
Mar 17 18:31:44.193696 kubelet[1415]: I0317 18:31:44.193656 1415 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 17 18:31:44.387665 kubelet[1415]: E0317 18:31:44.387559 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:44.388794 env[1208]: time="2025-03-17T18:31:44.388755249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dwnk,Uid:752a52c9-50b5-4b2b-8a2a-c502a800a652,Namespace:kube-system,Attempt:0,}"
Mar 17 18:31:44.400841 kubelet[1415]: E0317 18:31:44.400802 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:44.401292 env[1208]: time="2025-03-17T18:31:44.401252276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsw8n,Uid:2e200104-0023-42bf-be43-d7ee1ed219e0,Namespace:kube-system,Attempt:0,}"
Mar 17 18:31:44.973946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612780692.mount: Deactivated successfully.
Mar 17 18:31:44.978443 env[1208]: time="2025-03-17T18:31:44.978396697Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.980246 env[1208]: time="2025-03-17T18:31:44.980214574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.981019 env[1208]: time="2025-03-17T18:31:44.980991898Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.982105 env[1208]: time="2025-03-17T18:31:44.982061183Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.984058 env[1208]: time="2025-03-17T18:31:44.984028576Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.986572 env[1208]: time="2025-03-17T18:31:44.986543564Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.988023 env[1208]: time="2025-03-17T18:31:44.987997671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:44.989629 env[1208]: time="2025-03-17T18:31:44.989603680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:45.023296 env[1208]: time="2025-03-17T18:31:45.023230763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:31:45.023296 env[1208]: time="2025-03-17T18:31:45.023275642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:31:45.023296 env[1208]: time="2025-03-17T18:31:45.023290534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:31:45.023679 env[1208]: time="2025-03-17T18:31:45.023593022Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b pid=1482 runtime=io.containerd.runc.v2
Mar 17 18:31:45.023784 env[1208]: time="2025-03-17T18:31:45.023734075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:31:45.023833 env[1208]: time="2025-03-17T18:31:45.023797317Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:31:45.023869 env[1208]: time="2025-03-17T18:31:45.023824680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:31:45.024018 env[1208]: time="2025-03-17T18:31:45.023986760Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b41c67b6e929337e1cb976d4da7db03b6e74465760453a883e8a79e25c7bfaf5 pid=1483 runtime=io.containerd.runc.v2
Mar 17 18:31:45.040938 systemd[1]: Started cri-containerd-d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b.scope.
Mar 17 18:31:45.043452 systemd[1]: Started cri-containerd-b41c67b6e929337e1cb976d4da7db03b6e74465760453a883e8a79e25c7bfaf5.scope.
Mar 17 18:31:45.065026 kubelet[1415]: E0317 18:31:45.064971 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:45.093126 env[1208]: time="2025-03-17T18:31:45.093079173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9dwnk,Uid:752a52c9-50b5-4b2b-8a2a-c502a800a652,Namespace:kube-system,Attempt:0,} returns sandbox id \"b41c67b6e929337e1cb976d4da7db03b6e74465760453a883e8a79e25c7bfaf5\""
Mar 17 18:31:45.093916 env[1208]: time="2025-03-17T18:31:45.093886062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xsw8n,Uid:2e200104-0023-42bf-be43-d7ee1ed219e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\""
Mar 17 18:31:45.094012 kubelet[1415]: E0317 18:31:45.093987 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:45.095133 kubelet[1415]: E0317 18:31:45.095111 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:45.096113 env[1208]: time="2025-03-17T18:31:45.096083306Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 17 18:31:46.065152 kubelet[1415]: E0317 18:31:46.065097 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:46.133913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361203615.mount: Deactivated successfully.
Mar 17 18:31:46.614379 env[1208]: time="2025-03-17T18:31:46.614332082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:46.616333 env[1208]: time="2025-03-17T18:31:46.616302177Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:46.617551 env[1208]: time="2025-03-17T18:31:46.617511026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:46.618995 env[1208]: time="2025-03-17T18:31:46.618971352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:46.619466 env[1208]: time="2025-03-17T18:31:46.619440282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:2a637602f3e88e76046aa1a75bccdb37b25b2fcba99a380412e2c27ccd55c547\""
Mar 17 18:31:46.621499 env[1208]: time="2025-03-17T18:31:46.621461817Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Mar 17 18:31:46.622008 env[1208]: time="2025-03-17T18:31:46.621977350Z" level=info msg="CreateContainer within sandbox \"b41c67b6e929337e1cb976d4da7db03b6e74465760453a883e8a79e25c7bfaf5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 18:31:46.632824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount441617830.mount: Deactivated successfully.
Mar 17 18:31:46.636011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3756961207.mount: Deactivated successfully.
Mar 17 18:31:46.638811 env[1208]: time="2025-03-17T18:31:46.638770501Z" level=info msg="CreateContainer within sandbox \"b41c67b6e929337e1cb976d4da7db03b6e74465760453a883e8a79e25c7bfaf5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50ee63cec3113a1547a3aa686fe7841fb34b6d2d46406123ff818b1b54ced421\""
Mar 17 18:31:46.639422 env[1208]: time="2025-03-17T18:31:46.639392180Z" level=info msg="StartContainer for \"50ee63cec3113a1547a3aa686fe7841fb34b6d2d46406123ff818b1b54ced421\""
Mar 17 18:31:46.654173 systemd[1]: Started cri-containerd-50ee63cec3113a1547a3aa686fe7841fb34b6d2d46406123ff818b1b54ced421.scope.
Mar 17 18:31:46.692128 env[1208]: time="2025-03-17T18:31:46.692078671Z" level=info msg="StartContainer for \"50ee63cec3113a1547a3aa686fe7841fb34b6d2d46406123ff818b1b54ced421\" returns successfully"
Mar 17 18:31:47.065293 kubelet[1415]: E0317 18:31:47.065199 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:47.215286 kubelet[1415]: E0317 18:31:47.215258 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:48.065785 kubelet[1415]: E0317 18:31:48.065727 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:48.216241 kubelet[1415]: E0317 18:31:48.216206 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:49.066516 kubelet[1415]: E0317 18:31:49.066473 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:50.066860 kubelet[1415]: E0317 18:31:50.066813 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:50.989767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180610640.mount: Deactivated successfully.
Mar 17 18:31:51.067802 kubelet[1415]: E0317 18:31:51.067749 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:52.068509 kubelet[1415]: E0317 18:31:52.068474 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:53.069451 kubelet[1415]: E0317 18:31:53.069375 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:31:53.230061 env[1208]: time="2025-03-17T18:31:53.230011619Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:53.231467 env[1208]: time="2025-03-17T18:31:53.231430216Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:53.233416 env[1208]: time="2025-03-17T18:31:53.233389152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Mar 17 18:31:53.234096 env[1208]: time="2025-03-17T18:31:53.234062179Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Mar 17 18:31:53.236800 env[1208]: time="2025-03-17T18:31:53.236762030Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:31:53.244769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771230344.mount: Deactivated successfully.
Mar 17 18:31:53.249777 env[1208]: time="2025-03-17T18:31:53.249730458Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\""
Mar 17 18:31:53.250390 env[1208]: time="2025-03-17T18:31:53.250338284Z" level=info msg="StartContainer for \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\""
Mar 17 18:31:53.267486 systemd[1]: Started cri-containerd-f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2.scope.
Mar 17 18:31:53.305491 env[1208]: time="2025-03-17T18:31:53.305404238Z" level=info msg="StartContainer for \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\" returns successfully"
Mar 17 18:31:53.336422 systemd[1]: cri-containerd-f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2.scope: Deactivated successfully.
Mar 17 18:31:53.514462 env[1208]: time="2025-03-17T18:31:53.514416216Z" level=info msg="shim disconnected" id=f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2 Mar 17 18:31:53.514462 env[1208]: time="2025-03-17T18:31:53.514461515Z" level=warning msg="cleaning up after shim disconnected" id=f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2 namespace=k8s.io Mar 17 18:31:53.514462 env[1208]: time="2025-03-17T18:31:53.514470904Z" level=info msg="cleaning up dead shim" Mar 17 18:31:53.520571 env[1208]: time="2025-03-17T18:31:53.520532480Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1772 runtime=io.containerd.runc.v2\n" Mar 17 18:31:54.069537 kubelet[1415]: E0317 18:31:54.069497 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:54.225539 kubelet[1415]: E0317 18:31:54.225499 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:54.227265 env[1208]: time="2025-03-17T18:31:54.227223635Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:31:54.237388 env[1208]: time="2025-03-17T18:31:54.237350138Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\"" Mar 17 18:31:54.238255 env[1208]: time="2025-03-17T18:31:54.238220879Z" level=info msg="StartContainer for \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\"" Mar 17 18:31:54.243162 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2-rootfs.mount: Deactivated successfully. Mar 17 18:31:54.244241 kubelet[1415]: I0317 18:31:54.243952 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9dwnk" podStartSLOduration=10.71899058 podStartE2EDuration="12.243930066s" podCreationTimestamp="2025-03-17 18:31:42 +0000 UTC" firstStartedPulling="2025-03-17 18:31:45.0956909 +0000 UTC m=+3.919197275" lastFinishedPulling="2025-03-17 18:31:46.620630386 +0000 UTC m=+5.444136761" observedRunningTime="2025-03-17 18:31:47.222930369 +0000 UTC m=+6.046436704" watchObservedRunningTime="2025-03-17 18:31:54.243930066 +0000 UTC m=+13.067436442" Mar 17 18:31:54.256131 systemd[1]: Started cri-containerd-05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603.scope. Mar 17 18:31:54.285302 env[1208]: time="2025-03-17T18:31:54.285251268Z" level=info msg="StartContainer for \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\" returns successfully" Mar 17 18:31:54.298089 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:31:54.298688 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:31:54.298955 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:31:54.300732 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:31:54.302203 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:31:54.304646 systemd[1]: cri-containerd-05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603.scope: Deactivated successfully. Mar 17 18:31:54.307277 systemd[1]: Finished systemd-sysctl.service. 
Mar 17 18:31:54.323353 env[1208]: time="2025-03-17T18:31:54.323256195Z" level=info msg="shim disconnected" id=05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603 Mar 17 18:31:54.323353 env[1208]: time="2025-03-17T18:31:54.323295621Z" level=warning msg="cleaning up after shim disconnected" id=05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603 namespace=k8s.io Mar 17 18:31:54.323353 env[1208]: time="2025-03-17T18:31:54.323305528Z" level=info msg="cleaning up dead shim" Mar 17 18:31:54.329912 env[1208]: time="2025-03-17T18:31:54.329862555Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1837 runtime=io.containerd.runc.v2\n" Mar 17 18:31:55.070257 kubelet[1415]: E0317 18:31:55.070220 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:55.227895 kubelet[1415]: E0317 18:31:55.227862 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:55.229737 env[1208]: time="2025-03-17T18:31:55.229698759Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:31:55.243090 env[1208]: time="2025-03-17T18:31:55.241830425Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\"" Mar 17 18:31:55.243023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603-rootfs.mount: Deactivated successfully. 
Mar 17 18:31:55.243661 env[1208]: time="2025-03-17T18:31:55.243632425Z" level=info msg="StartContainer for \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\"" Mar 17 18:31:55.260861 systemd[1]: run-containerd-runc-k8s.io-5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746-runc.7hoook.mount: Deactivated successfully. Mar 17 18:31:55.263092 systemd[1]: Started cri-containerd-5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746.scope. Mar 17 18:31:55.294668 env[1208]: time="2025-03-17T18:31:55.294626254Z" level=info msg="StartContainer for \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\" returns successfully" Mar 17 18:31:55.305491 systemd[1]: cri-containerd-5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746.scope: Deactivated successfully. Mar 17 18:31:55.320688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746-rootfs.mount: Deactivated successfully. 
Mar 17 18:31:55.325664 env[1208]: time="2025-03-17T18:31:55.325622789Z" level=info msg="shim disconnected" id=5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746 Mar 17 18:31:55.325805 env[1208]: time="2025-03-17T18:31:55.325787738Z" level=warning msg="cleaning up after shim disconnected" id=5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746 namespace=k8s.io Mar 17 18:31:55.325865 env[1208]: time="2025-03-17T18:31:55.325851969Z" level=info msg="cleaning up dead shim" Mar 17 18:31:55.331622 env[1208]: time="2025-03-17T18:31:55.331592396Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1892 runtime=io.containerd.runc.v2\n" Mar 17 18:31:56.070820 kubelet[1415]: E0317 18:31:56.070781 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:56.230646 kubelet[1415]: E0317 18:31:56.230617 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:56.232569 env[1208]: time="2025-03-17T18:31:56.232526104Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:31:56.243127 env[1208]: time="2025-03-17T18:31:56.243079078Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\"" Mar 17 18:31:56.243703 env[1208]: time="2025-03-17T18:31:56.243672340Z" level=info msg="StartContainer for \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\"" Mar 17 18:31:56.259359 systemd[1]: Started 
cri-containerd-cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c.scope. Mar 17 18:31:56.287254 systemd[1]: cri-containerd-cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c.scope: Deactivated successfully. Mar 17 18:31:56.288238 env[1208]: time="2025-03-17T18:31:56.288142205Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e200104_0023_42bf_be43_d7ee1ed219e0.slice/cri-containerd-cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c.scope/memory.events\": no such file or directory" Mar 17 18:31:56.290046 env[1208]: time="2025-03-17T18:31:56.290011815Z" level=info msg="StartContainer for \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\" returns successfully" Mar 17 18:31:56.303824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c-rootfs.mount: Deactivated successfully. 
Mar 17 18:31:56.307425 env[1208]: time="2025-03-17T18:31:56.307379424Z" level=info msg="shim disconnected" id=cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c Mar 17 18:31:56.307425 env[1208]: time="2025-03-17T18:31:56.307420268Z" level=warning msg="cleaning up after shim disconnected" id=cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c namespace=k8s.io Mar 17 18:31:56.307425 env[1208]: time="2025-03-17T18:31:56.307428806Z" level=info msg="cleaning up dead shim" Mar 17 18:31:56.313963 env[1208]: time="2025-03-17T18:31:56.313911677Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1948 runtime=io.containerd.runc.v2\n" Mar 17 18:31:57.071462 kubelet[1415]: E0317 18:31:57.071420 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:57.234755 kubelet[1415]: E0317 18:31:57.234648 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:57.238845 env[1208]: time="2025-03-17T18:31:57.238803643Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:31:57.250574 env[1208]: time="2025-03-17T18:31:57.250522567Z" level=info msg="CreateContainer within sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\"" Mar 17 18:31:57.251029 env[1208]: time="2025-03-17T18:31:57.250997183Z" level=info msg="StartContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\"" Mar 17 18:31:57.267916 systemd[1]: Started 
cri-containerd-1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e.scope. Mar 17 18:31:57.300183 env[1208]: time="2025-03-17T18:31:57.300109428Z" level=info msg="StartContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" returns successfully" Mar 17 18:31:57.458556 kubelet[1415]: I0317 18:31:57.457645 1415 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 17 18:31:57.577148 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:31:57.814151 kernel: Initializing XFRM netlink socket Mar 17 18:31:57.817147 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:31:58.072189 kubelet[1415]: E0317 18:31:58.072051 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:58.238548 kubelet[1415]: E0317 18:31:58.238492 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:58.254086 kubelet[1415]: I0317 18:31:58.254028 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xsw8n" podStartSLOduration=8.114227507 podStartE2EDuration="16.254009829s" podCreationTimestamp="2025-03-17 18:31:42 +0000 UTC" firstStartedPulling="2025-03-17 18:31:45.095727061 +0000 UTC m=+3.919233436" lastFinishedPulling="2025-03-17 18:31:53.235509383 +0000 UTC m=+12.059015758" observedRunningTime="2025-03-17 18:31:58.253528149 +0000 UTC m=+17.077034604" watchObservedRunningTime="2025-03-17 18:31:58.254009829 +0000 UTC m=+17.077516204" Mar 17 18:31:58.708046 systemd[1]: Created slice kubepods-besteffort-pod2050b461_2398_47c3_bedb_981931a7faaf.slice. 
Mar 17 18:31:58.784344 kubelet[1415]: I0317 18:31:58.784262 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qnj5\" (UniqueName: \"kubernetes.io/projected/2050b461-2398-47c3-bedb-981931a7faaf-kube-api-access-2qnj5\") pod \"nginx-deployment-7fcdb87857-rzl7f\" (UID: \"2050b461-2398-47c3-bedb-981931a7faaf\") " pod="default/nginx-deployment-7fcdb87857-rzl7f" Mar 17 18:31:59.011659 env[1208]: time="2025-03-17T18:31:59.011335308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rzl7f,Uid:2050b461-2398-47c3-bedb-981931a7faaf,Namespace:default,Attempt:0,}" Mar 17 18:31:59.072586 kubelet[1415]: E0317 18:31:59.072546 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:31:59.239833 kubelet[1415]: E0317 18:31:59.239807 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:59.423690 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:31:59.423788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:31:59.421927 systemd-networkd[1045]: cilium_host: Link UP Mar 17 18:31:59.422035 systemd-networkd[1045]: cilium_net: Link UP Mar 17 18:31:59.422788 systemd-networkd[1045]: cilium_net: Gained carrier Mar 17 18:31:59.423934 systemd-networkd[1045]: cilium_host: Gained carrier Mar 17 18:31:59.496535 systemd-networkd[1045]: cilium_vxlan: Link UP Mar 17 18:31:59.496541 systemd-networkd[1045]: cilium_vxlan: Gained carrier Mar 17 18:31:59.764297 systemd-networkd[1045]: cilium_net: Gained IPv6LL Mar 17 18:31:59.802162 kernel: NET: Registered PF_ALG protocol family Mar 17 18:32:00.073674 kubelet[1415]: E0317 18:32:00.073619 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 18:32:00.241316 kubelet[1415]: E0317 18:32:00.241276 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:00.359034 systemd-networkd[1045]: cilium_host: Gained IPv6LL Mar 17 18:32:00.359843 systemd-networkd[1045]: lxc_health: Link UP Mar 17 18:32:00.371801 systemd-networkd[1045]: lxc_health: Gained carrier Mar 17 18:32:00.372268 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:32:00.557769 systemd-networkd[1045]: lxc2aabaeee9e0e: Link UP Mar 17 18:32:00.569163 kernel: eth0: renamed from tmp2a2c3 Mar 17 18:32:00.579214 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:32:00.579327 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2aabaeee9e0e: link becomes ready Mar 17 18:32:00.579422 systemd-networkd[1045]: lxc2aabaeee9e0e: Gained carrier Mar 17 18:32:01.074405 kubelet[1415]: E0317 18:32:01.074347 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:01.125305 systemd-networkd[1045]: cilium_vxlan: Gained IPv6LL Mar 17 18:32:01.242525 kubelet[1415]: E0317 18:32:01.242485 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:02.063882 kubelet[1415]: E0317 18:32:02.063828 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:02.075295 kubelet[1415]: E0317 18:32:02.075267 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:02.084279 systemd-networkd[1045]: lxc_health: Gained IPv6LL Mar 17 18:32:02.243579 kubelet[1415]: E0317 18:32:02.243536 1415 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:02.340279 systemd-networkd[1045]: lxc2aabaeee9e0e: Gained IPv6LL Mar 17 18:32:03.075438 kubelet[1415]: E0317 18:32:03.075379 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:03.245003 kubelet[1415]: E0317 18:32:03.244953 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:04.012798 env[1208]: time="2025-03-17T18:32:04.012709583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:04.012798 env[1208]: time="2025-03-17T18:32:04.012753054Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:04.012798 env[1208]: time="2025-03-17T18:32:04.012764182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:04.013165 env[1208]: time="2025-03-17T18:32:04.012926096Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181 pid=2506 runtime=io.containerd.runc.v2 Mar 17 18:32:04.029635 systemd[1]: run-containerd-runc-k8s.io-2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181-runc.yAsYiU.mount: Deactivated successfully. Mar 17 18:32:04.031155 systemd[1]: Started cri-containerd-2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181.scope. 
Mar 17 18:32:04.076013 kubelet[1415]: E0317 18:32:04.075959 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:04.093443 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:04.108112 env[1208]: time="2025-03-17T18:32:04.108071958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rzl7f,Uid:2050b461-2398-47c3-bedb-981931a7faaf,Namespace:default,Attempt:0,} returns sandbox id \"2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181\"" Mar 17 18:32:04.109210 env[1208]: time="2025-03-17T18:32:04.109183866Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:32:05.076377 kubelet[1415]: E0317 18:32:05.076333 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:06.076768 kubelet[1415]: E0317 18:32:06.076728 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:06.622661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518963393.mount: Deactivated successfully. 
Mar 17 18:32:07.077344 kubelet[1415]: E0317 18:32:07.077305 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:07.836634 env[1208]: time="2025-03-17T18:32:07.836590402Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.838074 env[1208]: time="2025-03-17T18:32:07.838013668Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.843543 env[1208]: time="2025-03-17T18:32:07.843508747Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.845018 env[1208]: time="2025-03-17T18:32:07.844995341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.845683 env[1208]: time="2025-03-17T18:32:07.845634901Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:32:07.848482 env[1208]: time="2025-03-17T18:32:07.848443406Z" level=info msg="CreateContainer within sandbox \"2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 18:32:07.857899 env[1208]: time="2025-03-17T18:32:07.857856661Z" level=info msg="CreateContainer within sandbox \"2a2c3140e8533b9fadb697ea54c86414f4a529ae074b89bdc89ec3c3b0974181\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"11fb9f18f8431450ced949294c9a5f5ccdbd9b44b14156cebc167996712a4c89\"" Mar 17 18:32:07.858539 env[1208]: time="2025-03-17T18:32:07.858497021Z" level=info msg="StartContainer for \"11fb9f18f8431450ced949294c9a5f5ccdbd9b44b14156cebc167996712a4c89\"" Mar 17 18:32:07.875855 systemd[1]: Started cri-containerd-11fb9f18f8431450ced949294c9a5f5ccdbd9b44b14156cebc167996712a4c89.scope. Mar 17 18:32:07.918661 env[1208]: time="2025-03-17T18:32:07.918557597Z" level=info msg="StartContainer for \"11fb9f18f8431450ced949294c9a5f5ccdbd9b44b14156cebc167996712a4c89\" returns successfully" Mar 17 18:32:08.078209 kubelet[1415]: E0317 18:32:08.078166 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:08.261622 kubelet[1415]: I0317 18:32:08.261501 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-rzl7f" podStartSLOduration=6.523217939 podStartE2EDuration="10.261485277s" podCreationTimestamp="2025-03-17 18:31:58 +0000 UTC" firstStartedPulling="2025-03-17 18:32:04.10875156 +0000 UTC m=+22.932257935" lastFinishedPulling="2025-03-17 18:32:07.847018898 +0000 UTC m=+26.670525273" observedRunningTime="2025-03-17 18:32:08.261217127 +0000 UTC m=+27.084723502" watchObservedRunningTime="2025-03-17 18:32:08.261485277 +0000 UTC m=+27.084991652" Mar 17 18:32:09.079277 kubelet[1415]: E0317 18:32:09.079235 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:10.080232 kubelet[1415]: E0317 18:32:10.080180 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:10.382437 systemd[1]: Created slice kubepods-besteffort-pode6afc290_bd04_4ac2_86e5_c9b9c0745127.slice. 
Mar 17 18:32:10.451476 kubelet[1415]: I0317 18:32:10.451436 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e6afc290-bd04-4ac2-86e5-c9b9c0745127-data\") pod \"nfs-server-provisioner-0\" (UID: \"e6afc290-bd04-4ac2-86e5-c9b9c0745127\") " pod="default/nfs-server-provisioner-0" Mar 17 18:32:10.451686 kubelet[1415]: I0317 18:32:10.451667 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5v9x\" (UniqueName: \"kubernetes.io/projected/e6afc290-bd04-4ac2-86e5-c9b9c0745127-kube-api-access-w5v9x\") pod \"nfs-server-provisioner-0\" (UID: \"e6afc290-bd04-4ac2-86e5-c9b9c0745127\") " pod="default/nfs-server-provisioner-0" Mar 17 18:32:10.685697 env[1208]: time="2025-03-17T18:32:10.685597490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e6afc290-bd04-4ac2-86e5-c9b9c0745127,Namespace:default,Attempt:0,}" Mar 17 18:32:10.707042 systemd-networkd[1045]: lxc64c557692301: Link UP Mar 17 18:32:10.716157 kernel: eth0: renamed from tmp8ee53 Mar 17 18:32:10.723371 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:32:10.723440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc64c557692301: link becomes ready Mar 17 18:32:10.723089 systemd-networkd[1045]: lxc64c557692301: Gained carrier Mar 17 18:32:10.895858 env[1208]: time="2025-03-17T18:32:10.895789238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:10.896011 env[1208]: time="2025-03-17T18:32:10.895826462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:10.896011 env[1208]: time="2025-03-17T18:32:10.895842192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:10.896011 env[1208]: time="2025-03-17T18:32:10.895959146Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8ee532c05796e6dcb756390033880b0abbb83683da63b1d9b8f7ecfab5e517b1 pid=2636 runtime=io.containerd.runc.v2 Mar 17 18:32:10.908199 systemd[1]: Started cri-containerd-8ee532c05796e6dcb756390033880b0abbb83683da63b1d9b8f7ecfab5e517b1.scope. Mar 17 18:32:10.931577 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:10.946926 env[1208]: time="2025-03-17T18:32:10.946833875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e6afc290-bd04-4ac2-86e5-c9b9c0745127,Namespace:default,Attempt:0,} returns sandbox id \"8ee532c05796e6dcb756390033880b0abbb83683da63b1d9b8f7ecfab5e517b1\"" Mar 17 18:32:10.948621 env[1208]: time="2025-03-17T18:32:10.948591586Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 18:32:11.081357 kubelet[1415]: E0317 18:32:11.081307 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:12.082256 kubelet[1415]: E0317 18:32:12.082221 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:12.774305 systemd-networkd[1045]: lxc64c557692301: Gained IPv6LL Mar 17 18:32:13.077232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079636476.mount: Deactivated successfully. 
Mar 17 18:32:13.082691 kubelet[1415]: E0317 18:32:13.082652 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:14.083479 kubelet[1415]: E0317 18:32:14.083425 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:14.786612 env[1208]: time="2025-03-17T18:32:14.786562519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:14.788201 env[1208]: time="2025-03-17T18:32:14.788172056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:14.789787 env[1208]: time="2025-03-17T18:32:14.789749497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:14.791799 env[1208]: time="2025-03-17T18:32:14.791768361Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:14.792422 env[1208]: time="2025-03-17T18:32:14.792385234Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Mar 17 18:32:14.795003 env[1208]: time="2025-03-17T18:32:14.794961141Z" level=info msg="CreateContainer within sandbox \"8ee532c05796e6dcb756390033880b0abbb83683da63b1d9b8f7ecfab5e517b1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 
17 18:32:14.805527 env[1208]: time="2025-03-17T18:32:14.805477598Z" level=info msg="CreateContainer within sandbox \"8ee532c05796e6dcb756390033880b0abbb83683da63b1d9b8f7ecfab5e517b1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"25c9242f775575a7bbf40c32154f9370f2bb0eda54bf86c54d9ed551f5186884\"" Mar 17 18:32:14.805957 env[1208]: time="2025-03-17T18:32:14.805930788Z" level=info msg="StartContainer for \"25c9242f775575a7bbf40c32154f9370f2bb0eda54bf86c54d9ed551f5186884\"" Mar 17 18:32:14.827652 systemd[1]: run-containerd-runc-k8s.io-25c9242f775575a7bbf40c32154f9370f2bb0eda54bf86c54d9ed551f5186884-runc.cDX4Yh.mount: Deactivated successfully. Mar 17 18:32:14.830341 systemd[1]: Started cri-containerd-25c9242f775575a7bbf40c32154f9370f2bb0eda54bf86c54d9ed551f5186884.scope. Mar 17 18:32:14.880510 env[1208]: time="2025-03-17T18:32:14.880399458Z" level=info msg="StartContainer for \"25c9242f775575a7bbf40c32154f9370f2bb0eda54bf86c54d9ed551f5186884\" returns successfully" Mar 17 18:32:15.084676 kubelet[1415]: E0317 18:32:15.084562 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:16.085719 kubelet[1415]: E0317 18:32:16.085668 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:17.085823 kubelet[1415]: E0317 18:32:17.085783 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:18.086605 kubelet[1415]: E0317 18:32:18.086563 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:18.961024 update_engine[1203]: I0317 18:32:18.960978 1203 update_attempter.cc:509] Updating boot flags... 
Mar 17 18:32:19.087151 kubelet[1415]: E0317 18:32:19.087086 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:20.087938 kubelet[1415]: E0317 18:32:20.087888 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:21.088770 kubelet[1415]: E0317 18:32:21.088715 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:22.063378 kubelet[1415]: E0317 18:32:22.063332 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:22.089743 kubelet[1415]: E0317 18:32:22.089714 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:23.090590 kubelet[1415]: E0317 18:32:23.090528 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:24.091559 kubelet[1415]: E0317 18:32:24.091521 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:24.944141 kubelet[1415]: I0317 18:32:24.944040 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.098536967 podStartE2EDuration="14.944022484s" podCreationTimestamp="2025-03-17 18:32:10 +0000 UTC" firstStartedPulling="2025-03-17 18:32:10.948184849 +0000 UTC m=+29.771691184" lastFinishedPulling="2025-03-17 18:32:14.793670326 +0000 UTC m=+33.617176701" observedRunningTime="2025-03-17 18:32:15.27563071 +0000 UTC m=+34.099137085" watchObservedRunningTime="2025-03-17 18:32:24.944022484 +0000 UTC m=+43.767528859" Mar 17 18:32:24.952564 systemd[1]: Created slice kubepods-besteffort-podd88224f3_38f4_44ba_9dd1_49d6bc6486b2.slice. 
Mar 17 18:32:25.033759 kubelet[1415]: I0317 18:32:25.033715 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5n6z\" (UniqueName: \"kubernetes.io/projected/d88224f3-38f4-44ba-9dd1-49d6bc6486b2-kube-api-access-m5n6z\") pod \"test-pod-1\" (UID: \"d88224f3-38f4-44ba-9dd1-49d6bc6486b2\") " pod="default/test-pod-1" Mar 17 18:32:25.033759 kubelet[1415]: I0317 18:32:25.033761 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ce4bc91-f0a8-4c22-b1cc-b5d88914cab9\" (UniqueName: \"kubernetes.io/nfs/d88224f3-38f4-44ba-9dd1-49d6bc6486b2-pvc-4ce4bc91-f0a8-4c22-b1cc-b5d88914cab9\") pod \"test-pod-1\" (UID: \"d88224f3-38f4-44ba-9dd1-49d6bc6486b2\") " pod="default/test-pod-1" Mar 17 18:32:25.092283 kubelet[1415]: E0317 18:32:25.092248 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:25.160172 kernel: FS-Cache: Loaded Mar 17 18:32:25.187634 kernel: RPC: Registered named UNIX socket transport module. Mar 17 18:32:25.187736 kernel: RPC: Registered udp transport module. Mar 17 18:32:25.187761 kernel: RPC: Registered tcp transport module. Mar 17 18:32:25.189145 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 18:32:25.230133 kernel: FS-Cache: Netfs 'nfs' registered for caching Mar 17 18:32:25.368597 kernel: NFS: Registering the id_resolver key type Mar 17 18:32:25.368728 kernel: Key type id_resolver registered Mar 17 18:32:25.368758 kernel: Key type id_legacy registered Mar 17 18:32:25.406862 nfsidmap[2770]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:32:25.411797 nfsidmap[2773]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Mar 17 18:32:25.555676 env[1208]: time="2025-03-17T18:32:25.555616436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d88224f3-38f4-44ba-9dd1-49d6bc6486b2,Namespace:default,Attempt:0,}" Mar 17 18:32:25.582936 systemd-networkd[1045]: lxc6f0a5ee38261: Link UP Mar 17 18:32:25.593148 kernel: eth0: renamed from tmp04639 Mar 17 18:32:25.602836 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:32:25.602916 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6f0a5ee38261: link becomes ready Mar 17 18:32:25.602837 systemd-networkd[1045]: lxc6f0a5ee38261: Gained carrier Mar 17 18:32:25.780480 env[1208]: time="2025-03-17T18:32:25.780419380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:25.780630 env[1208]: time="2025-03-17T18:32:25.780456270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:25.780630 env[1208]: time="2025-03-17T18:32:25.780466433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:25.780630 env[1208]: time="2025-03-17T18:32:25.780568863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/04639ed96b0bfc10cd1f380fb7dccfdeddaaedff9f96b2476e2711d01a5f80f5 pid=2810 runtime=io.containerd.runc.v2 Mar 17 18:32:25.793240 systemd[1]: Started cri-containerd-04639ed96b0bfc10cd1f380fb7dccfdeddaaedff9f96b2476e2711d01a5f80f5.scope. Mar 17 18:32:25.833845 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:25.848976 env[1208]: time="2025-03-17T18:32:25.848935781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d88224f3-38f4-44ba-9dd1-49d6bc6486b2,Namespace:default,Attempt:0,} returns sandbox id \"04639ed96b0bfc10cd1f380fb7dccfdeddaaedff9f96b2476e2711d01a5f80f5\"" Mar 17 18:32:25.850181 env[1208]: time="2025-03-17T18:32:25.850032741Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 18:32:26.092681 kubelet[1415]: E0317 18:32:26.092552 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:26.139223 env[1208]: time="2025-03-17T18:32:26.139175478Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:26.140495 env[1208]: time="2025-03-17T18:32:26.140460877Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:26.142084 env[1208]: time="2025-03-17T18:32:26.142057882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 
18:32:26.144138 env[1208]: time="2025-03-17T18:32:26.144099371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:26.144448 env[1208]: time="2025-03-17T18:32:26.144413499Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:f660a383148a8217a75a455efeb8bfd4cbe3afa737712cc0e25f27c03b770dd4\"" Mar 17 18:32:26.149480 env[1208]: time="2025-03-17T18:32:26.149449783Z" level=info msg="CreateContainer within sandbox \"04639ed96b0bfc10cd1f380fb7dccfdeddaaedff9f96b2476e2711d01a5f80f5\" for container &ContainerMetadata{Name:test,Attempt:0,}" Mar 17 18:32:26.160294 env[1208]: time="2025-03-17T18:32:26.160259758Z" level=info msg="CreateContainer within sandbox \"04639ed96b0bfc10cd1f380fb7dccfdeddaaedff9f96b2476e2711d01a5f80f5\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"24609980354102175829d42fd4d0690487514896a498589bc0f7d473e5af4d38\"" Mar 17 18:32:26.160980 env[1208]: time="2025-03-17T18:32:26.160934186Z" level=info msg="StartContainer for \"24609980354102175829d42fd4d0690487514896a498589bc0f7d473e5af4d38\"" Mar 17 18:32:26.177492 systemd[1]: Started cri-containerd-24609980354102175829d42fd4d0690487514896a498589bc0f7d473e5af4d38.scope. 
Mar 17 18:32:26.221376 env[1208]: time="2025-03-17T18:32:26.221333471Z" level=info msg="StartContainer for \"24609980354102175829d42fd4d0690487514896a498589bc0f7d473e5af4d38\" returns successfully" Mar 17 18:32:26.660354 systemd-networkd[1045]: lxc6f0a5ee38261: Gained IPv6LL Mar 17 18:32:27.093469 kubelet[1415]: E0317 18:32:27.093427 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:27.146598 systemd[1]: run-containerd-runc-k8s.io-24609980354102175829d42fd4d0690487514896a498589bc0f7d473e5af4d38-runc.ye7NTB.mount: Deactivated successfully. Mar 17 18:32:28.094538 kubelet[1415]: E0317 18:32:28.094494 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:29.094846 kubelet[1415]: E0317 18:32:29.094803 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:30.095522 kubelet[1415]: E0317 18:32:30.095481 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:31.096296 kubelet[1415]: E0317 18:32:31.096261 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:32.096909 kubelet[1415]: E0317 18:32:32.096863 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:33.097469 kubelet[1415]: E0317 18:32:33.097427 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:33.188345 kubelet[1415]: I0317 18:32:33.188288 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.889879932 podStartE2EDuration="23.18826497s" podCreationTimestamp="2025-03-17 18:32:10 +0000 UTC" 
firstStartedPulling="2025-03-17 18:32:25.849751339 +0000 UTC m=+44.673257714" lastFinishedPulling="2025-03-17 18:32:26.148136377 +0000 UTC m=+44.971642752" observedRunningTime="2025-03-17 18:32:26.293695411 +0000 UTC m=+45.117201786" watchObservedRunningTime="2025-03-17 18:32:33.18826497 +0000 UTC m=+52.011771345" Mar 17 18:32:33.202946 systemd[1]: run-containerd-runc-k8s.io-1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e-runc.mGjbRb.mount: Deactivated successfully. Mar 17 18:32:33.233970 env[1208]: time="2025-03-17T18:32:33.233906695Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:32:33.239835 env[1208]: time="2025-03-17T18:32:33.239799960Z" level=info msg="StopContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" with timeout 2 (s)" Mar 17 18:32:33.240152 env[1208]: time="2025-03-17T18:32:33.240102542Z" level=info msg="Stop container \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" with signal terminated" Mar 17 18:32:33.245649 systemd-networkd[1045]: lxc_health: Link DOWN Mar 17 18:32:33.245656 systemd-networkd[1045]: lxc_health: Lost carrier Mar 17 18:32:33.280938 systemd[1]: cri-containerd-1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e.scope: Deactivated successfully. Mar 17 18:32:33.281284 systemd[1]: cri-containerd-1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e.scope: Consumed 6.394s CPU time. Mar 17 18:32:33.296331 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e-rootfs.mount: Deactivated successfully. 
Mar 17 18:32:33.306546 env[1208]: time="2025-03-17T18:32:33.306482937Z" level=info msg="shim disconnected" id=1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e Mar 17 18:32:33.306546 env[1208]: time="2025-03-17T18:32:33.306530787Z" level=warning msg="cleaning up after shim disconnected" id=1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e namespace=k8s.io Mar 17 18:32:33.306546 env[1208]: time="2025-03-17T18:32:33.306540829Z" level=info msg="cleaning up dead shim" Mar 17 18:32:33.313862 env[1208]: time="2025-03-17T18:32:33.313824303Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2942 runtime=io.containerd.runc.v2\n" Mar 17 18:32:33.316179 env[1208]: time="2025-03-17T18:32:33.316134943Z" level=info msg="StopContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" returns successfully" Mar 17 18:32:33.316738 env[1208]: time="2025-03-17T18:32:33.316708222Z" level=info msg="StopPodSandbox for \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\"" Mar 17 18:32:33.316794 env[1208]: time="2025-03-17T18:32:33.316771355Z" level=info msg="Container to stop \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:33.316794 env[1208]: time="2025-03-17T18:32:33.316785478Z" level=info msg="Container to stop \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:33.316843 env[1208]: time="2025-03-17T18:32:33.316796240Z" level=info msg="Container to stop \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:33.316843 env[1208]: time="2025-03-17T18:32:33.316806922Z" level=info msg="Container to stop 
\"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:33.316843 env[1208]: time="2025-03-17T18:32:33.316817565Z" level=info msg="Container to stop \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:33.318450 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b-shm.mount: Deactivated successfully. Mar 17 18:32:33.323868 systemd[1]: cri-containerd-d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b.scope: Deactivated successfully. Mar 17 18:32:33.347165 env[1208]: time="2025-03-17T18:32:33.347102938Z" level=info msg="shim disconnected" id=d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b Mar 17 18:32:33.347931 env[1208]: time="2025-03-17T18:32:33.347848933Z" level=warning msg="cleaning up after shim disconnected" id=d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b namespace=k8s.io Mar 17 18:32:33.348038 env[1208]: time="2025-03-17T18:32:33.348020049Z" level=info msg="cleaning up dead shim" Mar 17 18:32:33.354654 env[1208]: time="2025-03-17T18:32:33.354621821Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2972 runtime=io.containerd.runc.v2\n" Mar 17 18:32:33.355078 env[1208]: time="2025-03-17T18:32:33.355049790Z" level=info msg="TearDown network for sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" successfully" Mar 17 18:32:33.355239 env[1208]: time="2025-03-17T18:32:33.355217745Z" level=info msg="StopPodSandbox for \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" returns successfully" Mar 17 18:32:33.380440 kubelet[1415]: I0317 18:32:33.380405 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-lib-modules\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380440 kubelet[1415]: I0317 18:32:33.380450 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e200104-0023-42bf-be43-d7ee1ed219e0-clustermesh-secrets\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 kubelet[1415]: I0317 18:32:33.380479 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-net\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 kubelet[1415]: I0317 18:32:33.380494 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-xtables-lock\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 kubelet[1415]: I0317 18:32:33.380511 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-bpf-maps\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 kubelet[1415]: I0317 18:32:33.380530 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-config-path\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 
kubelet[1415]: I0317 18:32:33.380551 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-etc-cni-netd\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380631 kubelet[1415]: I0317 18:32:33.380566 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-hostproc\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380582 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-cgroup\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380596 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-kernel\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380609 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cni-path\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380633 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-hubble-tls\") pod 
\"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380650 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6mlx\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-kube-api-access-g6mlx\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380771 kubelet[1415]: I0317 18:32:33.380664 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-run\") pod \"2e200104-0023-42bf-be43-d7ee1ed219e0\" (UID: \"2e200104-0023-42bf-be43-d7ee1ed219e0\") " Mar 17 18:32:33.380903 kubelet[1415]: I0317 18:32:33.380732 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.380903 kubelet[1415]: I0317 18:32:33.380763 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381479 kubelet[1415]: I0317 18:32:33.381010 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381479 kubelet[1415]: I0317 18:32:33.381050 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381479 kubelet[1415]: I0317 18:32:33.381031 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381479 kubelet[1415]: I0317 18:32:33.381073 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381479 kubelet[1415]: I0317 18:32:33.381084 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381714 kubelet[1415]: I0317 18:32:33.381091 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381714 kubelet[1415]: I0317 18:32:33.381137 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.381714 kubelet[1415]: I0317 18:32:33.381178 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:33.382979 kubelet[1415]: I0317 18:32:33.382926 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:32:33.384043 kubelet[1415]: I0317 18:32:33.384005 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-kube-api-access-g6mlx" (OuterVolumeSpecName: "kube-api-access-g6mlx") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "kube-api-access-g6mlx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:32:33.384195 kubelet[1415]: I0317 18:32:33.384166 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e200104-0023-42bf-be43-d7ee1ed219e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:32:33.384445 kubelet[1415]: I0317 18:32:33.384417 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2e200104-0023-42bf-be43-d7ee1ed219e0" (UID: "2e200104-0023-42bf-be43-d7ee1ed219e0"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:32:33.481830 kubelet[1415]: I0317 18:32:33.481795 1415 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-lib-modules\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482005 kubelet[1415]: I0317 18:32:33.481985 1415 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-net\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482075 kubelet[1415]: I0317 18:32:33.482064 1415 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2e200104-0023-42bf-be43-d7ee1ed219e0-clustermesh-secrets\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482209 kubelet[1415]: I0317 18:32:33.482195 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-config-path\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482289 kubelet[1415]: I0317 18:32:33.482276 1415 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-xtables-lock\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482354 kubelet[1415]: I0317 18:32:33.482344 1415 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-bpf-maps\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482416 kubelet[1415]: I0317 18:32:33.482405 1415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g6mlx\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-kube-api-access-g6mlx\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482475 
kubelet[1415]: I0317 18:32:33.482466 1415 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-etc-cni-netd\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482547 kubelet[1415]: I0317 18:32:33.482535 1415 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-hostproc\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482613 kubelet[1415]: I0317 18:32:33.482602 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-cgroup\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482672 kubelet[1415]: I0317 18:32:33.482661 1415 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-host-proc-sys-kernel\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482731 kubelet[1415]: I0317 18:32:33.482720 1415 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cni-path\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482788 kubelet[1415]: I0317 18:32:33.482778 1415 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2e200104-0023-42bf-be43-d7ee1ed219e0-hubble-tls\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:33.482845 kubelet[1415]: I0317 18:32:33.482835 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2e200104-0023-42bf-be43-d7ee1ed219e0-cilium-run\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:34.097913 kubelet[1415]: E0317 18:32:34.097871 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 18:32:34.199298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b-rootfs.mount: Deactivated successfully. Mar 17 18:32:34.199392 systemd[1]: var-lib-kubelet-pods-2e200104\x2d0023\x2d42bf\x2dbe43\x2dd7ee1ed219e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dg6mlx.mount: Deactivated successfully. Mar 17 18:32:34.199451 systemd[1]: var-lib-kubelet-pods-2e200104\x2d0023\x2d42bf\x2dbe43\x2dd7ee1ed219e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:32:34.199506 systemd[1]: var-lib-kubelet-pods-2e200104\x2d0023\x2d42bf\x2dbe43\x2dd7ee1ed219e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:32:34.209595 systemd[1]: Removed slice kubepods-burstable-pod2e200104_0023_42bf_be43_d7ee1ed219e0.slice. Mar 17 18:32:34.209677 systemd[1]: kubepods-burstable-pod2e200104_0023_42bf_be43_d7ee1ed219e0.slice: Consumed 6.579s CPU time. 
Mar 17 18:32:34.302072 kubelet[1415]: I0317 18:32:34.302044 1415 scope.go:117] "RemoveContainer" containerID="1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e" Mar 17 18:32:34.304290 env[1208]: time="2025-03-17T18:32:34.304240291Z" level=info msg="RemoveContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\"" Mar 17 18:32:34.310243 env[1208]: time="2025-03-17T18:32:34.310197402Z" level=info msg="RemoveContainer for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" returns successfully" Mar 17 18:32:34.310477 kubelet[1415]: I0317 18:32:34.310446 1415 scope.go:117] "RemoveContainer" containerID="cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c" Mar 17 18:32:34.311597 env[1208]: time="2025-03-17T18:32:34.311568077Z" level=info msg="RemoveContainer for \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\"" Mar 17 18:32:34.314355 env[1208]: time="2025-03-17T18:32:34.314327669Z" level=info msg="RemoveContainer for \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\" returns successfully" Mar 17 18:32:34.314556 kubelet[1415]: I0317 18:32:34.314531 1415 scope.go:117] "RemoveContainer" containerID="5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746" Mar 17 18:32:34.316251 env[1208]: time="2025-03-17T18:32:34.316217207Z" level=info msg="RemoveContainer for \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\"" Mar 17 18:32:34.318793 env[1208]: time="2025-03-17T18:32:34.318764236Z" level=info msg="RemoveContainer for \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\" returns successfully" Mar 17 18:32:34.318991 kubelet[1415]: I0317 18:32:34.318972 1415 scope.go:117] "RemoveContainer" containerID="05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603" Mar 17 18:32:34.320255 env[1208]: time="2025-03-17T18:32:34.320226209Z" level=info msg="RemoveContainer for 
\"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\"" Mar 17 18:32:34.322431 env[1208]: time="2025-03-17T18:32:34.322376759Z" level=info msg="RemoveContainer for \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\" returns successfully" Mar 17 18:32:34.322593 kubelet[1415]: I0317 18:32:34.322570 1415 scope.go:117] "RemoveContainer" containerID="f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2" Mar 17 18:32:34.323719 env[1208]: time="2025-03-17T18:32:34.323689421Z" level=info msg="RemoveContainer for \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\"" Mar 17 18:32:34.325762 env[1208]: time="2025-03-17T18:32:34.325732710Z" level=info msg="RemoveContainer for \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\" returns successfully" Mar 17 18:32:34.325945 kubelet[1415]: I0317 18:32:34.325923 1415 scope.go:117] "RemoveContainer" containerID="1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e" Mar 17 18:32:34.326247 env[1208]: time="2025-03-17T18:32:34.326176439Z" level=error msg="ContainerStatus for \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\": not found" Mar 17 18:32:34.326431 kubelet[1415]: E0317 18:32:34.326407 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\": not found" containerID="1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e" Mar 17 18:32:34.326547 kubelet[1415]: I0317 18:32:34.326506 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e"} err="failed to get container status 
\"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\": rpc error: code = NotFound desc = an error occurred when try to find container \"1578fa5e04b3a039feadd67df769f8d77d65b256a0a0e2a6d6636b4d3926c44e\": not found" Mar 17 18:32:34.326616 kubelet[1415]: I0317 18:32:34.326602 1415 scope.go:117] "RemoveContainer" containerID="cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c" Mar 17 18:32:34.326886 env[1208]: time="2025-03-17T18:32:34.326839371Z" level=error msg="ContainerStatus for \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\": not found" Mar 17 18:32:34.327036 kubelet[1415]: E0317 18:32:34.327015 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\": not found" containerID="cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c" Mar 17 18:32:34.327155 kubelet[1415]: I0317 18:32:34.327109 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c"} err="failed to get container status \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb9b6eba520d1ee4cf022d71ad90eea096b3670280e1be18e8d3c3306d73cc5c\": not found" Mar 17 18:32:34.327221 kubelet[1415]: I0317 18:32:34.327209 1415 scope.go:117] "RemoveContainer" containerID="5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746" Mar 17 18:32:34.327470 env[1208]: time="2025-03-17T18:32:34.327425649Z" level=error msg="ContainerStatus for \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\": not found" Mar 17 18:32:34.327632 kubelet[1415]: E0317 18:32:34.327612 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\": not found" containerID="5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746" Mar 17 18:32:34.327719 kubelet[1415]: I0317 18:32:34.327701 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746"} err="failed to get container status \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\": rpc error: code = NotFound desc = an error occurred when try to find container \"5faebb7366f46cea09277ab6058a1d1f884d8f9a4205ca5c6607b769ebbd3746\": not found" Mar 17 18:32:34.327796 kubelet[1415]: I0317 18:32:34.327783 1415 scope.go:117] "RemoveContainer" containerID="05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603" Mar 17 18:32:34.328018 env[1208]: time="2025-03-17T18:32:34.327975839Z" level=error msg="ContainerStatus for \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\": not found" Mar 17 18:32:34.328180 kubelet[1415]: E0317 18:32:34.328161 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\": not found" containerID="05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603" Mar 17 18:32:34.328300 kubelet[1415]: I0317 18:32:34.328267 1415 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603"} err="failed to get container status \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\": rpc error: code = NotFound desc = an error occurred when try to find container \"05c6ed49008f658ac787ddb8fc598076d26ea68dea4a718cf5a47a9c0e9d3603\": not found" Mar 17 18:32:34.328371 kubelet[1415]: I0317 18:32:34.328359 1415 scope.go:117] "RemoveContainer" containerID="f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2" Mar 17 18:32:34.328591 env[1208]: time="2025-03-17T18:32:34.328549553Z" level=error msg="ContainerStatus for \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\": not found" Mar 17 18:32:34.328751 kubelet[1415]: E0317 18:32:34.328725 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\": not found" containerID="f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2" Mar 17 18:32:34.328837 kubelet[1415]: I0317 18:32:34.328814 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2"} err="failed to get container status \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\": rpc error: code = NotFound desc = an error occurred when try to find container \"f1c050088f8dde750f066371d3e34849c1733211cdfe1c8b493ef089bf0914c2\": not found" Mar 17 18:32:35.098953 kubelet[1415]: E0317 18:32:35.098913 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 18:32:36.065102 kubelet[1415]: I0317 18:32:36.065065 1415 memory_manager.go:355] "RemoveStaleState removing state" podUID="2e200104-0023-42bf-be43-d7ee1ed219e0" containerName="cilium-agent" Mar 17 18:32:36.070138 systemd[1]: Created slice kubepods-besteffort-podc23c0050_e401_4196_98c3_4ec3cbef3bbe.slice. Mar 17 18:32:36.073397 systemd[1]: Created slice kubepods-burstable-podf1e405fe_e5e9_420a_a16c_6a5c6db33a60.slice. Mar 17 18:32:36.096003 kubelet[1415]: I0317 18:32:36.095970 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-etc-cni-netd\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096221 kubelet[1415]: I0317 18:32:36.096201 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-kernel\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096330 kubelet[1415]: I0317 18:32:36.096311 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c23c0050-e401-4196-98c3-4ec3cbef3bbe-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dxdmx\" (UID: \"c23c0050-e401-4196-98c3-4ec3cbef3bbe\") " pod="kube-system/cilium-operator-6c4d7847fc-dxdmx" Mar 17 18:32:36.096414 kubelet[1415]: I0317 18:32:36.096398 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-run\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " 
pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096482 kubelet[1415]: I0317 18:32:36.096469 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-config-path\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096561 kubelet[1415]: I0317 18:32:36.096548 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqlx\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-kube-api-access-flqlx\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096672 kubelet[1415]: I0317 18:32:36.096655 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnppv\" (UniqueName: \"kubernetes.io/projected/c23c0050-e401-4196-98c3-4ec3cbef3bbe-kube-api-access-bnppv\") pod \"cilium-operator-6c4d7847fc-dxdmx\" (UID: \"c23c0050-e401-4196-98c3-4ec3cbef3bbe\") " pod="kube-system/cilium-operator-6c4d7847fc-dxdmx" Mar 17 18:32:36.096760 kubelet[1415]: I0317 18:32:36.096746 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-xtables-lock\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.096832 kubelet[1415]: I0317 18:32:36.096820 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-ipsec-secrets\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 
18:32:36.096915 kubelet[1415]: I0317 18:32:36.096903 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-net\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097068 kubelet[1415]: I0317 18:32:36.097052 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hubble-tls\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097181 kubelet[1415]: I0317 18:32:36.097166 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-bpf-maps\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097276 kubelet[1415]: I0317 18:32:36.097261 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-clustermesh-secrets\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097358 kubelet[1415]: I0317 18:32:36.097343 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cni-path\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097443 kubelet[1415]: I0317 18:32:36.097429 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-lib-modules\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097534 kubelet[1415]: I0317 18:32:36.097521 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hostproc\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.097625 kubelet[1415]: I0317 18:32:36.097600 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-cgroup\") pod \"cilium-mj8sf\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " pod="kube-system/cilium-mj8sf" Mar 17 18:32:36.100128 kubelet[1415]: E0317 18:32:36.100097 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:36.214081 kubelet[1415]: I0317 18:32:36.213950 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e200104-0023-42bf-be43-d7ee1ed219e0" path="/var/lib/kubelet/pods/2e200104-0023-42bf-be43-d7ee1ed219e0/volumes" Mar 17 18:32:36.220867 kubelet[1415]: E0317 18:32:36.220838 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:36.221449 env[1208]: time="2025-03-17T18:32:36.221410224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mj8sf,Uid:f1e405fe-e5e9-420a-a16c-6a5c6db33a60,Namespace:kube-system,Attempt:0,}" Mar 17 18:32:36.232310 env[1208]: time="2025-03-17T18:32:36.232234956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:36.232417 env[1208]: time="2025-03-17T18:32:36.232320812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:36.232417 env[1208]: time="2025-03-17T18:32:36.232347977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:36.232664 env[1208]: time="2025-03-17T18:32:36.232590062Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592 pid=3000 runtime=io.containerd.runc.v2 Mar 17 18:32:36.242009 systemd[1]: Started cri-containerd-05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592.scope. Mar 17 18:32:36.290745 env[1208]: time="2025-03-17T18:32:36.290696984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mj8sf,Uid:f1e405fe-e5e9-420a-a16c-6a5c6db33a60,Namespace:kube-system,Attempt:0,} returns sandbox id \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\"" Mar 17 18:32:36.291521 kubelet[1415]: E0317 18:32:36.291500 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:36.293395 env[1208]: time="2025-03-17T18:32:36.293365960Z" level=info msg="CreateContainer within sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:32:36.302388 env[1208]: time="2025-03-17T18:32:36.302345229Z" level=info msg="CreateContainer within sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\"" Mar 17 18:32:36.302968 env[1208]: time="2025-03-17T18:32:36.302939260Z" level=info msg="StartContainer for \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\"" Mar 17 18:32:36.318327 systemd[1]: Started cri-containerd-d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb.scope. Mar 17 18:32:36.339211 systemd[1]: cri-containerd-d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb.scope: Deactivated successfully. Mar 17 18:32:36.359058 env[1208]: time="2025-03-17T18:32:36.359009083Z" level=info msg="shim disconnected" id=d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb Mar 17 18:32:36.359058 env[1208]: time="2025-03-17T18:32:36.359059933Z" level=warning msg="cleaning up after shim disconnected" id=d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb namespace=k8s.io Mar 17 18:32:36.359294 env[1208]: time="2025-03-17T18:32:36.359069135Z" level=info msg="cleaning up dead shim" Mar 17 18:32:36.365855 env[1208]: time="2025-03-17T18:32:36.365799586Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3058 runtime=io.containerd.runc.v2\ntime=\"2025-03-17T18:32:36Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Mar 17 18:32:36.366171 env[1208]: time="2025-03-17T18:32:36.366059194Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Mar 17 18:32:36.366393 env[1208]: time="2025-03-17T18:32:36.366343927Z" level=error msg="Failed to pipe stdout of container \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\"" error="reading from a closed fifo" Mar 17 18:32:36.366444 env[1208]: time="2025-03-17T18:32:36.366390576Z" 
level=error msg="Failed to pipe stderr of container \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\"" error="reading from a closed fifo" Mar 17 18:32:36.368000 env[1208]: time="2025-03-17T18:32:36.367951106Z" level=error msg="StartContainer for \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Mar 17 18:32:36.368352 kubelet[1415]: E0317 18:32:36.368312 1415 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb" Mar 17 18:32:36.368710 kubelet[1415]: E0317 18:32:36.368689 1415 kuberuntime_manager.go:1341] "Unhandled Error" err=< Mar 17 18:32:36.368710 kubelet[1415]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Mar 17 18:32:36.368710 kubelet[1415]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Mar 17 18:32:36.368710 kubelet[1415]: rm /hostbin/cilium-mount Mar 17 18:32:36.368857 kubelet[1415]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flqlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-mj8sf_kube-system(f1e405fe-e5e9-420a-a16c-6a5c6db33a60): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Mar 17 18:32:36.368857 kubelet[1415]: > logger="UnhandledError" Mar 17 18:32:36.369842 kubelet[1415]: E0317 18:32:36.369803 1415 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mj8sf" podUID="f1e405fe-e5e9-420a-a16c-6a5c6db33a60" Mar 17 18:32:36.372503 kubelet[1415]: E0317 18:32:36.372475 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:36.372978 env[1208]: time="2025-03-17T18:32:36.372920790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dxdmx,Uid:c23c0050-e401-4196-98c3-4ec3cbef3bbe,Namespace:kube-system,Attempt:0,}" Mar 17 18:32:36.383682 env[1208]: time="2025-03-17T18:32:36.383619538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:36.383794 env[1208]: time="2025-03-17T18:32:36.383656665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:36.383794 env[1208]: time="2025-03-17T18:32:36.383673509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:36.383865 env[1208]: time="2025-03-17T18:32:36.383795131Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86e9b8e43a829d5f2f55413e74b33c3d036e986e7127dea3c02dfa8a1af3542b pid=3079 runtime=io.containerd.runc.v2 Mar 17 18:32:36.393016 systemd[1]: Started cri-containerd-86e9b8e43a829d5f2f55413e74b33c3d036e986e7127dea3c02dfa8a1af3542b.scope. 
Mar 17 18:32:36.426421 env[1208]: time="2025-03-17T18:32:36.426383048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dxdmx,Uid:c23c0050-e401-4196-98c3-4ec3cbef3bbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"86e9b8e43a829d5f2f55413e74b33c3d036e986e7127dea3c02dfa8a1af3542b\"" Mar 17 18:32:36.427653 kubelet[1415]: E0317 18:32:36.427170 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:36.428487 env[1208]: time="2025-03-17T18:32:36.428457994Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:32:37.100727 kubelet[1415]: E0317 18:32:37.100684 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:37.192614 kubelet[1415]: E0317 18:32:37.192570 1415 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:32:37.310883 env[1208]: time="2025-03-17T18:32:37.309896720Z" level=info msg="StopPodSandbox for \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\"" Mar 17 18:32:37.310883 env[1208]: time="2025-03-17T18:32:37.309951370Z" level=info msg="Container to stop \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:32:37.311604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592-shm.mount: Deactivated successfully. Mar 17 18:32:37.320358 systemd[1]: cri-containerd-05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592.scope: Deactivated successfully. 
Mar 17 18:32:37.338858 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592-rootfs.mount: Deactivated successfully. Mar 17 18:32:37.345943 env[1208]: time="2025-03-17T18:32:37.345886100Z" level=info msg="shim disconnected" id=05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592 Mar 17 18:32:37.345943 env[1208]: time="2025-03-17T18:32:37.345938750Z" level=warning msg="cleaning up after shim disconnected" id=05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592 namespace=k8s.io Mar 17 18:32:37.346141 env[1208]: time="2025-03-17T18:32:37.345948031Z" level=info msg="cleaning up dead shim" Mar 17 18:32:37.354058 env[1208]: time="2025-03-17T18:32:37.353485904Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3131 runtime=io.containerd.runc.v2\n" Mar 17 18:32:37.354058 env[1208]: time="2025-03-17T18:32:37.353761154Z" level=info msg="TearDown network for sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" successfully" Mar 17 18:32:37.354058 env[1208]: time="2025-03-17T18:32:37.353780877Z" level=info msg="StopPodSandbox for \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" returns successfully" Mar 17 18:32:37.406423 kubelet[1415]: I0317 18:32:37.406375 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-cgroup\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406423 kubelet[1415]: I0317 18:32:37.406428 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hubble-tls\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: 
\"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406450 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-config-path\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406468 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flqlx\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-kube-api-access-flqlx\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406486 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-ipsec-secrets\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406502 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-clustermesh-secrets\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406516 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-kernel\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406531 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-xtables-lock\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406545 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cni-path\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406559 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hostproc\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406573 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-run\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406588 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-etc-cni-netd\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406601 kubelet[1415]: I0317 18:32:37.406605 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-net\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406850 kubelet[1415]: I0317 18:32:37.406621 1415 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-bpf-maps\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406850 kubelet[1415]: I0317 18:32:37.406635 1415 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-lib-modules\") pod \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\" (UID: \"f1e405fe-e5e9-420a-a16c-6a5c6db33a60\") " Mar 17 18:32:37.406850 kubelet[1415]: I0317 18:32:37.406699 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.406850 kubelet[1415]: I0317 18:32:37.406723 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.407008 kubelet[1415]: I0317 18:32:37.406957 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.407081 kubelet[1415]: I0317 18:32:37.407042 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.407173 kubelet[1415]: I0317 18:32:37.407067 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.407259 kubelet[1415]: I0317 18:32:37.407245 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.410178 kubelet[1415]: I0317 18:32:37.408949 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 17 18:32:37.410178 kubelet[1415]: I0317 18:32:37.409000 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.410178 kubelet[1415]: I0317 18:32:37.409020 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.410178 kubelet[1415]: I0317 18:32:37.409036 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.410441 kubelet[1415]: I0317 18:32:37.410417 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 17 18:32:37.410543 systemd[1]: var-lib-kubelet-pods-f1e405fe\x2de5e9\x2d420a\x2da16c\x2d6a5c6db33a60-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:32:37.411067 kubelet[1415]: I0317 18:32:37.411042 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:32:37.411473 kubelet[1415]: I0317 18:32:37.411450 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-kube-api-access-flqlx" (OuterVolumeSpecName: "kube-api-access-flqlx") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "kube-api-access-flqlx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:32:37.411563 kubelet[1415]: I0317 18:32:37.411491 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 17 18:32:37.411625 kubelet[1415]: I0317 18:32:37.411548 1415 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f1e405fe-e5e9-420a-a16c-6a5c6db33a60" (UID: "f1e405fe-e5e9-420a-a16c-6a5c6db33a60"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 17 18:32:37.412361 systemd[1]: var-lib-kubelet-pods-f1e405fe\x2de5e9\x2d420a\x2da16c\x2d6a5c6db33a60-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dflqlx.mount: Deactivated successfully. Mar 17 18:32:37.412449 systemd[1]: var-lib-kubelet-pods-f1e405fe\x2de5e9\x2d420a\x2da16c\x2d6a5c6db33a60-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:32:37.412498 systemd[1]: var-lib-kubelet-pods-f1e405fe\x2de5e9\x2d420a\x2da16c\x2d6a5c6db33a60-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:32:37.507807 kubelet[1415]: I0317 18:32:37.507752 1415 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hubble-tls\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.507807 kubelet[1415]: I0317 18:32:37.507787 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-cgroup\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.507807 kubelet[1415]: I0317 18:32:37.507798 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-ipsec-secrets\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.507807 kubelet[1415]: I0317 18:32:37.507811 1415 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-clustermesh-secrets\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.507807 kubelet[1415]: I0317 18:32:37.507819 1415 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-kernel\") on node 
\"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507827 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-config-path\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507837 1415 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-flqlx\" (UniqueName: \"kubernetes.io/projected/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-kube-api-access-flqlx\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507845 1415 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-xtables-lock\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507854 1415 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cni-path\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507870 1415 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-hostproc\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507878 1415 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-cilium-run\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507885 1415 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-host-proc-sys-net\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507893 1415 
reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-bpf-maps\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507900 1415 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-etc-cni-netd\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:37.508066 kubelet[1415]: I0317 18:32:37.507908 1415 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1e405fe-e5e9-420a-a16c-6a5c6db33a60-lib-modules\") on node \"10.0.0.128\" DevicePath \"\"" Mar 17 18:32:38.101396 kubelet[1415]: E0317 18:32:38.101351 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:38.209506 systemd[1]: Removed slice kubepods-burstable-podf1e405fe_e5e9_420a_a16c_6a5c6db33a60.slice. Mar 17 18:32:38.314062 kubelet[1415]: I0317 18:32:38.313924 1415 scope.go:117] "RemoveContainer" containerID="d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb" Mar 17 18:32:38.314827 env[1208]: time="2025-03-17T18:32:38.314741317Z" level=info msg="RemoveContainer for \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\"" Mar 17 18:32:38.317400 env[1208]: time="2025-03-17T18:32:38.317306002Z" level=info msg="RemoveContainer for \"d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb\" returns successfully" Mar 17 18:32:38.349022 kubelet[1415]: I0317 18:32:38.348880 1415 memory_manager.go:355] "RemoveStaleState removing state" podUID="f1e405fe-e5e9-420a-a16c-6a5c6db33a60" containerName="mount-cgroup" Mar 17 18:32:38.355097 systemd[1]: Created slice kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice. 
Mar 17 18:32:38.413701 kubelet[1415]: I0317 18:32:38.413644 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-hostproc\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413701 kubelet[1415]: I0317 18:32:38.413698 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-etc-cni-netd\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413722 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-bpf-maps\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413738 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd2793c9-69b7-45c4-8fb9-508198a53cdd-hubble-tls\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413775 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-xtables-lock\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413792 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-host-proc-sys-kernel\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413836 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-cilium-run\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413858 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-host-proc-sys-net\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.413878 kubelet[1415]: I0317 18:32:38.413875 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd2793c9-69b7-45c4-8fb9-508198a53cdd-cilium-config-path\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413898 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd2793c9-69b7-45c4-8fb9-508198a53cdd-cilium-ipsec-secrets\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413914 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dwz\" (UniqueName: 
\"kubernetes.io/projected/fd2793c9-69b7-45c4-8fb9-508198a53cdd-kube-api-access-r2dwz\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413935 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-cilium-cgroup\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413956 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-lib-modules\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413972 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd2793c9-69b7-45c4-8fb9-508198a53cdd-clustermesh-secrets\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.414052 kubelet[1415]: I0317 18:32:38.413988 1415 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd2793c9-69b7-45c4-8fb9-508198a53cdd-cni-path\") pod \"cilium-lc9f8\" (UID: \"fd2793c9-69b7-45c4-8fb9-508198a53cdd\") " pod="kube-system/cilium-lc9f8" Mar 17 18:32:38.667318 kubelet[1415]: E0317 18:32:38.667189 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:38.668110 env[1208]: time="2025-03-17T18:32:38.667667704Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lc9f8,Uid:fd2793c9-69b7-45c4-8fb9-508198a53cdd,Namespace:kube-system,Attempt:0,}" Mar 17 18:32:38.678559 env[1208]: time="2025-03-17T18:32:38.678473658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:38.678559 env[1208]: time="2025-03-17T18:32:38.678526388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:38.678559 env[1208]: time="2025-03-17T18:32:38.678537269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:38.678937 env[1208]: time="2025-03-17T18:32:38.678900772Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33 pid=3161 runtime=io.containerd.runc.v2 Mar 17 18:32:38.689612 systemd[1]: Started cri-containerd-e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33.scope. 
Mar 17 18:32:38.719043 env[1208]: time="2025-03-17T18:32:38.718999649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lc9f8,Uid:fd2793c9-69b7-45c4-8fb9-508198a53cdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\"" Mar 17 18:32:38.719981 kubelet[1415]: E0317 18:32:38.719954 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:38.722033 env[1208]: time="2025-03-17T18:32:38.721995209Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:32:38.750292 env[1208]: time="2025-03-17T18:32:38.750222466Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8\"" Mar 17 18:32:38.750746 env[1208]: time="2025-03-17T18:32:38.750720712Z" level=info msg="StartContainer for \"f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8\"" Mar 17 18:32:38.758254 env[1208]: time="2025-03-17T18:32:38.758108474Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:38.759436 env[1208]: time="2025-03-17T18:32:38.759388936Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:38.761393 env[1208]: time="2025-03-17T18:32:38.761364198Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:38.761822 env[1208]: time="2025-03-17T18:32:38.761783991Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:32:38.764157 env[1208]: time="2025-03-17T18:32:38.764124677Z" level=info msg="CreateContainer within sandbox \"86e9b8e43a829d5f2f55413e74b33c3d036e986e7127dea3c02dfa8a1af3542b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:32:38.775028 systemd[1]: Started cri-containerd-f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8.scope. Mar 17 18:32:38.786555 env[1208]: time="2025-03-17T18:32:38.786496398Z" level=info msg="CreateContainer within sandbox \"86e9b8e43a829d5f2f55413e74b33c3d036e986e7127dea3c02dfa8a1af3542b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1c06bc7854e6332accbd56c14bc52105189801d4a3784f617c0ab19aa4017d7d\"" Mar 17 18:32:38.786985 env[1208]: time="2025-03-17T18:32:38.786954478Z" level=info msg="StartContainer for \"1c06bc7854e6332accbd56c14bc52105189801d4a3784f617c0ab19aa4017d7d\"" Mar 17 18:32:38.803834 systemd[1]: Started cri-containerd-1c06bc7854e6332accbd56c14bc52105189801d4a3784f617c0ab19aa4017d7d.scope. Mar 17 18:32:38.823379 env[1208]: time="2025-03-17T18:32:38.823319867Z" level=info msg="StartContainer for \"f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8\" returns successfully" Mar 17 18:32:38.843526 systemd[1]: cri-containerd-f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8.scope: Deactivated successfully. 
Mar 17 18:32:38.851307 env[1208]: time="2025-03-17T18:32:38.851249792Z" level=info msg="StartContainer for \"1c06bc7854e6332accbd56c14bc52105189801d4a3784f617c0ab19aa4017d7d\" returns successfully" Mar 17 18:32:38.869264 env[1208]: time="2025-03-17T18:32:38.869114731Z" level=info msg="shim disconnected" id=f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8 Mar 17 18:32:38.869264 env[1208]: time="2025-03-17T18:32:38.869255556Z" level=warning msg="cleaning up after shim disconnected" id=f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8 namespace=k8s.io Mar 17 18:32:38.869485 env[1208]: time="2025-03-17T18:32:38.869268358Z" level=info msg="cleaning up dead shim" Mar 17 18:32:38.875557 env[1208]: time="2025-03-17T18:32:38.875523403Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3283 runtime=io.containerd.runc.v2\n" Mar 17 18:32:39.101793 kubelet[1415]: E0317 18:32:39.101742 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:39.318070 kubelet[1415]: E0317 18:32:39.317610 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:39.319638 kubelet[1415]: E0317 18:32:39.319470 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:39.321666 env[1208]: time="2025-03-17T18:32:39.321624423Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:32:39.327913 kubelet[1415]: I0317 18:32:39.327792 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-operator-6c4d7847fc-dxdmx" podStartSLOduration=0.993183875 podStartE2EDuration="3.327778416s" podCreationTimestamp="2025-03-17 18:32:36 +0000 UTC" firstStartedPulling="2025-03-17 18:32:36.428194625 +0000 UTC m=+55.251701000" lastFinishedPulling="2025-03-17 18:32:38.762789166 +0000 UTC m=+57.586295541" observedRunningTime="2025-03-17 18:32:39.327589664 +0000 UTC m=+58.151096039" watchObservedRunningTime="2025-03-17 18:32:39.327778416 +0000 UTC m=+58.151284791" Mar 17 18:32:39.336329 env[1208]: time="2025-03-17T18:32:39.336278362Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14\"" Mar 17 18:32:39.336956 env[1208]: time="2025-03-17T18:32:39.336882944Z" level=info msg="StartContainer for \"06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14\"" Mar 17 18:32:39.356838 systemd[1]: Started cri-containerd-06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14.scope. Mar 17 18:32:39.389014 env[1208]: time="2025-03-17T18:32:39.388967766Z" level=info msg="StartContainer for \"06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14\" returns successfully" Mar 17 18:32:39.396076 systemd[1]: cri-containerd-06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14.scope: Deactivated successfully. 
Mar 17 18:32:39.413230 env[1208]: time="2025-03-17T18:32:39.413175950Z" level=info msg="shim disconnected" id=06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14 Mar 17 18:32:39.413230 env[1208]: time="2025-03-17T18:32:39.413218837Z" level=warning msg="cleaning up after shim disconnected" id=06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14 namespace=k8s.io Mar 17 18:32:39.413230 env[1208]: time="2025-03-17T18:32:39.413230119Z" level=info msg="cleaning up dead shim" Mar 17 18:32:39.419534 env[1208]: time="2025-03-17T18:32:39.419501771Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3345 runtime=io.containerd.runc.v2\n" Mar 17 18:32:39.463727 kubelet[1415]: W0317 18:32:39.463659 1415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1e405fe_e5e9_420a_a16c_6a5c6db33a60.slice/cri-containerd-d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb.scope WatchSource:0}: container "d1c075e9a8550b772b25b8a951a79794fc8bddf6d6b4b8c217dc02c89a6698cb" in namespace "k8s.io": not found Mar 17 18:32:40.102405 kubelet[1415]: E0317 18:32:40.102346 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 18:32:40.203021 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14-rootfs.mount: Deactivated successfully. 
Mar 17 18:32:40.206336 kubelet[1415]: I0317 18:32:40.206285 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1e405fe-e5e9-420a-a16c-6a5c6db33a60" path="/var/lib/kubelet/pods/f1e405fe-e5e9-420a-a16c-6a5c6db33a60/volumes" Mar 17 18:32:40.323400 kubelet[1415]: E0317 18:32:40.323373 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:40.323601 kubelet[1415]: E0317 18:32:40.323446 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:40.325254 env[1208]: time="2025-03-17T18:32:40.325212963Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:32:40.338397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1241394109.mount: Deactivated successfully. Mar 17 18:32:40.343575 env[1208]: time="2025-03-17T18:32:40.343517899Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8\"" Mar 17 18:32:40.344177 env[1208]: time="2025-03-17T18:32:40.344150442Z" level=info msg="StartContainer for \"3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8\"" Mar 17 18:32:40.360994 systemd[1]: Started cri-containerd-3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8.scope. 
Mar 17 18:32:40.387786 env[1208]: time="2025-03-17T18:32:40.387736007Z" level=info msg="StartContainer for \"3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8\" returns successfully"
Mar 17 18:32:40.390415 systemd[1]: cri-containerd-3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8.scope: Deactivated successfully.
Mar 17 18:32:40.410532 env[1208]: time="2025-03-17T18:32:40.410483425Z" level=info msg="shim disconnected" id=3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8
Mar 17 18:32:40.410532 env[1208]: time="2025-03-17T18:32:40.410526752Z" level=warning msg="cleaning up after shim disconnected" id=3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8 namespace=k8s.io
Mar 17 18:32:40.410532 env[1208]: time="2025-03-17T18:32:40.410535474Z" level=info msg="cleaning up dead shim"
Mar 17 18:32:40.418298 env[1208]: time="2025-03-17T18:32:40.418259290Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3405 runtime=io.containerd.runc.v2\n"
Mar 17 18:32:41.102827 kubelet[1415]: E0317 18:32:41.102775 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:41.203146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8-rootfs.mount: Deactivated successfully.
Mar 17 18:32:41.326657 kubelet[1415]: E0317 18:32:41.326450 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:41.328981 env[1208]: time="2025-03-17T18:32:41.328925446Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:32:41.343347 env[1208]: time="2025-03-17T18:32:41.343302272Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983\""
Mar 17 18:32:41.344009 env[1208]: time="2025-03-17T18:32:41.343970697Z" level=info msg="StartContainer for \"69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983\""
Mar 17 18:32:41.367452 systemd[1]: Started cri-containerd-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983.scope.
Mar 17 18:32:41.394105 systemd[1]: cri-containerd-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983.scope: Deactivated successfully.
Mar 17 18:32:41.398270 env[1208]: time="2025-03-17T18:32:41.398194444Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice/cri-containerd-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983.scope/memory.events\": no such file or directory"
Mar 17 18:32:41.398989 env[1208]: time="2025-03-17T18:32:41.398952563Z" level=info msg="StartContainer for \"69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983\" returns successfully"
Mar 17 18:32:41.419903 env[1208]: time="2025-03-17T18:32:41.419848097Z" level=info msg="shim disconnected" id=69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983
Mar 17 18:32:41.420054 env[1208]: time="2025-03-17T18:32:41.419905986Z" level=warning msg="cleaning up after shim disconnected" id=69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983 namespace=k8s.io
Mar 17 18:32:41.420054 env[1208]: time="2025-03-17T18:32:41.419916347Z" level=info msg="cleaning up dead shim"
Mar 17 18:32:41.426712 env[1208]: time="2025-03-17T18:32:41.426657770Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:32:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3460 runtime=io.containerd.runc.v2\n"
Mar 17 18:32:42.063494 kubelet[1415]: E0317 18:32:42.063429 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:42.081435 env[1208]: time="2025-03-17T18:32:42.081398768Z" level=info msg="StopPodSandbox for \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\""
Mar 17 18:32:42.081543 env[1208]: time="2025-03-17T18:32:42.081486982Z" level=info msg="TearDown network for sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" successfully"
Mar 17 18:32:42.081543 env[1208]: time="2025-03-17T18:32:42.081521547Z" level=info msg="StopPodSandbox for \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" returns successfully"
Mar 17 18:32:42.081893 env[1208]: time="2025-03-17T18:32:42.081866160Z" level=info msg="RemovePodSandbox for \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\""
Mar 17 18:32:42.082013 env[1208]: time="2025-03-17T18:32:42.081979377Z" level=info msg="Forcibly stopping sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\""
Mar 17 18:32:42.082155 env[1208]: time="2025-03-17T18:32:42.082135881Z" level=info msg="TearDown network for sandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" successfully"
Mar 17 18:32:42.085948 env[1208]: time="2025-03-17T18:32:42.085922140Z" level=info msg="RemovePodSandbox \"05045e05966646e738fe139b4f485eed8e42333c4db2c5168e17554d6ea45592\" returns successfully"
Mar 17 18:32:42.086383 env[1208]: time="2025-03-17T18:32:42.086356607Z" level=info msg="StopPodSandbox for \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\""
Mar 17 18:32:42.086465 env[1208]: time="2025-03-17T18:32:42.086429938Z" level=info msg="TearDown network for sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" successfully"
Mar 17 18:32:42.086465 env[1208]: time="2025-03-17T18:32:42.086460383Z" level=info msg="StopPodSandbox for \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" returns successfully"
Mar 17 18:32:42.086715 env[1208]: time="2025-03-17T18:32:42.086687657Z" level=info msg="RemovePodSandbox for \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\""
Mar 17 18:32:42.086836 env[1208]: time="2025-03-17T18:32:42.086805315Z" level=info msg="Forcibly stopping sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\""
Mar 17 18:32:42.086956 env[1208]: time="2025-03-17T18:32:42.086938016Z" level=info msg="TearDown network for sandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" successfully"
Mar 17 18:32:42.089282 env[1208]: time="2025-03-17T18:32:42.089257250Z" level=info msg="RemovePodSandbox \"d5019be9545791a1c5e7d17c8cd7039440d468f64238b6f79ae893d400c7d67b\" returns successfully"
Mar 17 18:32:42.103197 kubelet[1415]: E0317 18:32:42.103162 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:42.193844 kubelet[1415]: E0317 18:32:42.193809 1415 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:32:42.203095 systemd[1]: run-containerd-runc-k8s.io-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983-runc.BUctnE.mount: Deactivated successfully.
Mar 17 18:32:42.203196 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983-rootfs.mount: Deactivated successfully.
Mar 17 18:32:42.335489 kubelet[1415]: E0317 18:32:42.334777 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:42.338300 env[1208]: time="2025-03-17T18:32:42.338246899Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:32:42.349009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713914674.mount: Deactivated successfully.
Mar 17 18:32:42.354272 env[1208]: time="2025-03-17T18:32:42.354191338Z" level=info msg="CreateContainer within sandbox \"e5ece483c8eb85153f567db268b66950d17bb9cafe1f20cfb5d5f71084749e33\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99a0abb9490f0148db574f8b1707114fb508ab1228d16e2a3f6f1bfd5e88b6d9\""
Mar 17 18:32:42.354983 env[1208]: time="2025-03-17T18:32:42.354956815Z" level=info msg="StartContainer for \"99a0abb9490f0148db574f8b1707114fb508ab1228d16e2a3f6f1bfd5e88b6d9\""
Mar 17 18:32:42.371080 systemd[1]: Started cri-containerd-99a0abb9490f0148db574f8b1707114fb508ab1228d16e2a3f6f1bfd5e88b6d9.scope.
Mar 17 18:32:42.399076 env[1208]: time="2025-03-17T18:32:42.399020956Z" level=info msg="StartContainer for \"99a0abb9490f0148db574f8b1707114fb508ab1228d16e2a3f6f1bfd5e88b6d9\" returns successfully"
Mar 17 18:32:42.575643 kubelet[1415]: W0317 18:32:42.575457 1415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice/cri-containerd-f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8.scope WatchSource:0}: task f08c2e1766504fea2383b721ebfc7d647851946644c113210ea2e5d21cb840d8 not found: not found
Mar 17 18:32:42.657308 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Mar 17 18:32:43.103849 kubelet[1415]: E0317 18:32:43.103801 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:43.338497 kubelet[1415]: E0317 18:32:43.338449 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:43.358002 kubelet[1415]: I0317 18:32:43.357503 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lc9f8" podStartSLOduration=5.35748715 podStartE2EDuration="5.35748715s" podCreationTimestamp="2025-03-17 18:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:32:43.357279 +0000 UTC m=+62.180785375" watchObservedRunningTime="2025-03-17 18:32:43.35748715 +0000 UTC m=+62.180993526"
Mar 17 18:32:43.873168 kubelet[1415]: I0317 18:32:43.872423 1415 setters.go:602] "Node became not ready" node="10.0.0.128" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:32:43Z","lastTransitionTime":"2025-03-17T18:32:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:32:44.104511 kubelet[1415]: E0317 18:32:44.104475 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:44.504356 systemd[1]: run-containerd-runc-k8s.io-99a0abb9490f0148db574f8b1707114fb508ab1228d16e2a3f6f1bfd5e88b6d9-runc.RUYoxZ.mount: Deactivated successfully.
Mar 17 18:32:44.668699 kubelet[1415]: E0317 18:32:44.668664 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:45.105533 kubelet[1415]: E0317 18:32:45.105491 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:45.479999 systemd-networkd[1045]: lxc_health: Link UP
Mar 17 18:32:45.488196 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:32:45.488033 systemd-networkd[1045]: lxc_health: Gained carrier
Mar 17 18:32:45.688139 kubelet[1415]: W0317 18:32:45.688094 1415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice/cri-containerd-06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14.scope WatchSource:0}: task 06de182ee49eb1404d79741d836cea52294431883370dab0fecc217d3df95d14 not found: not found
Mar 17 18:32:46.106648 kubelet[1415]: E0317 18:32:46.106605 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:46.669891 kubelet[1415]: E0317 18:32:46.669346 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:47.107539 kubelet[1415]: E0317 18:32:47.107504 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:47.345756 kubelet[1415]: E0317 18:32:47.345721 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:47.460279 systemd-networkd[1045]: lxc_health: Gained IPv6LL
Mar 17 18:32:48.108772 kubelet[1415]: E0317 18:32:48.108724 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:48.347792 kubelet[1415]: E0317 18:32:48.347751 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:32:48.794028 kubelet[1415]: W0317 18:32:48.793979 1415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice/cri-containerd-3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8.scope WatchSource:0}: task 3029cac086013902d90460ab4817b0105a31b850047097a9b87c7274e1b918f8 not found: not found
Mar 17 18:32:49.109825 kubelet[1415]: E0317 18:32:49.109722 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:50.110741 kubelet[1415]: E0317 18:32:50.110688 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:51.111586 kubelet[1415]: E0317 18:32:51.111538 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 18:32:51.901545 kubelet[1415]: W0317 18:32:51.901504 1415 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd2793c9_69b7_45c4_8fb9_508198a53cdd.slice/cri-containerd-69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983.scope WatchSource:0}: task 69c2cec3d9364d887bf3debb8dd12556a6e4cac324ce3f5eb2542d532d70e983 not found: not found
Mar 17 18:32:52.112147 kubelet[1415]: E0317 18:32:52.112100 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"