Mar 17 18:16:12.748558 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:16:12.748578 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:16:12.748585 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:16:12.748591 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Mar 17 18:16:12.748596 kernel: random: crng init done
Mar 17 18:16:12.748602 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:16:12.748608 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Mar 17 18:16:12.748614 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:16:12.748620 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748625 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748630 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748636 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748641 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748646 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748654 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748660 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748666 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:16:12.748672 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 18:16:12.748677 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:16:12.748683 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:16:12.748689 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Mar 17 18:16:12.748694 kernel: Zone ranges:
Mar 17 18:16:12.748700 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:16:12.748707 kernel: DMA32 empty
Mar 17 18:16:12.748712 kernel: Normal empty
Mar 17 18:16:12.748718 kernel: Movable zone start for each node
Mar 17 18:16:12.748724 kernel: Early memory node ranges
Mar 17 18:16:12.748729 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Mar 17 18:16:12.748735 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Mar 17 18:16:12.748741 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Mar 17 18:16:12.748746 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Mar 17 18:16:12.748752 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Mar 17 18:16:12.748757 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Mar 17 18:16:12.748763 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Mar 17 18:16:12.748769 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:16:12.748776 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 18:16:12.748781 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:16:12.748787 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:16:12.748792 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:16:12.748798 kernel: psci: Trusted OS migration not required
Mar 17 18:16:12.748807 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:16:12.748813 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 18:16:12.748820 kernel: ACPI: SRAT not present
Mar 17 18:16:12.748827 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:16:12.748833 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:16:12.748839 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 18:16:12.748845 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:16:12.748851 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:16:12.748857 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:16:12.748863 kernel: CPU features: detected: Spectre-v4
Mar 17 18:16:12.748869 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:16:12.748876 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:16:12.748882 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:16:12.748888 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:16:12.748894 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:16:12.748900 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 18:16:12.748906 kernel: Policy zone: DMA
Mar 17 18:16:12.748913 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:16:12.748920 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:16:12.748926 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:16:12.748932 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:16:12.748938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:16:12.748945 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Mar 17 18:16:12.748952 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:16:12.748958 kernel: trace event string verifier disabled
Mar 17 18:16:12.748964 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:16:12.748970 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:16:12.748976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:16:12.748983 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:16:12.748989 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:16:12.748995 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:16:12.749001 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:16:12.749007 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:16:12.749014 kernel: GICv3: 256 SPIs implemented
Mar 17 18:16:12.749020 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:16:12.749026 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:16:12.749037 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:16:12.749044 kernel: GICv3: 16 PPIs implemented
Mar 17 18:16:12.749050 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 18:16:12.749063 kernel: ACPI: SRAT not present
Mar 17 18:16:12.749079 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 18:16:12.749099 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:16:12.749106 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:16:12.749112 kernel: GICv3: using LPI property table @0x00000000400d0000
Mar 17 18:16:12.749118 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Mar 17 18:16:12.749127 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:16:12.749133 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:16:12.749140 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:16:12.749146 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:16:12.749152 kernel: arm-pv: using stolen time PV
Mar 17 18:16:12.749159 kernel: Console: colour dummy device 80x25
Mar 17 18:16:12.749165 kernel: ACPI: Core revision 20210730
Mar 17 18:16:12.749171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:16:12.749177 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:16:12.749183 kernel: LSM: Security Framework initializing
Mar 17 18:16:12.749191 kernel: SELinux: Initializing.
Mar 17 18:16:12.749197 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:16:12.749203 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:16:12.749209 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:16:12.749215 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 18:16:12.749221 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 18:16:12.749227 kernel: Remapping and enabling EFI services.
Mar 17 18:16:12.749233 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:16:12.749240 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:16:12.749247 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 18:16:12.749253 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Mar 17 18:16:12.749260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:16:12.749267 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:16:12.749273 kernel: Detected PIPT I-cache on CPU2
Mar 17 18:16:12.749280 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 18:16:12.749286 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Mar 17 18:16:12.749292 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:16:12.749298 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 18:16:12.749305 kernel: Detected PIPT I-cache on CPU3
Mar 17 18:16:12.749312 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 18:16:12.749318 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Mar 17 18:16:12.749324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:16:12.749330 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 18:16:12.749341 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:16:12.749349 kernel: SMP: Total of 4 processors activated.
Mar 17 18:16:12.749355 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:16:12.749362 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:16:12.749368 kernel: CPU features: detected: Common not Private translations
Mar 17 18:16:12.749375 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:16:12.749381 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:16:12.749388 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:16:12.749395 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:16:12.749402 kernel: CPU features: detected: RAS Extension Support
Mar 17 18:16:12.749409 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 18:16:12.749416 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:16:12.749422 kernel: alternatives: patching kernel code
Mar 17 18:16:12.749430 kernel: devtmpfs: initialized
Mar 17 18:16:12.749436 kernel: KASLR enabled
Mar 17 18:16:12.749444 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:16:12.749450 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:16:12.749457 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:16:12.749463 kernel: SMBIOS 3.0.0 present.
Mar 17 18:16:12.749470 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Mar 17 18:16:12.749476 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:16:12.749483 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:16:12.749491 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:16:12.749498 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:16:12.749504 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:16:12.749511 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Mar 17 18:16:12.749517 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:16:12.749523 kernel: cpuidle: using governor menu
Mar 17 18:16:12.749530 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:16:12.749537 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:16:12.749543 kernel: ACPI: bus type PCI registered
Mar 17 18:16:12.749551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:16:12.749558 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:16:12.749564 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:16:12.749571 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:16:12.749577 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:16:12.749584 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:16:12.749590 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:16:12.749597 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:16:12.749604 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:16:12.749611 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:16:12.749618 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:16:12.749624 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:16:12.749631 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:16:12.749637 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:16:12.749644 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:16:12.749650 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:16:12.749657 kernel: ACPI: Interpreter enabled
Mar 17 18:16:12.749663 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:16:12.749673 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:16:12.749680 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:16:12.749687 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:16:12.749694 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:16:12.749817 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:16:12.749879 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:16:12.749935 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:16:12.749996 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 18:16:12.750096 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 18:16:12.750107 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 18:16:12.750114 kernel: PCI host bridge to bus 0000:00
Mar 17 18:16:12.750188 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 18:16:12.750241 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:16:12.750292 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 18:16:12.750341 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:16:12.750413 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 18:16:12.750481 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:16:12.750542 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 18:16:12.750600 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 18:16:12.750658 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:16:12.750715 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:16:12.750776 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 18:16:12.750833 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 18:16:12.750888 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 18:16:12.750940 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:16:12.750992 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 18:16:12.751000 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:16:12.751007 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:16:12.751014 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:16:12.751022 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:16:12.751029 kernel: iommu: Default domain type: Translated
Mar 17 18:16:12.751035 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:16:12.751042 kernel: vgaarb: loaded
Mar 17 18:16:12.751048 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:16:12.751063 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:16:12.751085 kernel: PTP clock support registered
Mar 17 18:16:12.751092 kernel: Registered efivars operations
Mar 17 18:16:12.751099 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:16:12.751107 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:16:12.751114 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:16:12.751121 kernel: pnp: PnP ACPI init
Mar 17 18:16:12.751189 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 18:16:12.751199 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:16:12.751205 kernel: NET: Registered PF_INET protocol family
Mar 17 18:16:12.751212 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:16:12.751219 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:16:12.751227 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:16:12.751234 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:16:12.751241 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:16:12.751248 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:16:12.751255 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:16:12.751261 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:16:12.751268 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:16:12.751274 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:16:12.751281 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 18:16:12.751289 kernel: kvm [1]: HYP mode not available
Mar 17 18:16:12.751311 kernel: Initialise system trusted keyrings
Mar 17 18:16:12.751319 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:16:12.751326 kernel: Key type asymmetric registered
Mar 17 18:16:12.751332 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:16:12.751339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:16:12.751345 kernel: io scheduler mq-deadline registered
Mar 17 18:16:12.751352 kernel: io scheduler kyber registered
Mar 17 18:16:12.751358 kernel: io scheduler bfq registered
Mar 17 18:16:12.751366 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:16:12.751372 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:16:12.751379 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:16:12.751447 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 18:16:12.751456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:16:12.751463 kernel: thunder_xcv, ver 1.0
Mar 17 18:16:12.751469 kernel: thunder_bgx, ver 1.0
Mar 17 18:16:12.751476 kernel: nicpf, ver 1.0
Mar 17 18:16:12.751482 kernel: nicvf, ver 1.0
Mar 17 18:16:12.751554 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:16:12.751609 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:16:12 UTC (1742235372)
Mar 17 18:16:12.751618 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:16:12.751624 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:16:12.751631 kernel: Segment Routing with IPv6
Mar 17 18:16:12.751637 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:16:12.751644 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:16:12.751650 kernel: Key type dns_resolver registered
Mar 17 18:16:12.751658 kernel: registered taskstats version 1
Mar 17 18:16:12.751664 kernel: Loading compiled-in X.509 certificates
Mar 17 18:16:12.751671 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:16:12.751678 kernel: Key type .fscrypt registered
Mar 17 18:16:12.751684 kernel: Key type fscrypt-provisioning registered
Mar 17 18:16:12.751690 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:16:12.751697 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:16:12.751703 kernel: ima: No architecture policies found
Mar 17 18:16:12.751710 kernel: clk: Disabling unused clocks
Mar 17 18:16:12.751717 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:16:12.751724 kernel: Run /init as init process
Mar 17 18:16:12.751730 kernel: with arguments:
Mar 17 18:16:12.751736 kernel: /init
Mar 17 18:16:12.751743 kernel: with environment:
Mar 17 18:16:12.751749 kernel: HOME=/
Mar 17 18:16:12.751755 kernel: TERM=linux
Mar 17 18:16:12.751761 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:16:12.751770 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:16:12.751780 systemd[1]: Detected virtualization kvm.
Mar 17 18:16:12.751787 systemd[1]: Detected architecture arm64.
Mar 17 18:16:12.751794 systemd[1]: Running in initrd.
Mar 17 18:16:12.751801 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:16:12.751808 systemd[1]: Hostname set to .
Mar 17 18:16:12.751815 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:16:12.751822 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:16:12.751831 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:16:12.751838 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:16:12.751844 systemd[1]: Reached target paths.target.
Mar 17 18:16:12.751851 systemd[1]: Reached target slices.target.
Mar 17 18:16:12.751858 systemd[1]: Reached target swap.target.
Mar 17 18:16:12.751865 systemd[1]: Reached target timers.target.
Mar 17 18:16:12.751872 systemd[1]: Listening on iscsid.socket.
Mar 17 18:16:12.751880 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:16:12.751887 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:16:12.751894 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:16:12.751901 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:16:12.751908 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:16:12.751915 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:16:12.751922 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:16:12.751929 systemd[1]: Reached target sockets.target.
Mar 17 18:16:12.751936 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:16:12.751944 systemd[1]: Finished network-cleanup.service.
Mar 17 18:16:12.751951 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:16:12.751958 systemd[1]: Starting systemd-journald.service...
Mar 17 18:16:12.751965 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:16:12.751973 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:16:12.751980 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:16:12.751987 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:16:12.751994 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:16:12.752001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:16:12.752009 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:16:12.752016 kernel: audit: type=1130 audit(1742235372.749:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.752023 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:16:12.752034 systemd-journald[289]: Journal started
Mar 17 18:16:12.752089 systemd-journald[289]: Runtime Journal (/run/log/journal/63e57aacc52d4b6181fcdcde0d2ce5b6) is 6.0M, max 48.7M, 42.6M free.
Mar 17 18:16:12.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.743282 systemd-modules-load[290]: Inserted module 'overlay'
Mar 17 18:16:12.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.758085 kernel: audit: type=1130 audit(1742235372.754:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.758106 systemd[1]: Started systemd-journald.service.
Mar 17 18:16:12.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.759658 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:16:12.768587 kernel: audit: type=1130 audit(1742235372.758:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.771105 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:16:12.771124 systemd-resolved[291]: Positive Trust Anchors:
Mar 17 18:16:12.771131 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:16:12.771159 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:16:12.775612 systemd-resolved[291]: Defaulting to hostname 'linux'.
Mar 17 18:16:12.779181 systemd[1]: Started systemd-resolved.service.
Mar 17 18:16:12.783537 kernel: Bridge firewalling registered
Mar 17 18:16:12.783557 kernel: audit: type=1130 audit(1742235372.780:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.779703 systemd-modules-load[290]: Inserted module 'br_netfilter'
Mar 17 18:16:12.780535 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:16:12.785556 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:16:12.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.790375 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:16:12.793142 kernel: audit: type=1130 audit(1742235372.787:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.795106 kernel: SCSI subsystem initialized
Mar 17 18:16:12.799537 dracut-cmdline[308]: dracut-dracut-053
Mar 17 18:16:12.802360 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:16:12.809228 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:16:12.809246 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:16:12.809256 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:16:12.809009 systemd-modules-load[290]: Inserted module 'dm_multipath'
Mar 17 18:16:12.809899 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:16:12.815143 kernel: audit: type=1130 audit(1742235372.811:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.813231 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:16:12.821582 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:16:12.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.826093 kernel: audit: type=1130 audit(1742235372.822:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.870095 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:16:12.882099 kernel: iscsi: registered transport (tcp)
Mar 17 18:16:12.897089 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:16:12.897110 kernel: QLogic iSCSI HBA Driver
Mar 17 18:16:12.930014 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:16:12.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.931737 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:16:12.935269 kernel: audit: type=1130 audit(1742235372.930:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:12.979098 kernel: raid6: neonx8 gen() 10154 MB/s
Mar 17 18:16:12.996096 kernel: raid6: neonx8 xor() 9645 MB/s
Mar 17 18:16:13.013095 kernel: raid6: neonx4 gen() 12887 MB/s
Mar 17 18:16:13.030098 kernel: raid6: neonx4 xor() 11010 MB/s
Mar 17 18:16:13.047095 kernel: raid6: neonx2 gen() 12757 MB/s
Mar 17 18:16:13.064099 kernel: raid6: neonx2 xor() 10297 MB/s
Mar 17 18:16:13.081097 kernel: raid6: neonx1 gen() 10185 MB/s
Mar 17 18:16:13.098094 kernel: raid6: neonx1 xor() 8690 MB/s
Mar 17 18:16:13.115092 kernel: raid6: int64x8 gen() 6178 MB/s
Mar 17 18:16:13.132090 kernel: raid6: int64x8 xor() 3469 MB/s
Mar 17 18:16:13.149091 kernel: raid6: int64x4 gen() 7060 MB/s
Mar 17 18:16:13.166096 kernel: raid6: int64x4 xor() 3807 MB/s
Mar 17 18:16:13.184082 kernel: raid6: int64x2 gen() 6446 MB/s
Mar 17 18:16:13.201091 kernel: raid6: int64x2 xor() 3255 MB/s
Mar 17 18:16:13.218105 kernel: raid6: int64x1 gen() 4965 MB/s
Mar 17 18:16:13.235213 kernel: raid6: int64x1 xor() 2582 MB/s
Mar 17 18:16:13.235226 kernel: raid6: using algorithm neonx4 gen() 12887 MB/s
Mar 17 18:16:13.235234 kernel: raid6: .... xor() 11010 MB/s, rmw enabled
Mar 17 18:16:13.236323 kernel: raid6: using neon recovery algorithm
Mar 17 18:16:13.249100 kernel: xor: measuring software checksum speed
Mar 17 18:16:13.250356 kernel: 8regs : 14688 MB/sec
Mar 17 18:16:13.250369 kernel: 32regs : 20227 MB/sec
Mar 17 18:16:13.251762 kernel: arm64_neon : 27570 MB/sec
Mar 17 18:16:13.251774 kernel: xor: using function: arm64_neon (27570 MB/sec)
Mar 17 18:16:13.307121 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:16:13.317060 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:16:13.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:13.320000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:16:13.320000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:16:13.321095 kernel: audit: type=1130 audit(1742235373.317:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:13.321933 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:16:13.337051 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Mar 17 18:16:13.340411 systemd[1]: Started systemd-udevd.service.
Mar 17 18:16:13.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:13.342056 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:16:13.357239 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Mar 17 18:16:13.385929 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:16:13.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:13.387630 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:16:13.433514 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:16:13.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:13.472108 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:16:13.477280 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:16:13.477295 kernel: GPT:9289727 != 19775487
Mar 17 18:16:13.477304 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:16:13.477313 kernel: GPT:9289727 != 19775487
Mar 17 18:16:13.477321 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 18:16:13.477329 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:16:13.496088 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (548)
Mar 17 18:16:13.500060 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Mar 17 18:16:13.503153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Mar 17 18:16:13.504290 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Mar 17 18:16:13.509566 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Mar 17 18:16:13.513250 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:16:13.517146 systemd[1]: Starting disk-uuid.service...
Mar 17 18:16:13.522896 disk-uuid[562]: Primary Header is updated.
Mar 17 18:16:13.522896 disk-uuid[562]: Secondary Entries is updated.
Mar 17 18:16:13.522896 disk-uuid[562]: Secondary Header is updated.
Mar 17 18:16:13.527094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:16:14.536089 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 18:16:14.536140 disk-uuid[563]: The operation has completed successfully.
Mar 17 18:16:14.559885 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 18:16:14.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.559976 systemd[1]: Finished disk-uuid.service.
Mar 17 18:16:14.561728 systemd[1]: Starting verity-setup.service...
Mar 17 18:16:14.590171 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Mar 17 18:16:14.629200 systemd[1]: Found device dev-mapper-usr.device.
Mar 17 18:16:14.631008 systemd[1]: Mounting sysusr-usr.mount...
Mar 17 18:16:14.631962 systemd[1]: Finished verity-setup.service.
Mar 17 18:16:14.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.700092 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Mar 17 18:16:14.700204 systemd[1]: Mounted sysusr-usr.mount.
Mar 17 18:16:14.701127 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Mar 17 18:16:14.701933 systemd[1]: Starting ignition-setup.service...
Mar 17 18:16:14.704534 systemd[1]: Starting parse-ip-for-networkd.service...
Mar 17 18:16:14.711640 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:16:14.711680 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:16:14.711691 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:16:14.721374 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 18:16:14.750368 systemd[1]: Finished ignition-setup.service.
Mar 17 18:16:14.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.752145 systemd[1]: Starting ignition-fetch-offline.service...
Mar 17 18:16:14.803048 systemd[1]: Finished parse-ip-for-networkd.service.
Mar 17 18:16:14.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.804000 audit: BPF prog-id=9 op=LOAD
Mar 17 18:16:14.805455 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:16:14.835944 systemd-networkd[737]: lo: Link UP
Mar 17 18:16:14.835957 systemd-networkd[737]: lo: Gained carrier
Mar 17 18:16:14.836491 systemd-networkd[737]: Enumeration completed
Mar 17 18:16:14.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.836674 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:16:14.836769 systemd[1]: Started systemd-networkd.service.
Mar 17 18:16:14.838091 systemd-networkd[737]: eth0: Link UP
Mar 17 18:16:14.838095 systemd-networkd[737]: eth0: Gained carrier
Mar 17 18:16:14.838371 systemd[1]: Reached target network.target.
Mar 17 18:16:14.844965 ignition[683]: Ignition 2.14.0
Mar 17 18:16:14.840806 systemd[1]: Starting iscsiuio.service...
Mar 17 18:16:14.844972 ignition[683]: Stage: fetch-offline
Mar 17 18:16:14.845027 ignition[683]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:16:14.845036 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:16:14.845271 ignition[683]: parsed url from cmdline: ""
Mar 17 18:16:14.845275 ignition[683]: no config URL provided
Mar 17 18:16:14.845280 ignition[683]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 18:16:14.845288 ignition[683]: no config at "/usr/lib/ignition/user.ign"
Mar 17 18:16:14.845309 ignition[683]: op(1): [started] loading QEMU firmware config module
Mar 17 18:16:14.845313 ignition[683]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 18:16:14.856660 systemd[1]: Started iscsiuio.service.
Mar 17 18:16:14.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.858648 systemd[1]: Starting iscsid.service...
Mar 17 18:16:14.864551 ignition[683]: op(1): [finished] loading QEMU firmware config module
Mar 17 18:16:14.864181 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:16:14.867639 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:16:14.867639 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Mar 17 18:16:14.867639 iscsid[745]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Mar 17 18:16:14.867639 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Mar 17 18:16:14.867639 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Mar 17 18:16:14.867639 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Mar 17 18:16:14.867639 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Mar 17 18:16:14.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.867583 systemd[1]: Started iscsid.service.
Mar 17 18:16:14.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.870213 systemd[1]: Starting dracut-initqueue.service...
Mar 17 18:16:14.882722 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:16:14.885083 systemd[1]: Reached target remote-fs-pre.target.
Mar 17 18:16:14.886548 systemd[1]: Reached target remote-cryptsetup.target.
Mar 17 18:16:14.888355 systemd[1]: Reached target remote-fs.target.
Mar 17 18:16:14.890906 systemd[1]: Starting dracut-pre-mount.service...
Mar 17 18:16:14.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.899718 systemd[1]: Finished dracut-pre-mount.service.
Mar 17 18:16:14.925279 ignition[683]: parsing config with SHA512: 27eedbf198a922055e99d131c2134bfd425981cf9fe08c39ab889af7fe1d5b6e0552191e273f97075d88bfcb409babc28f1405185e03a370a296f2b39775eb54
Mar 17 18:16:14.935743 unknown[683]: fetched base config from "system"
Mar 17 18:16:14.935754 unknown[683]: fetched user config from "qemu"
Mar 17 18:16:14.936255 ignition[683]: fetch-offline: fetch-offline passed
Mar 17 18:16:14.937767 systemd[1]: Finished ignition-fetch-offline.service.
Mar 17 18:16:14.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.936314 ignition[683]: Ignition finished successfully
Mar 17 18:16:14.938759 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 18:16:14.939626 systemd[1]: Starting ignition-kargs.service...
Mar 17 18:16:14.949227 ignition[759]: Ignition 2.14.0
Mar 17 18:16:14.949245 ignition[759]: Stage: kargs
Mar 17 18:16:14.949342 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:16:14.949351 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:16:14.950258 ignition[759]: kargs: kargs passed
Mar 17 18:16:14.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.953330 systemd[1]: Finished ignition-kargs.service.
Mar 17 18:16:14.950302 ignition[759]: Ignition finished successfully
Mar 17 18:16:14.955046 systemd[1]: Starting ignition-disks.service...
Mar 17 18:16:14.961782 ignition[765]: Ignition 2.14.0
Mar 17 18:16:14.961792 ignition[765]: Stage: disks
Mar 17 18:16:14.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.963916 systemd[1]: Finished ignition-disks.service.
Mar 17 18:16:14.961892 ignition[765]: no configs at "/usr/lib/ignition/base.d"
Mar 17 18:16:14.964905 systemd[1]: Reached target initrd-root-device.target.
Mar 17 18:16:14.961902 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:16:14.965799 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:16:14.962893 ignition[765]: disks: disks passed
Mar 17 18:16:14.967168 systemd[1]: Reached target local-fs.target.
Mar 17 18:16:14.962937 ignition[765]: Ignition finished successfully
Mar 17 18:16:14.968713 systemd[1]: Reached target sysinit.target.
Mar 17 18:16:14.970135 systemd[1]: Reached target basic.target.
Mar 17 18:16:14.972770 systemd[1]: Starting systemd-fsck-root.service...
Mar 17 18:16:14.983525 systemd-fsck[773]: ROOT: clean, 623/553520 files, 56021/553472 blocks
Mar 17 18:16:14.987573 systemd[1]: Finished systemd-fsck-root.service.
Mar 17 18:16:14.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:14.990113 systemd[1]: Mounting sysroot.mount...
Mar 17 18:16:14.998842 systemd[1]: Mounted sysroot.mount.
Mar 17 18:16:15.000121 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Mar 17 18:16:14.999647 systemd[1]: Reached target initrd-root-fs.target.
Mar 17 18:16:15.002528 systemd[1]: Mounting sysroot-usr.mount...
Mar 17 18:16:15.003417 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Mar 17 18:16:15.003463 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 18:16:15.003487 systemd[1]: Reached target ignition-diskful.target.
Mar 17 18:16:15.005416 systemd[1]: Mounted sysroot-usr.mount.
Mar 17 18:16:15.007229 systemd[1]: Starting initrd-setup-root.service...
Mar 17 18:16:15.011462 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 18:16:15.015226 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory
Mar 17 18:16:15.019416 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 18:16:15.023354 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 18:16:15.055379 systemd[1]: Finished initrd-setup-root.service.
Mar 17 18:16:15.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:15.057053 systemd[1]: Starting ignition-mount.service...
Mar 17 18:16:15.058435 systemd[1]: Starting sysroot-boot.service...
Mar 17 18:16:15.063561 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
Mar 17 18:16:15.072472 ignition[827]: INFO : Ignition 2.14.0
Mar 17 18:16:15.072472 ignition[827]: INFO : Stage: mount
Mar 17 18:16:15.074681 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 18:16:15.074681 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 18:16:15.074681 ignition[827]: INFO : mount: mount passed
Mar 17 18:16:15.074681 ignition[827]: INFO : Ignition finished successfully
Mar 17 18:16:15.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:15.075325 systemd[1]: Finished ignition-mount.service.
Mar 17 18:16:15.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:16:15.078879 systemd[1]: Finished sysroot-boot.service.
Mar 17 18:16:15.641095 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Mar 17 18:16:15.648547 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
Mar 17 18:16:15.648586 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Mar 17 18:16:15.648596 kernel: BTRFS info (device vda6): using free space tree
Mar 17 18:16:15.650078 kernel: BTRFS info (device vda6): has skinny extents
Mar 17 18:16:15.652654 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Mar 17 18:16:15.654374 systemd[1]: Starting ignition-files.service...
Mar 17 18:16:15.674398 ignition[855]: INFO : Ignition 2.14.0 Mar 17 18:16:15.674398 ignition[855]: INFO : Stage: files Mar 17 18:16:15.676194 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:16:15.676194 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:16:15.676194 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:16:15.680296 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:16:15.680296 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:16:15.685957 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:16:15.687343 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:16:15.688872 unknown[855]: wrote ssh authorized keys file for user: core Mar 17 18:16:15.690129 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:16:15.690129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:16:15.690129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:16:15.690129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:16:15.690129 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:16:15.757269 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:16:16.061112 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:16:16.063093 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:16:16.063093 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 18:16:16.222212 systemd-networkd[737]: eth0: Gained IPv6LL Mar 17 18:16:16.300425 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Mar 17 18:16:16.364439 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:16:16.364439 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pod.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:16:16.368295 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 18:16:16.616003 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Mar 17 18:16:16.847006 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:16:16.847006 ignition[855]: INFO : files: op(d): [started] processing unit "containerd.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(d): [finished] processing unit "containerd.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Mar 17 18:16:16.850712 
ignition[855]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:16:16.850712 ignition[855]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:16:16.893012 ignition[855]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:16:16.894822 ignition[855]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:16:16.894822 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:16:16.894822 ignition[855]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:16:16.894822 ignition[855]: INFO : files: files passed Mar 17 18:16:16.894822 ignition[855]: INFO : Ignition finished successfully Mar 17 18:16:16.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.894486 systemd[1]: Finished ignition-files.service. Mar 17 18:16:16.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.904000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.896542 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:16:16.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.898263 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:16:16.912286 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:16:16.898959 systemd[1]: Starting ignition-quench.service... Mar 17 18:16:16.915121 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:16:16.903668 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:16:16.903757 systemd[1]: Finished ignition-quench.service. Mar 17 18:16:16.905332 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:16:16.909052 systemd[1]: Reached target ignition-complete.target. Mar 17 18:16:16.911302 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:16:16.923634 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:16:16.923732 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:16:16.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:16:16.925000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.925609 systemd[1]: Reached target initrd-fs.target. Mar 17 18:16:16.926899 systemd[1]: Reached target initrd.target. Mar 17 18:16:16.928498 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:16:16.929271 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:16:16.939173 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:16:16.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.940737 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:16:16.948612 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:16:16.949519 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:16:16.950948 systemd[1]: Stopped target timers.target. Mar 17 18:16:16.952328 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:16:16.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.952442 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:16:16.953756 systemd[1]: Stopped target initrd.target. Mar 17 18:16:16.955136 systemd[1]: Stopped target basic.target. Mar 17 18:16:16.956486 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:16:16.957913 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:16:16.959278 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:16:16.960754 systemd[1]: Stopped target remote-fs.target. Mar 17 18:16:16.962157 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:16:16.963601 systemd[1]: Stopped target sysinit.target. Mar 17 18:16:16.964887 systemd[1]: Stopped target local-fs.target. Mar 17 18:16:16.966294 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:16:16.967663 systemd[1]: Stopped target swap.target. Mar 17 18:16:16.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.968887 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:16:16.969013 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:16:16.973000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.970399 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:16:16.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.971637 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:16:16.971740 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:16:16.973357 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:16:16.973452 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:16:16.974848 systemd[1]: Stopped target paths.target. 
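Note: the "setting preset to enabled"/"setting preset to disabled" operations in the files stage above are Ignition writing a systemd preset file. The enable/disable syntax below is standard systemd.preset(5); the exact path Ignition writes it to is an assumption here:

    # /etc/systemd/system-preset/20-ignition.preset (path assumed)
    enable prepare-helm.service
    disable coreos-metadata.service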
Mar 17 18:16:16.976136 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:16:16.978816 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:16:16.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.979811 systemd[1]: Stopped target slices.target. Mar 17 18:16:16.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.981238 systemd[1]: Stopped target sockets.target. Mar 17 18:16:16.982577 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:16:16.989996 iscsid[745]: iscsid shutting down. Mar 17 18:16:16.982685 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:16:16.984540 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:16:16.984634 systemd[1]: Stopped ignition-files.service. Mar 17 18:16:16.986941 systemd[1]: Stopping ignition-mount.service... Mar 17 18:16:16.994356 ignition[896]: INFO : Ignition 2.14.0 Mar 17 18:16:16.994356 ignition[896]: INFO : Stage: umount Mar 17 18:16:16.994356 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:16:16.994356 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:16:16.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.988264 systemd[1]: Stopping iscsid.service... Mar 17 18:16:17.000578 ignition[896]: INFO : umount: umount passed Mar 17 18:16:17.000578 ignition[896]: INFO : Ignition finished successfully Mar 17 18:16:17.009659 kernel: kauditd_printk_skb: 37 callbacks suppressed Mar 17 18:16:17.009683 kernel: audit: type=1131 audit(1742235377.001:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.009695 kernel: audit: type=1131 audit(1742235377.006:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.995788 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:16:17.013703 kernel: audit: type=1131 audit(1742235377.010:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.995926 systemd[1]: Stopped kmod-static-nodes.service. 
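Note: everything the files stage logged (the "core" user and its SSH keys, the fetched helm and sysext artifacts, the containerd drop-in, the prepare-helm unit, the preset changes) is driven by the Ignition config this VM booted with. Below is a minimal Butane sketch that would produce similar operations; the URLs and paths are copied from the log, while the variant/version pair, the SSH key, and the unit/drop-in bodies are placeholders:

    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...        # placeholder key
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-arm64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw
    systemd:
      units:
        - name: containerd.service
          dropins:
            - name: 10-use-cgroupfs.conf
              contents: |
                [Service]
                # real drop-in body not shown in the log
        - name: prepare-helm.service
          enabled: true
          # unit body omitted in this sketch
        - name: coreos-metadata.service
          enabled: false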
Mar 17 18:16:16.998136 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:16:17.019156 kernel: audit: type=1131 audit(1742235377.015:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:16.999914 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:16:17.000065 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:16:17.025745 kernel: audit: type=1131 audit(1742235377.022:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.001508 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:16:17.030740 kernel: audit: type=1131 audit(1742235377.026:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.001600 systemd[1]: Stopped dracut-pre-trigger.service. Mar 17 18:16:17.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.007706 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:16:17.037572 kernel: audit: type=1131 audit(1742235377.031:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.007808 systemd[1]: Stopped iscsid.service. Mar 17 18:16:17.044645 kernel: audit: type=1130 audit(1742235377.038:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.044665 kernel: audit: type=1131 audit(1742235377.038:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.010799 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:16:17.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:16:17.010889 systemd[1]: Stopped ignition-mount.service. Mar 17 18:16:17.050480 kernel: audit: type=1131 audit(1742235377.045:57): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.016569 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:16:17.017166 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:16:17.017238 systemd[1]: Closed iscsid.socket. Mar 17 18:16:17.019826 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:16:17.019871 systemd[1]: Stopped ignition-disks.service. Mar 17 18:16:17.022226 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:16:17.022268 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:16:17.026568 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:16:17.026609 systemd[1]: Stopped ignition-setup.service. Mar 17 18:16:17.031744 systemd[1]: Stopping iscsiuio.service... Mar 17 18:16:17.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.036232 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:16:17.036320 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:16:17.038725 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:16:17.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.038806 systemd[1]: Stopped iscsiuio.service. Mar 17 18:16:17.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.046293 systemd[1]: Stopped target network.target. Mar 17 18:16:17.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.049831 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:16:17.049871 systemd[1]: Closed iscsiuio.socket. Mar 17 18:16:17.051260 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:16:17.052881 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:16:17.061124 systemd-networkd[737]: eth0: DHCPv6 lease lost Mar 17 18:16:17.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.080000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:16:17.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.062871 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:16:17.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.062985 systemd[1]: Stopped systemd-networkd.service. 
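Note: the kernel-format audit records interleaved above, e.g. "audit(1742235377.001:48)", carry a Unix epoch (seconds.milliseconds) plus a per-boot serial number, and the epoch agrees with the journal's own timestamps:

    $ date -u -d @1742235377
    Mon Mar 17 18:16:17 UTC 2025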
Mar 17 18:16:17.064258 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:16:17.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.064289 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:16:17.066752 systemd[1]: Stopping network-cleanup.service... Mar 17 18:16:17.087000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:16:17.067726 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:16:17.067781 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:16:17.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.069405 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:16:17.069446 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:16:17.071582 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:16:17.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.071621 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:16:17.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.072633 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:16:17.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.077584 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:16:17.078055 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:16:17.078156 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:16:17.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.079685 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:16:17.079756 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:16:17.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:17.081668 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:16:17.081718 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:16:17.083856 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:16:17.083948 systemd[1]: Stopped network-cleanup.service. Mar 17 18:16:17.087855 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:16:17.087977 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:16:17.089763 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Mar 17 18:16:17.089801 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:16:17.091110 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:16:17.091146 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:16:17.092707 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:16:17.092756 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:16:17.094182 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:16:17.117000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:16:17.117000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:16:17.117000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:16:17.117000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:16:17.117000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:16:17.094224 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:16:17.095890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:16:17.095931 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:16:17.098103 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:16:17.099966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:16:17.100030 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:16:17.103444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:16:17.103532 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:16:17.104634 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:16:17.106685 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:16:17.112949 systemd[1]: Switching root. Mar 17 18:16:17.131443 systemd-journald[289]: Journal stopped Mar 17 18:16:19.219250 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Mar 17 18:16:19.219309 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:16:19.219321 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:16:19.219331 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:16:19.219340 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:16:19.219353 kernel: SELinux: policy capability open_perms=1 Mar 17 18:16:19.219364 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:16:19.219375 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:16:19.219384 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:16:19.219395 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:16:19.219404 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:16:19.219414 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:16:19.219424 systemd[1]: Successfully loaded SELinux policy in 33.989ms. Mar 17 18:16:19.219443 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.748ms. Mar 17 18:16:19.219457 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:16:19.219467 systemd[1]: Detected virtualization kvm. Mar 17 18:16:19.219479 systemd[1]: Detected architecture arm64. Mar 17 18:16:19.219490 systemd[1]: Detected first boot. Mar 17 18:16:19.219500 systemd[1]: Initializing machine ID from VM UUID. 
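Note: the "SELinux: policy capability ..." values reported a little further down can be read back at runtime through selinuxfs, one file per capability:

    $ cat /sys/fs/selinux/policy_capabilities/network_peer_controls
    1
    $ cat /sys/fs/selinux/policy_capabilities/always_check_network
    0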
Mar 17 18:16:19.219510 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:16:19.219522 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:16:19.219534 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:16:19.219545 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:16:19.219557 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:16:19.219569 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:16:19.219580 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:16:19.219590 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:16:19.219600 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:16:19.219611 systemd[1]: Created slice system-getty.slice. Mar 17 18:16:19.219621 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:16:19.219631 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:16:19.219643 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:16:19.219655 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:16:19.219665 systemd[1]: Created slice user.slice. Mar 17 18:16:19.219675 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:16:19.219686 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:16:19.219696 systemd[1]: Set up automount boot.automount. Mar 17 18:16:19.219706 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:16:19.219716 systemd[1]: Reached target integritysetup.target. Mar 17 18:16:19.219726 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:16:19.219737 systemd[1]: Reached target remote-fs.target. Mar 17 18:16:19.219750 systemd[1]: Reached target slices.target. Mar 17 18:16:19.219760 systemd[1]: Reached target swap.target. Mar 17 18:16:19.219771 systemd[1]: Reached target torcx.target. Mar 17 18:16:19.219781 systemd[1]: Reached target veritysetup.target. Mar 17 18:16:19.219791 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:16:19.219801 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:16:19.219811 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:16:19.219821 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:16:19.219832 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:16:19.219843 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:16:19.219853 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:16:19.219864 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:16:19.219875 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:16:19.219885 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:16:19.219896 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:16:19.219906 systemd[1]: Mounting media.mount... Mar 17 18:16:19.219924 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:16:19.219937 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:16:19.219949 systemd[1]: Mounting tmp.mount... Mar 17 18:16:19.219959 systemd[1]: Starting flatcar-tmpfiles.service... 
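Note: the locksmithd.service warnings above concern deprecated cgroup v1 directives; a drop-in can migrate them without touching the vendor unit. This is a sketch only, since the drop-in path and the replacement values are assumptions (an empty assignment resets the earlier setting):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
    [Service]
    CPUShares=
    CPUWeight=100
    MemoryLimit=
    MemoryMax=512M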
Mar 17 18:16:19.219970 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:16:19.219980 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:16:19.219990 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:16:19.220000 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:16:19.220010 systemd[1]: Starting modprobe@drm.service... Mar 17 18:16:19.220021 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:16:19.220031 systemd[1]: Starting modprobe@fuse.service... Mar 17 18:16:19.220046 systemd[1]: Starting modprobe@loop.service... Mar 17 18:16:19.220057 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:16:19.220084 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:16:19.220095 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:16:19.220107 systemd[1]: Starting systemd-journald.service... Mar 17 18:16:19.220117 kernel: fuse: init (API version 7.34) Mar 17 18:16:19.220130 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:16:19.220141 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:16:19.220151 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:16:19.220164 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:16:19.220174 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:16:19.220184 kernel: loop: module loaded Mar 17 18:16:19.220195 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:16:19.220204 systemd[1]: Mounted media.mount. Mar 17 18:16:19.220215 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:16:19.220226 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:16:19.220236 systemd[1]: Mounted tmp.mount. Mar 17 18:16:19.220246 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:16:19.220256 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:16:19.220268 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:16:19.220278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:16:19.220288 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:16:19.220298 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:16:19.220311 systemd-journald[1027]: Journal started Mar 17 18:16:19.220353 systemd-journald[1027]: Runtime Journal (/run/log/journal/63e57aacc52d4b6181fcdcde0d2ce5b6) is 6.0M, max 48.7M, 42.6M free. Mar 17 18:16:19.115000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:16:19.115000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:16:19.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:16:19.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.218000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:16:19.218000 audit[1027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc8163fc0 a2=4000 a3=1 items=0 ppid=1 pid=1027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:16:19.218000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:16:19.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.221526 systemd[1]: Finished modprobe@drm.service. Mar 17 18:16:19.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.226186 systemd[1]: Started systemd-journald.service. Mar 17 18:16:19.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.225066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:16:19.225576 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:16:19.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.226745 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:16:19.226958 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:16:19.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.228197 systemd[1]: Finished flatcar-tmpfiles.service. 
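Note: the modprobe@configfs / modprobe@dm_mod / modprobe@drm / modprobe@efi_pstore / modprobe@fuse / modprobe@loop units cycling above are instances of systemd's modprobe@.service template; each instance is a oneshot that loads the kernel module named by the instance string and exits, which is why every start is immediately followed by "Deactivated successfully". For example:

    $ systemctl start modprobe@fuse.service    # effectively: modprobe -abq fuse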
Mar 17 18:16:19.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.229352 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:16:19.229553 systemd[1]: Finished modprobe@loop.service. Mar 17 18:16:19.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.230881 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:16:19.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.232147 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:16:19.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.233431 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:16:19.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.234838 systemd[1]: Reached target network-pre.target. Mar 17 18:16:19.236991 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:16:19.238944 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:16:19.239759 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:16:19.241459 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:16:19.243665 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:16:19.244641 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:16:19.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.257100 systemd-journald[1027]: Time spent on flushing to /var/log/journal/63e57aacc52d4b6181fcdcde0d2ce5b6 is 17.116ms for 935 entries. Mar 17 18:16:19.257100 systemd-journald[1027]: System Journal (/var/log/journal/63e57aacc52d4b6181fcdcde0d2ce5b6) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:16:19.291837 systemd-journald[1027]: Received client request to flush runtime journal. Mar 17 18:16:19.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:16:19.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.245732 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:16:19.246702 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:16:19.247821 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:16:19.249841 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:16:19.293038 udevadm[1079]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 17 18:16:19.252730 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:16:19.253974 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:16:19.255050 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:16:19.257246 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:16:19.259544 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:16:19.261513 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:16:19.271771 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:16:19.288100 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:16:19.290376 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Mar 17 18:16:19.292781 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:16:19.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.309993 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Mar 17 18:16:19.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.645325 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:16:19.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.647691 systemd[1]: Starting systemd-udevd.service... Mar 17 18:16:19.682084 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Mar 17 18:16:19.693816 systemd[1]: Started systemd-udevd.service. Mar 17 18:16:19.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.697325 systemd[1]: Starting systemd-networkd.service... Mar 17 18:16:19.703128 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:16:19.724797 systemd[1]: Found device dev-ttyAMA0.device. Mar 17 18:16:19.756482 systemd[1]: Started systemd-userdbd.service. 
Mar 17 18:16:19.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.765322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:16:19.808613 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:16:19.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.811151 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:16:19.825179 systemd-networkd[1099]: lo: Link UP Mar 17 18:16:19.825190 systemd-networkd[1099]: lo: Gained carrier Mar 17 18:16:19.825561 systemd-networkd[1099]: Enumeration completed Mar 17 18:16:19.825679 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:16:19.825692 systemd[1]: Started systemd-networkd.service. Mar 17 18:16:19.825953 lvm[1123]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:16:19.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.830481 systemd-networkd[1099]: eth0: Link UP Mar 17 18:16:19.830494 systemd-networkd[1099]: eth0: Gained carrier Mar 17 18:16:19.860214 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.58/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:16:19.866020 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:16:19.867154 systemd[1]: Reached target cryptsetup.target. Mar 17 18:16:19.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.869329 systemd[1]: Starting lvm2-activation.service... Mar 17 18:16:19.877061 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:16:19.911295 systemd[1]: Finished lvm2-activation.service. Mar 17 18:16:19.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.912297 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:16:19.913157 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:16:19.913187 systemd[1]: Reached target local-fs.target. Mar 17 18:16:19.913993 systemd[1]: Reached target machines.target. Mar 17 18:16:19.916192 systemd[1]: Starting ldconfig.service... Mar 17 18:16:19.917348 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:16:19.917412 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:19.918556 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:16:19.920473 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
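Note: eth0 below is matched by /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy, which yields the 10.0.0.58/16 lease logged here. A rough sketch of such a catch-all follows (the shipped file's exact contents are an assumption); on the booted machine, networkctl status eth0 would show the resulting lease:

    [Match]
    Name=*

    [Network]
    DHCP=yes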
Mar 17 18:16:19.922659 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:16:19.924846 systemd[1]: Starting systemd-sysext.service... Mar 17 18:16:19.926094 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl) Mar 17 18:16:19.927168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:16:19.934771 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:16:19.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.937062 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:16:19.939406 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:16:19.939656 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:16:19.953092 kernel: loop0: detected capacity change from 0 to 194096 Mar 17 18:16:19.988090 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:16:19.988754 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:16:19.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:19.994088 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:16:20.007114 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31) Mar 17 18:16:20.007114 systemd-fsck[1140]: /dev/vda1: 236 files, 117179/258078 clusters Mar 17 18:16:20.009787 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Mar 17 18:16:20.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.012760 systemd[1]: Mounting boot.mount... Mar 17 18:16:20.015083 kernel: loop1: detected capacity change from 0 to 194096 Mar 17 18:16:20.018878 systemd[1]: Mounted boot.mount. Mar 17 18:16:20.024507 (sd-sysext)[1145]: Using extensions 'kubernetes'. Mar 17 18:16:20.024845 (sd-sysext)[1145]: Merged extensions into '/usr'. Mar 17 18:16:20.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.028094 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:16:20.050857 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.052463 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:16:20.054684 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:16:20.056850 systemd[1]: Starting modprobe@loop.service... Mar 17 18:16:20.057937 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.058156 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.059291 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:16:20.059475 systemd[1]: Finished modprobe@dm_mod.service. 
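Note: the (sd-sysext) messages below show the kubernetes system extension, linked earlier at /etc/extensions/kubernetes.raw, being merged over /usr. After boot the merge can be confirmed with systemd-sysext (output shape abbreviated):

    $ systemd-sysext status
    HIERARCHY EXTENSIONS SINCE
    /usr      kubernetes ...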
Mar 17 18:16:20.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.061002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:16:20.061171 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:16:20.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.062678 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:16:20.062837 systemd[1]: Finished modprobe@loop.service. Mar 17 18:16:20.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.064288 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:16:20.064461 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.099250 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:16:20.102889 systemd[1]: Finished ldconfig.service. Mar 17 18:16:20.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.200945 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:16:20.206371 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:16:20.208357 systemd[1]: Finished systemd-sysext.service. Mar 17 18:16:20.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.211332 systemd[1]: Starting ensure-sysext.service... Mar 17 18:16:20.213180 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:16:20.217733 systemd[1]: Reloading. Mar 17 18:16:20.222732 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:16:20.223465 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:16:20.224787 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
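Note: the "Duplicate line for path" messages above mean two tmpfiles.d(5) snippets claim the same path; systemd-tmpfiles keeps the first line it parses and skips the rest with exactly that warning. An illustrative pair (bodies assumed, not the actual Flatcar contents):

    # /usr/lib/tmpfiles.d/legacy.conf
    d /run/lock 0755 root root -
    # a later-parsed snippet repeating the path is ignored:
    d /run/lock 0775 root lock -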
Mar 17 18:16:20.255673 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-03-17T18:16:20Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:16:20.256192 /usr/lib/systemd/system-generators/torcx-generator[1183]: time="2025-03-17T18:16:20Z" level=info msg="torcx already run" Mar 17 18:16:20.322616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:16:20.322640 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:16:20.337624 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:16:20.381743 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:16:20.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.385700 systemd[1]: Starting audit-rules.service... Mar 17 18:16:20.387550 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:16:20.389573 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:16:20.392346 systemd[1]: Starting systemd-resolved.service... Mar 17 18:16:20.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.394710 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:16:20.396751 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:16:20.398401 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:16:20.401916 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:16:20.407308 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:16:20.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.409000 audit[1238]: SYSTEM_BOOT pid=1238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.410188 systemd[1]: Starting systemd-update-done.service... Mar 17 18:16:20.416795 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:16:20.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.420266 systemd[1]: Finished systemd-update-done.service. 
Mar 17 18:16:20.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.424640 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.426121 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:16:20.428294 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:16:20.430370 systemd[1]: Starting modprobe@loop.service... Mar 17 18:16:20.431283 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.431430 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.431530 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:16:20.432342 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:16:20.432501 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:16:20.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.433794 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:16:20.433953 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:16:20.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.435373 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:16:20.437669 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:16:20.437839 systemd[1]: Finished modprobe@loop.service. Mar 17 18:16:20.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.440322 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.441521 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:16:20.443557 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:16:20.445468 systemd[1]: Starting modprobe@loop.service... 
Mar 17 18:16:20.446252 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.446405 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.446508 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:16:20.447359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:16:20.447538 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:16:20.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.448799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:16:20.450915 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:16:20.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.452442 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:16:20.452614 systemd[1]: Finished modprobe@loop.service. Mar 17 18:16:20.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.453968 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:16:20.454079 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.456378 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.457852 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:16:20.460291 systemd[1]: Starting modprobe@drm.service... Mar 17 18:16:20.462325 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:16:20.464586 systemd[1]: Starting modprobe@loop.service... Mar 17 18:16:20.465494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.465654 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.467173 systemd[1]: Starting systemd-networkd-wait-online.service... 
Mar 17 18:16:20.468179 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:16:20.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:16:20.471704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:16:20.471893 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:16:20.473275 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:16:20.473422 systemd[1]: Finished modprobe@drm.service. Mar 17 18:16:20.474678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:16:20.474820 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:16:20.475000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:16:20.475000 audit[1272]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd85613b0 a2=420 a3=0 items=0 ppid=1231 pid=1272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:16:20.475000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:16:20.476225 augenrules[1272]: No rules Mar 17 18:16:20.477645 systemd[1]: Finished audit-rules.service. Mar 17 18:16:20.479733 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:16:20.479955 systemd[1]: Finished modprobe@loop.service. Mar 17 18:16:20.481663 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:16:20.481782 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.483598 systemd[1]: Finished ensure-sysext.service. Mar 17 18:16:20.501711 systemd-resolved[1236]: Positive Trust Anchors: Mar 17 18:16:20.501726 systemd-resolved[1236]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:16:20.501754 systemd-resolved[1236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:16:20.510911 systemd[1]: Started systemd-timesyncd.service. Mar 17 18:16:20.512110 systemd[1]: Reached target time-set.target. Mar 17 18:16:20.515675 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:16:20.515749 systemd-timesyncd[1237]: Initial clock synchronization to Mon 2025-03-17 18:16:20.652256 UTC. Mar 17 18:16:20.540800 systemd-resolved[1236]: Defaulting to hostname 'linux'. Mar 17 18:16:20.557898 systemd[1]: Started systemd-resolved.service. Mar 17 18:16:20.558870 systemd[1]: Reached target network.target. Mar 17 18:16:20.559739 systemd[1]: Reached target nss-lookup.target. Mar 17 18:16:20.560586 systemd[1]: Reached target sysinit.target. Mar 17 18:16:20.561506 systemd[1]: Started motdgen.path. Mar 17 18:16:20.562271 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:16:20.563539 systemd[1]: Started logrotate.timer. Mar 17 18:16:20.564425 systemd[1]: Started mdadm.timer. Mar 17 18:16:20.565145 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:16:20.566006 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:16:20.566048 systemd[1]: Reached target paths.target. Mar 17 18:16:20.566799 systemd[1]: Reached target timers.target. Mar 17 18:16:20.567935 systemd[1]: Listening on dbus.socket. Mar 17 18:16:20.570131 systemd[1]: Starting docker.socket... Mar 17 18:16:20.572999 systemd[1]: Listening on sshd.socket. Mar 17 18:16:20.574018 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.574455 systemd[1]: Listening on docker.socket. Mar 17 18:16:20.575343 systemd[1]: Reached target sockets.target. Mar 17 18:16:20.576182 systemd[1]: Reached target basic.target. Mar 17 18:16:20.577201 systemd[1]: System is tainted: cgroupsv1 Mar 17 18:16:20.577271 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.577309 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:16:20.578630 systemd[1]: Starting containerd.service... Mar 17 18:16:20.581276 systemd[1]: Starting dbus.service... Mar 17 18:16:20.583618 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:16:20.586324 systemd[1]: Starting extend-filesystems.service... Mar 17 18:16:20.587395 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:16:20.589394 systemd[1]: Starting motdgen.service... Mar 17 18:16:20.591910 systemd[1]: Starting prepare-helm.service... Mar 17 18:16:20.594793 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:16:20.597578 systemd[1]: Starting sshd-keygen.service... 
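
The SYSCALL/PROCTITLE audit records emitted while audit-rules ran auditctl (a few entries above) encode the process command line as hex, with NUL bytes separating the arguments. A small Go sketch that decodes the proctitle value from that record:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // PROCTITLE value copied from the audit record above.
        const proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Arguments are NUL-separated in the raw record.
        args := strings.Split(string(raw), "\x00")
        fmt.Println(strings.Join(args, " ")) // /sbin/auditctl -R /etc/audit/audit.rules
    }
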
Mar 17 18:16:20.601022 systemd[1]: Starting systemd-logind.service... Mar 17 18:16:20.603619 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:16:20.603833 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:16:20.605831 systemd[1]: Starting update-engine.service... Mar 17 18:16:20.609001 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:16:20.614350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:16:20.614677 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:16:20.622331 jq[1309]: true Mar 17 18:16:20.623197 jq[1294]: false Mar 17 18:16:20.623835 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:16:20.624200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:16:20.638277 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:16:20.638535 systemd[1]: Finished motdgen.service. Mar 17 18:16:20.645005 jq[1321]: true Mar 17 18:16:20.652173 extend-filesystems[1295]: Found loop1 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda1 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda2 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda3 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found usr Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda4 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda6 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda7 Mar 17 18:16:20.652173 extend-filesystems[1295]: Found vda9 Mar 17 18:16:20.652173 extend-filesystems[1295]: Checking size of /dev/vda9 Mar 17 18:16:20.680033 tar[1314]: linux-arm64/helm Mar 17 18:16:20.681150 dbus-daemon[1293]: [system] SELinux support is enabled Mar 17 18:16:20.681353 systemd[1]: Started dbus.service. Mar 17 18:16:20.686269 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:16:20.686306 systemd[1]: Reached target system-config.target. Mar 17 18:16:20.687275 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:16:20.687306 systemd[1]: Reached target user-config.target. Mar 17 18:16:20.689091 extend-filesystems[1295]: Resized partition /dev/vda9 Mar 17 18:16:20.709697 extend-filesystems[1350]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:16:20.739106 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:16:20.739870 bash[1351]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:16:20.741098 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:16:20.752513 systemd-logind[1302]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:16:20.753941 systemd-logind[1302]: New seat seat0. Mar 17 18:16:20.759137 systemd[1]: Started systemd-logind.service. Mar 17 18:16:20.792283 update_engine[1305]: I0317 18:16:20.791889 1305 main.cc:92] Flatcar Update Engine starting Mar 17 18:16:20.799542 systemd[1]: Started update-engine.service. 
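
The EXT4-fs resize kicked off above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks while mounted; at the 4k block size that is roughly 2.1 GiB to 7.1 GiB. The arithmetic, as a Go sketch:

    package main

    import "fmt"

    func main() {
        const blockSize = 4096 // 4k blocks, per the extend-filesystems output
        const oldBlocks, newBlocks = 553472, 1864699

        toGiB := func(blocks int64) float64 {
            return float64(blocks) * blockSize / (1 << 30)
        }
        fmt.Printf("before: %.2f GiB, after: %.2f GiB\n",
            toGiB(oldBlocks), toGiB(newBlocks)) // before: 2.11 GiB, after: 7.11 GiB
    }
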
Mar 17 18:16:20.799689 update_engine[1305]: I0317 18:16:20.799641 1305 update_check_scheduler.cc:74] Next update check in 4m30s Mar 17 18:16:20.802757 systemd[1]: Started locksmithd.service. Mar 17 18:16:20.805088 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:16:20.828086 extend-filesystems[1350]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:16:20.828086 extend-filesystems[1350]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:16:20.828086 extend-filesystems[1350]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:16:20.834497 extend-filesystems[1295]: Resized filesystem in /dev/vda9 Mar 17 18:16:20.829946 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:16:20.830233 systemd[1]: Finished extend-filesystems.service. Mar 17 18:16:20.862341 env[1317]: time="2025-03-17T18:16:20.861660840Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:16:20.882816 env[1317]: time="2025-03-17T18:16:20.882760880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:16:20.882973 env[1317]: time="2025-03-17T18:16:20.882944360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884409 env[1317]: time="2025-03-17T18:16:20.884368760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884409 env[1317]: time="2025-03-17T18:16:20.884403320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884754 env[1317]: time="2025-03-17T18:16:20.884677120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884754 env[1317]: time="2025-03-17T18:16:20.884700920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884754 env[1317]: time="2025-03-17T18:16:20.884714760Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:16:20.884754 env[1317]: time="2025-03-17T18:16:20.884725680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.884864 env[1317]: time="2025-03-17T18:16:20.884797880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.885064 env[1317]: time="2025-03-17T18:16:20.884993760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:16:20.885322 env[1317]: time="2025-03-17T18:16:20.885174040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:16:20.885322 env[1317]: time="2025-03-17T18:16:20.885197720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:16:20.885322 env[1317]: time="2025-03-17T18:16:20.885254080Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:16:20.885322 env[1317]: time="2025-03-17T18:16:20.885265720Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890588360Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890625680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890639920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890670200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890685040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890700240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.890714920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891095480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891118240Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891135360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891150520Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891163560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891277880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:16:20.893817 env[1317]: time="2025-03-17T18:16:20.891347720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891638520Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891662640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891676840Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891925640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891939560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891951760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891962760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891974280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891986520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.891998680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.892009680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.892024320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.892167800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.892200120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.902123 env[1317]: time="2025-03-17T18:16:20.892212720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.894105 systemd[1]: Started containerd.service. Mar 17 18:16:20.903704 env[1317]: time="2025-03-17T18:16:20.892225640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:16:20.903704 env[1317]: time="2025-03-17T18:16:20.892240080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:16:20.903704 env[1317]: time="2025-03-17T18:16:20.892251360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:16:20.903704 env[1317]: time="2025-03-17T18:16:20.892268360Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:16:20.903704 env[1317]: time="2025-03-17T18:16:20.892301360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:16:20.894339 systemd-networkd[1099]: eth0: Gained IPv6LL Mar 17 18:16:20.896703 systemd[1]: Finished systemd-networkd-wait-online.service. 
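
containerd's snapshotter probing above (aufs, btrfs, zfs skipped; overlayfs kept) reduces to filesystem checks like the one sketched below: statfs the plugin root and compare the reported magic number. This is an illustrative reimplementation, not containerd's exact code.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/sys/unix"
    )

    // isBtrfs reports whether path lives on a btrfs filesystem, the same
    // precondition the btrfs snapshotter verified before declining to load.
    func isBtrfs(path string) (bool, error) {
        var st unix.Statfs_t
        if err := unix.Statfs(path, &st); err != nil {
            return false, err
        }
        return st.Type == unix.BTRFS_SUPER_MAGIC, nil
    }

    func main() {
        ok, err := isBtrfs("/var/lib/containerd")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("btrfs:", ok) // false here: the log shows ext4
    }
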
Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.892488360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.892555600Z" level=info msg="Connect containerd service" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.892615800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.893333600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.893871120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.893920760Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.893970960Z" level=info msg="containerd successfully booted in 0.033221s" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.894961160Z" level=info msg="Start subscribing containerd event" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.895133200Z" level=info msg="Start recovering state" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.895212800Z" level=info msg="Start event monitor" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.895236200Z" level=info msg="Start snapshots syncer" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.895255080Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:16:20.904704 env[1317]: time="2025-03-17T18:16:20.895264280Z" level=info msg="Start streaming server" Mar 17 18:16:20.898063 systemd[1]: Reached target network-online.target. Mar 17 18:16:20.900890 systemd[1]: Starting kubelet.service... Mar 17 18:16:20.936978 locksmithd[1354]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:16:21.082937 tar[1314]: linux-arm64/LICENSE Mar 17 18:16:21.082937 tar[1314]: linux-arm64/README.md Mar 17 18:16:21.087510 systemd[1]: Finished prepare-helm.service. Mar 17 18:16:21.470009 systemd[1]: Started kubelet.service. Mar 17 18:16:21.981814 kubelet[1379]: E0317 18:16:21.981759 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:16:21.983731 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:16:21.983889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:16:24.235091 sshd_keygen[1322]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:16:24.253464 systemd[1]: Finished sshd-keygen.service. Mar 17 18:16:24.255821 systemd[1]: Starting issuegen.service... Mar 17 18:16:24.260384 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:16:24.260589 systemd[1]: Finished issuegen.service. Mar 17 18:16:24.262735 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:16:24.268193 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:16:24.270552 systemd[1]: Started getty@tty1.service. Mar 17 18:16:24.272574 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:16:24.273639 systemd[1]: Reached target getty.target. Mar 17 18:16:24.274498 systemd[1]: Reached target multi-user.target. Mar 17 18:16:24.276624 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:16:24.282766 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:16:24.282986 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:16:24.284134 systemd[1]: Startup finished in 5.218s (kernel) + 7.098s (userspace) = 12.317s. Mar 17 18:16:25.218475 systemd[1]: Created slice system-sshd.slice. Mar 17 18:16:25.219688 systemd[1]: Started sshd@0-10.0.0.58:22-10.0.0.1:48296.service. 
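
The kubelet exit above is expected at this stage of boot: /var/lib/kubelet/config.yaml does not exist until kubeadm init or kubeadm join writes it, so systemd keeps restarting the unit until then. A minimal Go sketch of the same pre-flight condition:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml"
        if _, err := os.Stat(path); err != nil {
            // Mirrors the failure mode in the log: the file is simply not there yet.
            fmt.Fprintf(os.Stderr, "kubelet config missing: %v (run kubeadm init/join first)\n", err)
            os.Exit(1)
        }
        fmt.Println("kubelet config present")
    }
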
Mar 17 18:16:25.266857 sshd[1407]: Accepted publickey for core from 10.0.0.1 port 48296 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:16:25.268964 sshd[1407]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.278862 systemd-logind[1302]: New session 1 of user core. Mar 17 18:16:25.279202 systemd[1]: Created slice user-500.slice. Mar 17 18:16:25.280175 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:16:25.288570 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:16:25.289821 systemd[1]: Starting user@500.service... Mar 17 18:16:25.296020 (systemd)[1412]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.354032 systemd[1412]: Queued start job for default target default.target. Mar 17 18:16:25.354272 systemd[1412]: Reached target paths.target. Mar 17 18:16:25.354290 systemd[1412]: Reached target sockets.target. Mar 17 18:16:25.354302 systemd[1412]: Reached target timers.target. Mar 17 18:16:25.354324 systemd[1412]: Reached target basic.target. Mar 17 18:16:25.354368 systemd[1412]: Reached target default.target. Mar 17 18:16:25.354391 systemd[1412]: Startup finished in 53ms. Mar 17 18:16:25.354471 systemd[1]: Started user@500.service. Mar 17 18:16:25.355376 systemd[1]: Started session-1.scope. Mar 17 18:16:25.408000 systemd[1]: Started sshd@1-10.0.0.58:22-10.0.0.1:48310.service. Mar 17 18:16:25.450642 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 48310 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:16:25.451769 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.455544 systemd-logind[1302]: New session 2 of user core. Mar 17 18:16:25.455925 systemd[1]: Started session-2.scope. Mar 17 18:16:25.510055 sshd[1421]: pam_unix(sshd:session): session closed for user core Mar 17 18:16:25.512301 systemd[1]: Started sshd@2-10.0.0.58:22-10.0.0.1:48312.service. Mar 17 18:16:25.513154 systemd-logind[1302]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:16:25.513348 systemd[1]: sshd@1-10.0.0.58:22-10.0.0.1:48310.service: Deactivated successfully. Mar 17 18:16:25.514070 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:16:25.514436 systemd-logind[1302]: Removed session 2. Mar 17 18:16:25.551290 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 48312 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:16:25.552348 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.555543 systemd-logind[1302]: New session 3 of user core. Mar 17 18:16:25.557442 systemd[1]: Started session-3.scope. Mar 17 18:16:25.607054 sshd[1426]: pam_unix(sshd:session): session closed for user core Mar 17 18:16:25.609885 systemd[1]: sshd@2-10.0.0.58:22-10.0.0.1:48312.service: Deactivated successfully. Mar 17 18:16:25.611073 systemd-logind[1302]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:16:25.612704 systemd[1]: Started sshd@3-10.0.0.58:22-10.0.0.1:48314.service. Mar 17 18:16:25.613249 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:16:25.614418 systemd-logind[1302]: Removed session 3. 
Mar 17 18:16:25.651515 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:16:25.652576 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.655781 systemd-logind[1302]: New session 4 of user core. Mar 17 18:16:25.657782 systemd[1]: Started session-4.scope. Mar 17 18:16:25.711989 sshd[1435]: pam_unix(sshd:session): session closed for user core Mar 17 18:16:25.714389 systemd[1]: Started sshd@4-10.0.0.58:22-10.0.0.1:48330.service. Mar 17 18:16:25.714831 systemd[1]: sshd@3-10.0.0.58:22-10.0.0.1:48314.service: Deactivated successfully. Mar 17 18:16:25.716121 systemd-logind[1302]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:16:25.716141 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:16:25.717411 systemd-logind[1302]: Removed session 4. Mar 17 18:16:25.755438 sshd[1441]: Accepted publickey for core from 10.0.0.1 port 48330 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:16:25.756516 sshd[1441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:16:25.759611 systemd-logind[1302]: New session 5 of user core. Mar 17 18:16:25.760263 systemd[1]: Started session-5.scope. Mar 17 18:16:25.822347 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:16:25.822566 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:16:25.873274 systemd[1]: Starting docker.service... Mar 17 18:16:25.954947 env[1458]: time="2025-03-17T18:16:25.954892621Z" level=info msg="Starting up" Mar 17 18:16:25.956333 env[1458]: time="2025-03-17T18:16:25.956297391Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:16:25.956333 env[1458]: time="2025-03-17T18:16:25.956324742Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:16:25.956394 env[1458]: time="2025-03-17T18:16:25.956342720Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:16:25.956394 env[1458]: time="2025-03-17T18:16:25.956352941Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:16:25.958083 env[1458]: time="2025-03-17T18:16:25.958034334Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:16:25.958117 env[1458]: time="2025-03-17T18:16:25.958067220Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:16:25.958139 env[1458]: time="2025-03-17T18:16:25.958119660Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:16:25.958139 env[1458]: time="2025-03-17T18:16:25.958129680Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:16:26.143507 env[1458]: time="2025-03-17T18:16:26.143034313Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:16:26.143507 env[1458]: time="2025-03-17T18:16:26.143065988Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:16:26.143507 env[1458]: time="2025-03-17T18:16:26.143213550Z" level=info msg="Loading containers: start." 
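
dockerd's gRPC setup above dials its managed containerd over a unix socket (unix:///var/run/docker/libcontainerd/docker-containerd.sock). Reachability of such a socket can be probed directly; a sketch using the path from the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path as logged by dockerd's ccResolverWrapper above.
        const sock = "/var/run/docker/libcontainerd/docker-containerd.sock"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Println("containerd socket not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("containerd socket reachable")
    }
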
Mar 17 18:16:26.273133 kernel: Initializing XFRM netlink socket Mar 17 18:16:26.295889 env[1458]: time="2025-03-17T18:16:26.295838317Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:16:26.345560 systemd-networkd[1099]: docker0: Link UP Mar 17 18:16:26.364216 env[1458]: time="2025-03-17T18:16:26.364179937Z" level=info msg="Loading containers: done." Mar 17 18:16:26.380258 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3072359676-merged.mount: Deactivated successfully. Mar 17 18:16:26.383635 env[1458]: time="2025-03-17T18:16:26.383602546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:16:26.383906 env[1458]: time="2025-03-17T18:16:26.383883224Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:16:26.384072 env[1458]: time="2025-03-17T18:16:26.384055884Z" level=info msg="Daemon has completed initialization" Mar 17 18:16:26.397528 systemd[1]: Started docker.service. Mar 17 18:16:26.403383 env[1458]: time="2025-03-17T18:16:26.403267136Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:16:27.167863 env[1317]: time="2025-03-17T18:16:27.167820046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:16:27.763283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049283082.mount: Deactivated successfully. Mar 17 18:16:29.116154 env[1317]: time="2025-03-17T18:16:29.116105096Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:29.117649 env[1317]: time="2025-03-17T18:16:29.117619133Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:29.120809 env[1317]: time="2025-03-17T18:16:29.120781429Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:29.123168 env[1317]: time="2025-03-17T18:16:29.123136092Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:29.123873 env[1317]: time="2025-03-17T18:16:29.123839397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:16:29.133179 env[1317]: time="2025-03-17T18:16:29.133122136Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:16:30.729487 env[1317]: time="2025-03-17T18:16:30.729445207Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:30.731651 env[1317]: time="2025-03-17T18:16:30.731624545Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:30.733655 env[1317]: time="2025-03-17T18:16:30.733619380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:30.735553 env[1317]: time="2025-03-17T18:16:30.735519290Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:30.736333 env[1317]: time="2025-03-17T18:16:30.736302734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:16:30.746061 env[1317]: time="2025-03-17T18:16:30.746028077Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:16:31.900403 env[1317]: time="2025-03-17T18:16:31.900351423Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:31.902120 env[1317]: time="2025-03-17T18:16:31.902043434Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:31.905701 env[1317]: time="2025-03-17T18:16:31.905667852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:31.907538 env[1317]: time="2025-03-17T18:16:31.907515038Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:31.909167 env[1317]: time="2025-03-17T18:16:31.909136694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:16:31.922040 env[1317]: time="2025-03-17T18:16:31.921910694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:16:32.234932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:16:32.235133 systemd[1]: Stopped kubelet.service. Mar 17 18:16:32.236598 systemd[1]: Starting kubelet.service... Mar 17 18:16:32.351213 systemd[1]: Started kubelet.service. Mar 17 18:16:32.442475 kubelet[1623]: E0317 18:16:32.442424 1623 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:16:32.445004 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:16:32.445162 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:16:33.224683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1510653693.mount: Deactivated successfully. Mar 17 18:16:33.636430 env[1317]: time="2025-03-17T18:16:33.636207991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:33.638509 env[1317]: time="2025-03-17T18:16:33.638478618Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:33.640014 env[1317]: time="2025-03-17T18:16:33.639981372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:33.643480 env[1317]: time="2025-03-17T18:16:33.643445854Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:33.644107 env[1317]: time="2025-03-17T18:16:33.643780120Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:16:33.653105 env[1317]: time="2025-03-17T18:16:33.653049911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:16:34.266122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483616967.mount: Deactivated successfully. Mar 17 18:16:35.193605 env[1317]: time="2025-03-17T18:16:35.193554075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.195135 env[1317]: time="2025-03-17T18:16:35.195104228Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.196849 env[1317]: time="2025-03-17T18:16:35.196811796Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.199329 env[1317]: time="2025-03-17T18:16:35.199289507Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.199943 env[1317]: time="2025-03-17T18:16:35.199914589Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:16:35.209242 env[1317]: time="2025-03-17T18:16:35.209197904Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:16:35.660600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2962121196.mount: Deactivated successfully. 
Mar 17 18:16:35.667214 env[1317]: time="2025-03-17T18:16:35.667110229Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.669008 env[1317]: time="2025-03-17T18:16:35.668973004Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.670936 env[1317]: time="2025-03-17T18:16:35.670883505Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.672835 env[1317]: time="2025-03-17T18:16:35.672800061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:35.673424 env[1317]: time="2025-03-17T18:16:35.673389009Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:16:35.682647 env[1317]: time="2025-03-17T18:16:35.682601137Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:16:36.244932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825170241.mount: Deactivated successfully. Mar 17 18:16:38.727229 env[1317]: time="2025-03-17T18:16:38.727179505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:38.728647 env[1317]: time="2025-03-17T18:16:38.728615595Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:38.730472 env[1317]: time="2025-03-17T18:16:38.730443375Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:38.734750 env[1317]: time="2025-03-17T18:16:38.734705445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:38.735677 env[1317]: time="2025-03-17T18:16:38.735627189Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 18:16:42.696103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:16:42.696281 systemd[1]: Stopped kubelet.service. Mar 17 18:16:42.697789 systemd[1]: Starting kubelet.service... Mar 17 18:16:42.846214 systemd[1]: Started kubelet.service. 
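
Each PullImage line in this stretch resolves a mutable tag (for example registry.k8s.io/pause:3.9) to an immutable sha256 image reference. That reference is a digest over the image's canonical bytes, so pulled content can be re-verified offline. A toy Go illustration of the content-addressing step, using stand-in bytes rather than a real manifest:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    func main() {
        // Stand-in bytes; a real check would hash the image manifest/layers.
        blob := []byte("example manifest bytes")
        sum := sha256.Sum256(blob)
        fmt.Printf("sha256:%x\n", sum)
    }
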
Mar 17 18:16:42.885570 kubelet[1732]: E0317 18:16:42.885528 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:16:42.887121 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:16:42.887260 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:16:42.923507 systemd[1]: Stopped kubelet.service. Mar 17 18:16:42.925923 systemd[1]: Starting kubelet.service... Mar 17 18:16:42.942976 systemd[1]: Reloading. Mar 17 18:16:42.988679 /usr/lib/systemd/system-generators/torcx-generator[1771]: time="2025-03-17T18:16:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:16:42.989521 /usr/lib/systemd/system-generators/torcx-generator[1771]: time="2025-03-17T18:16:42Z" level=info msg="torcx already run" Mar 17 18:16:43.291096 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:16:43.291115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:16:43.306867 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:16:43.371118 systemd[1]: Started kubelet.service. Mar 17 18:16:43.372548 systemd[1]: Stopping kubelet.service... Mar 17 18:16:43.373048 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:16:43.373313 systemd[1]: Stopped kubelet.service. Mar 17 18:16:43.374980 systemd[1]: Starting kubelet.service... Mar 17 18:16:43.457540 systemd[1]: Started kubelet.service. Mar 17 18:16:43.500718 kubelet[1827]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:16:43.500718 kubelet[1827]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:16:43.500718 kubelet[1827]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:16:43.501626 kubelet[1827]: I0317 18:16:43.501565 1827 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:16:44.872539 kubelet[1827]: I0317 18:16:44.872481 1827 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:16:44.872539 kubelet[1827]: I0317 18:16:44.872512 1827 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:16:44.872902 kubelet[1827]: I0317 18:16:44.872715 1827 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:16:44.898676 kubelet[1827]: E0317 18:16:44.898640 1827 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.58:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.898822 kubelet[1827]: I0317 18:16:44.898788 1827 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:16:44.910733 kubelet[1827]: I0317 18:16:44.910701 1827 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:16:44.911454 kubelet[1827]: I0317 18:16:44.911421 1827 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:16:44.911898 kubelet[1827]: I0317 18:16:44.911554 1827 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:16:44.912146 kubelet[1827]: I0317 18:16:44.912127 1827 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:16:44.912219 kubelet[1827]: I0317 18:16:44.912209 1827 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:16:44.912538 kubelet[1827]: I0317 18:16:44.912523 1827 state_mem.go:36] "Initialized new in-memory state store" Mar 17 
18:16:44.914171 kubelet[1827]: I0317 18:16:44.914146 1827 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:16:44.914283 kubelet[1827]: I0317 18:16:44.914260 1827 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:16:44.914410 kubelet[1827]: W0317 18:16:44.914332 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.914410 kubelet[1827]: E0317 18:16:44.914389 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.914589 kubelet[1827]: I0317 18:16:44.914576 1827 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:16:44.914658 kubelet[1827]: I0317 18:16:44.914647 1827 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:16:44.915344 kubelet[1827]: W0317 18:16:44.915258 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.915344 kubelet[1827]: E0317 18:16:44.915328 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.918377 kubelet[1827]: I0317 18:16:44.918340 1827 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:16:44.918890 kubelet[1827]: I0317 18:16:44.918870 1827 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:16:44.919045 kubelet[1827]: W0317 18:16:44.919031 1827 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:16:44.920160 kubelet[1827]: I0317 18:16:44.920139 1827 server.go:1264] "Started kubelet" Mar 17 18:16:44.927444 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
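
Every reflector error above reduces to the same condition: nothing is listening on the API server endpoint 10.0.0.58:6443 yet, because the static kube-apiserver pod has not started. A direct probe of that condition in Go:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.0.58:6443", 2*time.Second)
        if err != nil {
            // Expected while the control plane is still coming up:
            // "connect: connection refused", as in the kubelet log above.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("apiserver reachable")
    }
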
Mar 17 18:16:44.927730 kubelet[1827]: I0317 18:16:44.927691 1827 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:16:44.930519 kubelet[1827]: E0317 18:16:44.928367 1827 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.58:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.58:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da9deb95f5523 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:16:44.920116515 +0000 UTC m=+1.457824313,LastTimestamp:2025-03-17 18:16:44.920116515 +0000 UTC m=+1.457824313,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:16:44.931590 kubelet[1827]: I0317 18:16:44.931545 1827 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:16:44.932696 kubelet[1827]: I0317 18:16:44.932635 1827 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:16:44.933051 kubelet[1827]: I0317 18:16:44.933031 1827 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:16:44.933667 kubelet[1827]: I0317 18:16:44.933642 1827 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:16:44.934873 kubelet[1827]: E0317 18:16:44.934847 1827 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:16:44.935248 kubelet[1827]: I0317 18:16:44.935047 1827 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:16:44.935248 kubelet[1827]: I0317 18:16:44.935184 1827 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:16:44.935668 kubelet[1827]: E0317 18:16:44.935637 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="200ms" Mar 17 18:16:44.935815 kubelet[1827]: I0317 18:16:44.935716 1827 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:16:44.936231 kubelet[1827]: W0317 18:16:44.936171 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.936231 kubelet[1827]: E0317 18:16:44.936232 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.936353 kubelet[1827]: I0317 18:16:44.936249 1827 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:16:44.937513 kubelet[1827]: I0317 18:16:44.937490 1827 factory.go:221] Registration of the containerd container factory successfully Mar 
17 18:16:44.937513 kubelet[1827]: I0317 18:16:44.937512 1827 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:16:44.948568 kubelet[1827]: I0317 18:16:44.948482 1827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:16:44.951504 kubelet[1827]: I0317 18:16:44.951461 1827 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:16:44.951649 kubelet[1827]: I0317 18:16:44.951633 1827 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:16:44.951686 kubelet[1827]: I0317 18:16:44.951659 1827 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:16:44.951743 kubelet[1827]: E0317 18:16:44.951720 1827 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:16:44.952678 kubelet[1827]: W0317 18:16:44.952591 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.952678 kubelet[1827]: E0317 18:16:44.952654 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:44.956515 kubelet[1827]: I0317 18:16:44.956494 1827 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:16:44.956515 kubelet[1827]: I0317 18:16:44.956511 1827 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:16:44.956639 kubelet[1827]: I0317 18:16:44.956531 1827 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:16:44.958847 kubelet[1827]: I0317 18:16:44.958819 1827 policy_none.go:49] "None policy: Start" Mar 17 18:16:44.959614 kubelet[1827]: I0317 18:16:44.959579 1827 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:16:44.959614 kubelet[1827]: I0317 18:16:44.959616 1827 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:16:44.964524 kubelet[1827]: I0317 18:16:44.964474 1827 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:16:44.964691 kubelet[1827]: I0317 18:16:44.964648 1827 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:16:44.964782 kubelet[1827]: I0317 18:16:44.964763 1827 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:16:44.966840 kubelet[1827]: E0317 18:16:44.966802 1827 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:16:45.036229 kubelet[1827]: I0317 18:16:45.036186 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:45.040165 kubelet[1827]: E0317 18:16:45.040129 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Mar 17 18:16:45.052296 kubelet[1827]: I0317 18:16:45.052238 1827 topology_manager.go:215] "Topology Admit Handler" podUID="448d735fe313d7b2a216b8643cd7603d" podNamespace="kube-system" podName="kube-apiserver-localhost" 
Mar 17 18:16:45.053426 kubelet[1827]: I0317 18:16:45.053394 1827 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:16:45.054300 kubelet[1827]: I0317 18:16:45.054272 1827 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:16:45.136315 kubelet[1827]: E0317 18:16:45.136208 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="400ms" Mar 17 18:16:45.137309 kubelet[1827]: I0317 18:16:45.137274 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:45.137349 kubelet[1827]: I0317 18:16:45.137310 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:45.137349 kubelet[1827]: I0317 18:16:45.137339 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:16:45.137392 kubelet[1827]: I0317 18:16:45.137360 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:45.137392 kubelet[1827]: I0317 18:16:45.137378 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:45.137459 kubelet[1827]: I0317 18:16:45.137393 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:45.137459 kubelet[1827]: I0317 18:16:45.137416 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" 
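Note the lease controller's "will retry" interval across these entries: 200ms above, 400ms here, then 800ms and 1.6s further down, a doubling exponential backoff. A toy model of that schedule follows; the cap is an assumption, not read from this log.

```go
// lease_backoff_sketch.go: toy model of the doubling retry interval seen in
// the "Failed to ensure lease exists, will retry" entries (200ms -> 400ms ->
// 800ms -> 1.6s). The starting interval matches the log; the cap is assumed.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	maxInterval := 7 * time.Second // assumed cap, not taken from the log
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```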
Mar 17 18:16:45.137459 kubelet[1827]: I0317 18:16:45.137439 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:45.137525 kubelet[1827]: I0317 18:16:45.137456 1827 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:45.241503 kubelet[1827]: I0317 18:16:45.241469 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:45.243550 kubelet[1827]: E0317 18:16:45.241947 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Mar 17 18:16:45.358861 kubelet[1827]: E0317 18:16:45.358761 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:45.360544 kubelet[1827]: E0317 18:16:45.360442 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:45.360796 kubelet[1827]: E0317 18:16:45.360774 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:45.365400 env[1317]: time="2025-03-17T18:16:45.364755585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:448d735fe313d7b2a216b8643cd7603d,Namespace:kube-system,Attempt:0,}" Mar 17 18:16:45.365400 env[1317]: time="2025-03-17T18:16:45.365244684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:16:45.372085 env[1317]: time="2025-03-17T18:16:45.366496711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 18:16:45.537368 kubelet[1827]: E0317 18:16:45.537246 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="800ms" Mar 17 18:16:45.643124 kubelet[1827]: I0317 18:16:45.643086 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:45.643432 kubelet[1827]: E0317 18:16:45.643407 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Mar 17 18:16:45.909407 kubelet[1827]: W0317 18:16:45.909240 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial 
tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:45.909407 kubelet[1827]: E0317 18:16:45.909313 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.58:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:45.942540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount543016268.mount: Deactivated successfully. Mar 17 18:16:45.946811 env[1317]: time="2025-03-17T18:16:45.946170791Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.952362 env[1317]: time="2025-03-17T18:16:45.952300757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.956139 env[1317]: time="2025-03-17T18:16:45.956040307Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.958306 env[1317]: time="2025-03-17T18:16:45.957730198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.960698 env[1317]: time="2025-03-17T18:16:45.959794627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.960698 env[1317]: time="2025-03-17T18:16:45.960619439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.963207 env[1317]: time="2025-03-17T18:16:45.963163001Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.965567 env[1317]: time="2025-03-17T18:16:45.965528879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.966710 env[1317]: time="2025-03-17T18:16:45.966677915Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.969817 env[1317]: time="2025-03-17T18:16:45.969776101Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.973157 env[1317]: time="2025-03-17T18:16:45.972057321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.974914 env[1317]: time="2025-03-17T18:16:45.974866147Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:16:45.999321 env[1317]: time="2025-03-17T18:16:45.998848279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:16:45.999321 env[1317]: time="2025-03-17T18:16:45.998888306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:16:45.999321 env[1317]: time="2025-03-17T18:16:45.998899434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:16:45.999987 env[1317]: time="2025-03-17T18:16:45.999886678Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/317a38e05c54339245be1947530dd18953772fbcb34d21b57c14c197e96bcd12 pid=1868 runtime=io.containerd.runc.v2 Mar 17 18:16:46.019993 kubelet[1827]: W0317 18:16:46.019907 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.019993 kubelet[1827]: E0317 18:16:46.019981 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.58:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.025789764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.025852642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.025869332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.026149342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/527857945512379a5c7ebfa38e3bd4d9d23f8b69dfec54c48797166cef4902eb pid=1905 runtime=io.containerd.runc.v2 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.027157953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.027204661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.027216028Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:16:46.030096 env[1317]: time="2025-03-17T18:16:46.027341945Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ff3525ab2ad3ef3c3fe584f47b4d64a22a05b7f02f693e886823420f2409952 pid=1904 runtime=io.containerd.runc.v2 Mar 17 18:16:46.063109 kubelet[1827]: W0317 18:16:46.054224 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.063109 kubelet[1827]: E0317 18:16:46.054297 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.58:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.088017 env[1317]: time="2025-03-17T18:16:46.087910817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:448d735fe313d7b2a216b8643cd7603d,Namespace:kube-system,Attempt:0,} returns sandbox id \"317a38e05c54339245be1947530dd18953772fbcb34d21b57c14c197e96bcd12\"" Mar 17 18:16:46.089605 kubelet[1827]: E0317 18:16:46.089556 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:46.093696 env[1317]: time="2025-03-17T18:16:46.093585457Z" level=info msg="CreateContainer within sandbox \"317a38e05c54339245be1947530dd18953772fbcb34d21b57c14c197e96bcd12\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:16:46.106603 env[1317]: time="2025-03-17T18:16:46.106555078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"527857945512379a5c7ebfa38e3bd4d9d23f8b69dfec54c48797166cef4902eb\"" Mar 17 18:16:46.107618 kubelet[1827]: E0317 18:16:46.107593 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:46.109511 env[1317]: time="2025-03-17T18:16:46.109471366Z" level=info msg="CreateContainer within sandbox \"527857945512379a5c7ebfa38e3bd4d9d23f8b69dfec54c48797166cef4902eb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:16:46.109928 env[1317]: time="2025-03-17T18:16:46.109881575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ff3525ab2ad3ef3c3fe584f47b4d64a22a05b7f02f693e886823420f2409952\"" Mar 17 18:16:46.110676 kubelet[1827]: E0317 18:16:46.110480 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:46.113818 env[1317]: time="2025-03-17T18:16:46.113715499Z" level=info msg="CreateContainer within sandbox \"3ff3525ab2ad3ef3c3fe584f47b4d64a22a05b7f02f693e886823420f2409952\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:16:46.122219 env[1317]: time="2025-03-17T18:16:46.122169383Z" level=info msg="CreateContainer within sandbox 
\"317a38e05c54339245be1947530dd18953772fbcb34d21b57c14c197e96bcd12\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c6e45ed5e0b32c32bf849fe878c5200a615f8dea4529b4d0769a82f72c05160\"" Mar 17 18:16:46.122968 env[1317]: time="2025-03-17T18:16:46.122814734Z" level=info msg="StartContainer for \"4c6e45ed5e0b32c32bf849fe878c5200a615f8dea4529b4d0769a82f72c05160\"" Mar 17 18:16:46.126207 env[1317]: time="2025-03-17T18:16:46.126166125Z" level=info msg="CreateContainer within sandbox \"527857945512379a5c7ebfa38e3bd4d9d23f8b69dfec54c48797166cef4902eb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75d5316f0bac0a69d725e05704540c941a81e91bb7d8da5eed0254bc3e9ede9b\"" Mar 17 18:16:46.126902 env[1317]: time="2025-03-17T18:16:46.126874034Z" level=info msg="StartContainer for \"75d5316f0bac0a69d725e05704540c941a81e91bb7d8da5eed0254bc3e9ede9b\"" Mar 17 18:16:46.132180 env[1317]: time="2025-03-17T18:16:46.132130861Z" level=info msg="CreateContainer within sandbox \"3ff3525ab2ad3ef3c3fe584f47b4d64a22a05b7f02f693e886823420f2409952\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a1deab6380f27e8eab603b578ddfc3c3af65aa48e07cac1229266440778eff4d\"" Mar 17 18:16:46.132656 env[1317]: time="2025-03-17T18:16:46.132624760Z" level=info msg="StartContainer for \"a1deab6380f27e8eab603b578ddfc3c3af65aa48e07cac1229266440778eff4d\"" Mar 17 18:16:46.224365 env[1317]: time="2025-03-17T18:16:46.224209152Z" level=info msg="StartContainer for \"a1deab6380f27e8eab603b578ddfc3c3af65aa48e07cac1229266440778eff4d\" returns successfully" Mar 17 18:16:46.269869 env[1317]: time="2025-03-17T18:16:46.269819879Z" level=info msg="StartContainer for \"4c6e45ed5e0b32c32bf849fe878c5200a615f8dea4529b4d0769a82f72c05160\" returns successfully" Mar 17 18:16:46.282193 kubelet[1827]: W0317 18:16:46.282094 1827 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.282193 kubelet[1827]: E0317 18:16:46.282159 1827 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.58:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.58:6443: connect: connection refused Mar 17 18:16:46.283487 env[1317]: time="2025-03-17T18:16:46.283447659Z" level=info msg="StartContainer for \"75d5316f0bac0a69d725e05704540c941a81e91bb7d8da5eed0254bc3e9ede9b\" returns successfully" Mar 17 18:16:46.338446 kubelet[1827]: E0317 18:16:46.338388 1827 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.58:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.58:6443: connect: connection refused" interval="1.6s" Mar 17 18:16:46.445789 kubelet[1827]: I0317 18:16:46.445218 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:46.445789 kubelet[1827]: E0317 18:16:46.445748 1827 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.58:6443/api/v1/nodes\": dial tcp 10.0.0.58:6443: connect: connection refused" node="localhost" Mar 17 18:16:46.958618 kubelet[1827]: E0317 18:16:46.958588 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:46.961307 kubelet[1827]: E0317 18:16:46.961285 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:46.963662 kubelet[1827]: E0317 18:16:46.963639 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:47.965477 kubelet[1827]: E0317 18:16:47.965446 1827 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:48.049935 kubelet[1827]: I0317 18:16:48.049898 1827 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:48.069216 kubelet[1827]: E0317 18:16:48.069176 1827 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 18:16:48.257685 kubelet[1827]: I0317 18:16:48.257582 1827 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:16:48.916762 kubelet[1827]: I0317 18:16:48.916721 1827 apiserver.go:52] "Watching apiserver" Mar 17 18:16:48.936045 kubelet[1827]: I0317 18:16:48.936019 1827 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:16:49.925831 systemd[1]: Reloading. Mar 17 18:16:49.994304 /usr/lib/systemd/system-generators/torcx-generator[2129]: time="2025-03-17T18:16:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:16:49.994648 /usr/lib/systemd/system-generators/torcx-generator[2129]: time="2025-03-17T18:16:49Z" level=info msg="torcx already run" Mar 17 18:16:50.085027 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:16:50.085046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:16:50.101746 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:16:50.187460 systemd[1]: Stopping kubelet.service... Mar 17 18:16:50.208495 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:16:50.208787 systemd[1]: Stopped kubelet.service. Mar 17 18:16:50.211065 systemd[1]: Starting kubelet.service... Mar 17 18:16:50.293608 systemd[1]: Started kubelet.service. Mar 17 18:16:50.336303 kubelet[2182]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:16:50.336649 kubelet[2182]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
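The deprecation warnings logged during this restart (including the --volume-plugin-dir one just below) all point at the same remedy: move those flags into the KubeletConfiguration file named by --config. Here is a hedged sketch of what such a file could contain for this node, emitted from Go to stay self-contained; the containerd socket path is an assumption, the volume plugin dir matches the Flexvolume path the kubelet recreated earlier in this log, and field names follow kubelet.config.k8s.io/v1beta1.

```go
// kubelet_config_sketch.go: prints a plausible KubeletConfiguration covering
// the deprecated flags warned about above. Field names are from the v1beta1
// kubelet config API; concrete values are assumptions, not read from this log.
package main

import "fmt"

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag (socket assumed)
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces the deprecated --volume-plugin-dir flag; matches the Flexvolume
# directory the kubelet recreated earlier in this log
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
# static control-plane manifests, as logged by "Adding static pod path"
staticPodPath: /etc/kubernetes/manifests
`

func main() { fmt.Print(kubeletConfig) }
```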
Mar 17 18:16:50.336701 kubelet[2182]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:16:50.336836 kubelet[2182]: I0317 18:16:50.336807 2182 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:16:50.342871 kubelet[2182]: I0317 18:16:50.342825 2182 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:16:50.342871 kubelet[2182]: I0317 18:16:50.342860 2182 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:16:50.343137 kubelet[2182]: I0317 18:16:50.343116 2182 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:16:50.344806 kubelet[2182]: I0317 18:16:50.344784 2182 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:16:50.346567 kubelet[2182]: I0317 18:16:50.346541 2182 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:16:50.354223 kubelet[2182]: I0317 18:16:50.354189 2182 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 18:16:50.354763 kubelet[2182]: I0317 18:16:50.354649 2182 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:16:50.354945 kubelet[2182]: I0317 18:16:50.354755 2182 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:16:50.355034 kubelet[2182]: I0317 18:16:50.354952 2182 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:16:50.355034 kubelet[2182]: I0317 18:16:50.354961 2182 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:16:50.355034 kubelet[2182]: I0317 18:16:50.354999 2182 
state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:16:50.355214 kubelet[2182]: I0317 18:16:50.355126 2182 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:16:50.355214 kubelet[2182]: I0317 18:16:50.355141 2182 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:16:50.355214 kubelet[2182]: I0317 18:16:50.355166 2182 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:16:50.355659 kubelet[2182]: I0317 18:16:50.355262 2182 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:16:50.357643 kubelet[2182]: I0317 18:16:50.357617 2182 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:16:50.359213 kubelet[2182]: I0317 18:16:50.359182 2182 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:16:50.380394 kubelet[2182]: I0317 18:16:50.380326 2182 server.go:1264] "Started kubelet" Mar 17 18:16:50.385205 kubelet[2182]: I0317 18:16:50.385167 2182 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:16:50.385752 kubelet[2182]: I0317 18:16:50.385678 2182 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:16:50.386024 kubelet[2182]: I0317 18:16:50.386005 2182 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:16:50.387848 kubelet[2182]: I0317 18:16:50.387286 2182 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:16:50.388820 kubelet[2182]: I0317 18:16:50.388798 2182 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:16:50.389102 kubelet[2182]: I0317 18:16:50.389084 2182 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:16:50.389306 kubelet[2182]: I0317 18:16:50.389292 2182 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:16:50.391961 kubelet[2182]: I0317 18:16:50.391929 2182 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:16:50.392549 kubelet[2182]: I0317 18:16:50.392527 2182 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:16:50.392765 kubelet[2182]: I0317 18:16:50.392740 2182 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:16:50.394584 kubelet[2182]: E0317 18:16:50.394554 2182 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:16:50.396284 kubelet[2182]: I0317 18:16:50.396264 2182 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:16:50.402581 kubelet[2182]: I0317 18:16:50.402525 2182 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:16:50.403569 kubelet[2182]: I0317 18:16:50.403541 2182 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:16:50.403569 kubelet[2182]: I0317 18:16:50.403570 2182 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:16:50.403671 kubelet[2182]: I0317 18:16:50.403591 2182 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:16:50.403671 kubelet[2182]: E0317 18:16:50.403634 2182 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:16:50.438048 kubelet[2182]: I0317 18:16:50.437958 2182 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:16:50.438254 kubelet[2182]: I0317 18:16:50.438236 2182 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:16:50.438323 kubelet[2182]: I0317 18:16:50.438313 2182 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:16:50.438526 kubelet[2182]: I0317 18:16:50.438510 2182 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:16:50.438609 kubelet[2182]: I0317 18:16:50.438583 2182 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:16:50.438674 kubelet[2182]: I0317 18:16:50.438665 2182 policy_none.go:49] "None policy: Start" Mar 17 18:16:50.439677 kubelet[2182]: I0317 18:16:50.439648 2182 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:16:50.439750 kubelet[2182]: I0317 18:16:50.439697 2182 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:16:50.439891 kubelet[2182]: I0317 18:16:50.439875 2182 state_mem.go:75] "Updated machine memory state" Mar 17 18:16:50.441065 kubelet[2182]: I0317 18:16:50.441035 2182 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:16:50.441264 kubelet[2182]: I0317 18:16:50.441219 2182 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:16:50.441355 kubelet[2182]: I0317 18:16:50.441334 2182 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:16:50.492892 kubelet[2182]: I0317 18:16:50.492865 2182 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:16:50.500639 kubelet[2182]: I0317 18:16:50.500581 2182 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 18:16:50.500786 kubelet[2182]: I0317 18:16:50.500693 2182 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:16:50.504549 kubelet[2182]: I0317 18:16:50.504504 2182 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:16:50.504673 kubelet[2182]: I0317 18:16:50.504647 2182 topology_manager.go:215] "Topology Admit Handler" podUID="448d735fe313d7b2a216b8643cd7603d" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:16:50.504718 kubelet[2182]: I0317 18:16:50.504687 2182 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:16:50.513567 kubelet[2182]: E0317 18:16:50.513514 2182 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:50.690795 kubelet[2182]: I0317 18:16:50.690692 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:50.690795 kubelet[2182]: I0317 18:16:50.690741 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:50.690795 kubelet[2182]: I0317 18:16:50.690767 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:50.690795 kubelet[2182]: I0317 18:16:50.690784 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:50.690981 kubelet[2182]: I0317 18:16:50.690813 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:50.690981 kubelet[2182]: I0317 18:16:50.690833 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:16:50.690981 kubelet[2182]: I0317 18:16:50.690848 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:16:50.690981 kubelet[2182]: I0317 18:16:50.690913 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:50.690981 kubelet[2182]: I0317 18:16:50.690931 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/448d735fe313d7b2a216b8643cd7603d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"448d735fe313d7b2a216b8643cd7603d\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:16:50.812779 kubelet[2182]: E0317 18:16:50.812737 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:50.814532 kubelet[2182]: E0317 18:16:50.814485 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:50.814532 kubelet[2182]: E0317 18:16:50.814532 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:50.922122 sudo[2216]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:16:50.922353 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:16:51.351986 sudo[2216]: pam_unix(sudo:session): session closed for user root Mar 17 18:16:51.356624 kubelet[2182]: I0317 18:16:51.356506 2182 apiserver.go:52] "Watching apiserver" Mar 17 18:16:51.389739 kubelet[2182]: I0317 18:16:51.389688 2182 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:16:51.419781 kubelet[2182]: E0317 18:16:51.418606 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:51.419781 kubelet[2182]: E0317 18:16:51.418689 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:51.422475 kubelet[2182]: E0317 18:16:51.422439 2182 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 17 18:16:51.422838 kubelet[2182]: E0317 18:16:51.422812 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:51.441518 kubelet[2182]: I0317 18:16:51.441459 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.441444821 podStartE2EDuration="1.441444821s" podCreationTimestamp="2025-03-17 18:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:16:51.440919057 +0000 UTC m=+1.141618706" watchObservedRunningTime="2025-03-17 18:16:51.441444821 +0000 UTC m=+1.142144470" Mar 17 18:16:51.461817 kubelet[2182]: I0317 18:16:51.461764 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.461746094 podStartE2EDuration="1.461746094s" podCreationTimestamp="2025-03-17 18:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:16:51.453325716 +0000 UTC m=+1.154025325" watchObservedRunningTime="2025-03-17 18:16:51.461746094 +0000 UTC m=+1.162445743" Mar 17 18:16:51.462525 kubelet[2182]: I0317 18:16:51.462484 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.46247188 podStartE2EDuration="1.46247188s" podCreationTimestamp="2025-03-17 18:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:16:51.460749824 +0000 UTC m=+1.161449473" watchObservedRunningTime="2025-03-17 18:16:51.46247188 +0000 UTC m=+1.163171529" Mar 17 18:16:52.420322 kubelet[2182]: E0317 18:16:52.420292 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:52.420768 kubelet[2182]: E0317 18:16:52.420496 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:53.227875 sudo[1446]: pam_unix(sudo:session): session closed for user root Mar 17 18:16:53.229520 sshd[1441]: pam_unix(sshd:session): session closed for user core Mar 17 18:16:53.232322 systemd[1]: sshd@4-10.0.0.58:22-10.0.0.1:48330.service: Deactivated successfully. Mar 17 18:16:53.233413 systemd-logind[1302]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:16:53.233416 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:16:53.234389 systemd-logind[1302]: Removed session 5. Mar 17 18:16:53.421941 kubelet[2182]: E0317 18:16:53.421900 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:55.993786 kubelet[2182]: E0317 18:16:55.993746 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:16:59.676354 kubelet[2182]: E0317 18:16:59.676278 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:00.433755 kubelet[2182]: E0317 18:17:00.433121 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:03.307551 kubelet[2182]: E0317 18:17:03.305710 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:03.436705 kubelet[2182]: E0317 18:17:03.436674 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:05.974012 update_engine[1305]: I0317 18:17:05.973968 1305 update_attempter.cc:509] Updating boot flags... Mar 17 18:17:06.003805 kubelet[2182]: E0317 18:17:06.003779 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:06.929896 kubelet[2182]: I0317 18:17:06.929867 2182 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:17:06.930503 env[1317]: time="2025-03-17T18:17:06.930462459Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
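The recurring dns.go "Nameserver limits exceeded" errors throughout this log mean the host's resolv.conf lists more nameservers than the resolver limit of three, so the kubelet drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A minimal standalone sketch of that capping behaviour follows; the kubelet's real logic lives in its pkg/kubelet/network/dns package.

```go
// nameserver_cap_sketch.go: illustrates the three-nameserver cap behind the
// repeated dns.go "Nameserver limits exceeded" errors above. Standalone
// sketch, not the kubelet's implementation.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver (MAXNS) limit the kubelet enforces

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the log message: extras are omitted, first three applied.
		fmt.Printf("omitting %d nameserver(s), applying: %s\n",
			len(servers)-maxNameservers,
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("applying:", strings.Join(servers, " "))
}
```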
Mar 17 18:17:06.930945 kubelet[2182]: I0317 18:17:06.930918 2182 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:17:07.999294 kubelet[2182]: I0317 18:17:07.999223 2182 topology_manager.go:215] "Topology Admit Handler" podUID="c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd" podNamespace="kube-system" podName="kube-proxy-62p46" Mar 17 18:17:08.004722 kubelet[2182]: I0317 18:17:08.004420 2182 topology_manager.go:215] "Topology Admit Handler" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" podNamespace="kube-system" podName="cilium-lbpk2" Mar 17 18:17:08.044948 kubelet[2182]: I0317 18:17:08.044903 2182 topology_manager.go:215] "Topology Admit Handler" podUID="4808c6dc-414b-4eaa-b593-4c59c6a70ee1" podNamespace="kube-system" podName="cilium-operator-599987898-bz5zf" Mar 17 18:17:08.116848 kubelet[2182]: I0317 18:17:08.116798 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-hubble-tls\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.116985 kubelet[2182]: I0317 18:17:08.116856 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgptg\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-kube-api-access-bgptg\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.116985 kubelet[2182]: I0317 18:17:08.116883 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cni-path\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.116985 kubelet[2182]: I0317 18:17:08.116909 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd-lib-modules\") pod \"kube-proxy-62p46\" (UID: \"c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd\") " pod="kube-system/kube-proxy-62p46" Mar 17 18:17:08.116985 kubelet[2182]: I0317 18:17:08.116926 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpg8p\" (UniqueName: \"kubernetes.io/projected/c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd-kube-api-access-mpg8p\") pod \"kube-proxy-62p46\" (UID: \"c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd\") " pod="kube-system/kube-proxy-62p46" Mar 17 18:17:08.116985 kubelet[2182]: I0317 18:17:08.116942 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-net\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.116957 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-kernel\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.116981 2182 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd-xtables-lock\") pod \"kube-proxy-62p46\" (UID: \"c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd\") " pod="kube-system/kube-proxy-62p46" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.116998 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-hostproc\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.117012 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd-kube-proxy\") pod \"kube-proxy-62p46\" (UID: \"c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd\") " pod="kube-system/kube-proxy-62p46" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.117029 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-lib-modules\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117153 kubelet[2182]: I0317 18:17:08.117051 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-config-path\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117082 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-run\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117105 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-etc-cni-netd\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117122 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-bpf-maps\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117137 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-cgroup\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117161 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-xtables-lock\") pod 
\"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.117281 kubelet[2182]: I0317 18:17:08.117180 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4870db43-19b5-4216-aeea-3207490aa9e9-clustermesh-secrets\") pod \"cilium-lbpk2\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " pod="kube-system/cilium-lbpk2" Mar 17 18:17:08.218301 kubelet[2182]: I0317 18:17:08.218266 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-cilium-config-path\") pod \"cilium-operator-599987898-bz5zf\" (UID: \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\") " pod="kube-system/cilium-operator-599987898-bz5zf" Mar 17 18:17:08.218492 kubelet[2182]: I0317 18:17:08.218475 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjnn6\" (UniqueName: \"kubernetes.io/projected/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-kube-api-access-hjnn6\") pod \"cilium-operator-599987898-bz5zf\" (UID: \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\") " pod="kube-system/cilium-operator-599987898-bz5zf" Mar 17 18:17:08.301688 kubelet[2182]: E0317 18:17:08.301547 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.302102 env[1317]: time="2025-03-17T18:17:08.302035609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62p46,Uid:c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd,Namespace:kube-system,Attempt:0,}" Mar 17 18:17:08.308043 kubelet[2182]: E0317 18:17:08.308020 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.309464 env[1317]: time="2025-03-17T18:17:08.308589589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lbpk2,Uid:4870db43-19b5-4216-aeea-3207490aa9e9,Namespace:kube-system,Attempt:0,}" Mar 17 18:17:08.315545 env[1317]: time="2025-03-17T18:17:08.315336512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:17:08.315545 env[1317]: time="2025-03-17T18:17:08.315415882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:17:08.315545 env[1317]: time="2025-03-17T18:17:08.315433884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:17:08.315728 env[1317]: time="2025-03-17T18:17:08.315644869Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4b199f2e970dddc6a88c382ca2c72f881da079232b5da090e0d9b301621236ba pid=2291 runtime=io.containerd.runc.v2 Mar 17 18:17:08.321210 env[1317]: time="2025-03-17T18:17:08.321108879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:17:08.321210 env[1317]: time="2025-03-17T18:17:08.321146084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:17:08.321210 env[1317]: time="2025-03-17T18:17:08.321158325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:17:08.321383 env[1317]: time="2025-03-17T18:17:08.321252576Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5 pid=2314 runtime=io.containerd.runc.v2 Mar 17 18:17:08.349860 kubelet[2182]: E0317 18:17:08.349829 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.351470 env[1317]: time="2025-03-17T18:17:08.350271550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bz5zf,Uid:4808c6dc-414b-4eaa-b593-4c59c6a70ee1,Namespace:kube-system,Attempt:0,}" Mar 17 18:17:08.374253 env[1317]: time="2025-03-17T18:17:08.373980211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:17:08.374253 env[1317]: time="2025-03-17T18:17:08.374017896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:17:08.374253 env[1317]: time="2025-03-17T18:17:08.374027697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:17:08.374674 env[1317]: time="2025-03-17T18:17:08.374611607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42 pid=2368 runtime=io.containerd.runc.v2 Mar 17 18:17:08.374738 env[1317]: time="2025-03-17T18:17:08.374696777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62p46,Uid:c42a3ca5-45b3-4ca4-9e11-c813c8ef41fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b199f2e970dddc6a88c382ca2c72f881da079232b5da090e0d9b301621236ba\"" Mar 17 18:17:08.375164 env[1317]: time="2025-03-17T18:17:08.375132909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lbpk2,Uid:4870db43-19b5-4216-aeea-3207490aa9e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\"" Mar 17 18:17:08.375396 kubelet[2182]: E0317 18:17:08.375369 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.376387 kubelet[2182]: E0317 18:17:08.375939 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.378109 env[1317]: time="2025-03-17T18:17:08.377518352Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:17:08.378109 env[1317]: time="2025-03-17T18:17:08.377533434Z" level=info msg="CreateContainer within sandbox \"4b199f2e970dddc6a88c382ca2c72f881da079232b5da090e0d9b301621236ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:17:08.391094 env[1317]: time="2025-03-17T18:17:08.391038241Z" level=info 
msg="CreateContainer within sandbox \"4b199f2e970dddc6a88c382ca2c72f881da079232b5da090e0d9b301621236ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62d460ba85bdb74647319242d5c5d895cc284907f0470dccc913a3a5861ec6be\"" Mar 17 18:17:08.391574 env[1317]: time="2025-03-17T18:17:08.391540861Z" level=info msg="StartContainer for \"62d460ba85bdb74647319242d5c5d895cc284907f0470dccc913a3a5861ec6be\"" Mar 17 18:17:08.443239 env[1317]: time="2025-03-17T18:17:08.443195969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-bz5zf,Uid:4808c6dc-414b-4eaa-b593-4c59c6a70ee1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\"" Mar 17 18:17:08.443778 kubelet[2182]: E0317 18:17:08.443755 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:08.446282 env[1317]: time="2025-03-17T18:17:08.446250212Z" level=info msg="StartContainer for \"62d460ba85bdb74647319242d5c5d895cc284907f0470dccc913a3a5861ec6be\" returns successfully" Mar 17 18:17:09.451154 kubelet[2182]: E0317 18:17:09.451115 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:10.454540 kubelet[2182]: E0317 18:17:10.454502 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:11.994460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076089835.mount: Deactivated successfully. 
Mar 17 18:17:14.243547 env[1317]: time="2025-03-17T18:17:14.243500834Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:14.245358 env[1317]: time="2025-03-17T18:17:14.245328519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:14.247632 env[1317]: time="2025-03-17T18:17:14.247606805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:14.248357 env[1317]: time="2025-03-17T18:17:14.248329670Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:17:14.251089 env[1317]: time="2025-03-17T18:17:14.250873780Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:17:14.256486 env[1317]: time="2025-03-17T18:17:14.255062838Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:17:14.268981 env[1317]: time="2025-03-17T18:17:14.268913088Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\"" Mar 17 18:17:14.269387 env[1317]: time="2025-03-17T18:17:14.269359488Z" level=info msg="StartContainer for \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\"" Mar 17 18:17:14.372947 env[1317]: time="2025-03-17T18:17:14.370402289Z" level=info msg="StartContainer for \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\" returns successfully" Mar 17 18:17:14.393661 env[1317]: time="2025-03-17T18:17:14.393617264Z" level=info msg="shim disconnected" id=24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37 Mar 17 18:17:14.393661 env[1317]: time="2025-03-17T18:17:14.393662268Z" level=warning msg="cleaning up after shim disconnected" id=24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37 namespace=k8s.io Mar 17 18:17:14.393888 env[1317]: time="2025-03-17T18:17:14.393671189Z" level=info msg="cleaning up dead shim" Mar 17 18:17:14.401648 env[1317]: time="2025-03-17T18:17:14.401604905Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:17:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2609 runtime=io.containerd.runc.v2\n" Mar 17 18:17:14.464098 kubelet[2182]: E0317 18:17:14.463784 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:14.468996 env[1317]: time="2025-03-17T18:17:14.467219708Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for container 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:17:14.482220 env[1317]: time="2025-03-17T18:17:14.482177138Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\"" Mar 17 18:17:14.509591 env[1317]: time="2025-03-17T18:17:14.502252390Z" level=info msg="StartContainer for \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\"" Mar 17 18:17:14.509701 kubelet[2182]: I0317 18:17:14.504027 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62p46" podStartSLOduration=7.504010789 podStartE2EDuration="7.504010789s" podCreationTimestamp="2025-03-17 18:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:17:09.467352446 +0000 UTC m=+19.168052095" watchObservedRunningTime="2025-03-17 18:17:14.504010789 +0000 UTC m=+24.204710518" Mar 17 18:17:14.574622 env[1317]: time="2025-03-17T18:17:14.574578959Z" level=info msg="StartContainer for \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\" returns successfully" Mar 17 18:17:14.590440 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:17:14.590690 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:17:14.590917 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:17:14.592452 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:17:14.601007 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:17:14.613679 env[1317]: time="2025-03-17T18:17:14.613637564Z" level=info msg="shim disconnected" id=b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f Mar 17 18:17:14.613910 env[1317]: time="2025-03-17T18:17:14.613890027Z" level=warning msg="cleaning up after shim disconnected" id=b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f namespace=k8s.io Mar 17 18:17:14.613975 env[1317]: time="2025-03-17T18:17:14.613962474Z" level=info msg="cleaning up dead shim" Mar 17 18:17:14.621115 env[1317]: time="2025-03-17T18:17:14.621041673Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:17:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2673 runtime=io.containerd.runc.v2\n" Mar 17 18:17:15.262605 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37-rootfs.mount: Deactivated successfully. 
Mar 17 18:17:15.465888 kubelet[2182]: E0317 18:17:15.465821 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:15.467989 env[1317]: time="2025-03-17T18:17:15.467952583Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:17:15.488350 env[1317]: time="2025-03-17T18:17:15.487967793Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\"" Mar 17 18:17:15.488962 env[1317]: time="2025-03-17T18:17:15.488935237Z" level=info msg="StartContainer for \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\"" Mar 17 18:17:15.581342 env[1317]: time="2025-03-17T18:17:15.574934473Z" level=info msg="StartContainer for \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\" returns successfully" Mar 17 18:17:15.664338 env[1317]: time="2025-03-17T18:17:15.664289079Z" level=info msg="shim disconnected" id=d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578 Mar 17 18:17:15.664338 env[1317]: time="2025-03-17T18:17:15.664337683Z" level=warning msg="cleaning up after shim disconnected" id=d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578 namespace=k8s.io Mar 17 18:17:15.664564 env[1317]: time="2025-03-17T18:17:15.664347044Z" level=info msg="cleaning up dead shim" Mar 17 18:17:15.670738 env[1317]: time="2025-03-17T18:17:15.670696633Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:17:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2729 runtime=io.containerd.runc.v2\n" Mar 17 18:17:15.788195 systemd[1]: Started sshd@5-10.0.0.58:22-10.0.0.1:34580.service. Mar 17 18:17:15.829702 sshd[2741]: Accepted publickey for core from 10.0.0.1 port 34580 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:15.831177 sshd[2741]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:15.834636 systemd-logind[1302]: New session 6 of user core. Mar 17 18:17:15.835621 systemd[1]: Started session-6.scope. Mar 17 18:17:15.950064 sshd[2741]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:15.952553 systemd[1]: sshd@5-10.0.0.58:22-10.0.0.1:34580.service: Deactivated successfully. Mar 17 18:17:15.953561 systemd-logind[1302]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:17:15.953590 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:17:15.954298 systemd-logind[1302]: Removed session 6. Mar 17 18:17:16.273375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578-rootfs.mount: Deactivated successfully. 
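[annotation] The mount-bpf-fs step that just completed mounts the BPF filesystem on the host so that pinned eBPF maps and programs survive agent restarts. It amounts to `mount -t bpf bpf /sys/fs/bpf`; the ctypes call below is an illustrative sketch of that syscall, not Cilium's actual implementation (which is Go), with minimal error handling:

```python
import ctypes
import ctypes.util
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

def mount_bpffs(target: str = "/sys/fs/bpf") -> None:
    """Mount a bpffs instance at `target` (requires CAP_SYS_ADMIN)."""
    os.makedirs(target, exist_ok=True)
    # int mount(const char *source, const char *target,
    #           const char *filesystemtype, unsigned long flags,
    #           const void *data);
    if libc.mount(b"bpf", target.encode(), b"bpf", 0, None) != 0:
        errno = ctypes.get_errno()
        raise OSError(errno, os.strerror(errno))
```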
Mar 17 18:17:16.470296 kubelet[2182]: E0317 18:17:16.470269 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:16.476870 env[1317]: time="2025-03-17T18:17:16.476827961Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:17:16.492559 env[1317]: time="2025-03-17T18:17:16.492519902Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\"" Mar 17 18:17:16.493887 env[1317]: time="2025-03-17T18:17:16.493855053Z" level=info msg="StartContainer for \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\"" Mar 17 18:17:16.584165 env[1317]: time="2025-03-17T18:17:16.579090519Z" level=info msg="StartContainer for \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\" returns successfully" Mar 17 18:17:16.611572 env[1317]: time="2025-03-17T18:17:16.611527608Z" level=info msg="shim disconnected" id=4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5 Mar 17 18:17:16.611572 env[1317]: time="2025-03-17T18:17:16.611572132Z" level=warning msg="cleaning up after shim disconnected" id=4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5 namespace=k8s.io Mar 17 18:17:16.611572 env[1317]: time="2025-03-17T18:17:16.611581093Z" level=info msg="cleaning up dead shim" Mar 17 18:17:16.617615 env[1317]: time="2025-03-17T18:17:16.617583590Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:17:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2799 runtime=io.containerd.runc.v2\n" Mar 17 18:17:16.976712 env[1317]: time="2025-03-17T18:17:16.976647718Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:16.977941 env[1317]: time="2025-03-17T18:17:16.977908102Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:16.979328 env[1317]: time="2025-03-17T18:17:16.979300298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:17:16.979859 env[1317]: time="2025-03-17T18:17:16.979834182Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:17:16.983799 env[1317]: time="2025-03-17T18:17:16.983567092Z" level=info msg="CreateContainer within sandbox \"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:17:16.992561 env[1317]: time="2025-03-17T18:17:16.992512193Z" level=info msg="CreateContainer within sandbox 
\"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\"" Mar 17 18:17:16.992925 env[1317]: time="2025-03-17T18:17:16.992881024Z" level=info msg="StartContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\"" Mar 17 18:17:17.050093 env[1317]: time="2025-03-17T18:17:17.047830143Z" level=info msg="StartContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" returns successfully" Mar 17 18:17:17.263648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249968773.mount: Deactivated successfully. Mar 17 18:17:17.263772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5-rootfs.mount: Deactivated successfully. Mar 17 18:17:17.476727 kubelet[2182]: E0317 18:17:17.476684 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:17.479023 kubelet[2182]: E0317 18:17:17.478983 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:17.479326 env[1317]: time="2025-03-17T18:17:17.479282111Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:17:17.496078 env[1317]: time="2025-03-17T18:17:17.496011682Z" level=info msg="CreateContainer within sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\"" Mar 17 18:17:17.498272 env[1317]: time="2025-03-17T18:17:17.498230259Z" level=info msg="StartContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\"" Mar 17 18:17:17.660876 env[1317]: time="2025-03-17T18:17:17.660821715Z" level=info msg="StartContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" returns successfully" Mar 17 18:17:17.809104 kubelet[2182]: I0317 18:17:17.808425 2182 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:17:17.855807 kubelet[2182]: I0317 18:17:17.853098 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-bz5zf" podStartSLOduration=1.3173947400000001 podStartE2EDuration="9.853079092s" podCreationTimestamp="2025-03-17 18:17:08 +0000 UTC" firstStartedPulling="2025-03-17 18:17:08.444916014 +0000 UTC m=+18.145615623" lastFinishedPulling="2025-03-17 18:17:16.980600326 +0000 UTC m=+26.681299975" observedRunningTime="2025-03-17 18:17:17.532317771 +0000 UTC m=+27.233017420" watchObservedRunningTime="2025-03-17 18:17:17.853079092 +0000 UTC m=+27.553778741" Mar 17 18:17:17.855807 kubelet[2182]: I0317 18:17:17.853497 2182 topology_manager.go:215] "Topology Admit Handler" podUID="1a4a3adb-f0fb-4d95-8378-10fd557faee0" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6q859" Mar 17 18:17:17.860302 kubelet[2182]: I0317 18:17:17.860259 2182 topology_manager.go:215] "Topology Admit Handler" podUID="6f8a20c3-97d3-4313-8bf3-61622e49fcfa" podNamespace="kube-system" 
podName="coredns-7db6d8ff4d-ld4vx" Mar 17 18:17:17.861035 kubelet[2182]: W0317 18:17:17.861010 2182 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:17:17.861959 kubelet[2182]: E0317 18:17:17.861929 2182 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:17:17.940101 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:17:17.997636 kubelet[2182]: I0317 18:17:17.997593 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f8a20c3-97d3-4313-8bf3-61622e49fcfa-config-volume\") pod \"coredns-7db6d8ff4d-ld4vx\" (UID: \"6f8a20c3-97d3-4313-8bf3-61622e49fcfa\") " pod="kube-system/coredns-7db6d8ff4d-ld4vx" Mar 17 18:17:17.997843 kubelet[2182]: I0317 18:17:17.997826 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1a4a3adb-f0fb-4d95-8378-10fd557faee0-config-volume\") pod \"coredns-7db6d8ff4d-6q859\" (UID: \"1a4a3adb-f0fb-4d95-8378-10fd557faee0\") " pod="kube-system/coredns-7db6d8ff4d-6q859" Mar 17 18:17:17.997933 kubelet[2182]: I0317 18:17:17.997916 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpqzm\" (UniqueName: \"kubernetes.io/projected/1a4a3adb-f0fb-4d95-8378-10fd557faee0-kube-api-access-bpqzm\") pod \"coredns-7db6d8ff4d-6q859\" (UID: \"1a4a3adb-f0fb-4d95-8378-10fd557faee0\") " pod="kube-system/coredns-7db6d8ff4d-6q859" Mar 17 18:17:17.998014 kubelet[2182]: I0317 18:17:17.998000 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgvvh\" (UniqueName: \"kubernetes.io/projected/6f8a20c3-97d3-4313-8bf3-61622e49fcfa-kube-api-access-vgvvh\") pod \"coredns-7db6d8ff4d-ld4vx\" (UID: \"6f8a20c3-97d3-4313-8bf3-61622e49fcfa\") " pod="kube-system/coredns-7db6d8ff4d-ld4vx" Mar 17 18:17:18.168103 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Mar 17 18:17:18.483298 kubelet[2182]: E0317 18:17:18.483254 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:18.483965 kubelet[2182]: E0317 18:17:18.483925 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:18.497793 kubelet[2182]: I0317 18:17:18.497742 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lbpk2" podStartSLOduration=5.623619827 podStartE2EDuration="11.497729268s" podCreationTimestamp="2025-03-17 18:17:07 +0000 UTC" firstStartedPulling="2025-03-17 18:17:08.376614885 +0000 UTC m=+18.077314534" lastFinishedPulling="2025-03-17 18:17:14.250724326 +0000 UTC m=+23.951423975" observedRunningTime="2025-03-17 18:17:18.49736168 +0000 UTC m=+28.198061329" watchObservedRunningTime="2025-03-17 18:17:18.497729268 +0000 UTC m=+28.198428877" Mar 17 18:17:19.100576 kubelet[2182]: E0317 18:17:19.100528 2182 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:17:19.100737 kubelet[2182]: E0317 18:17:19.100632 2182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6f8a20c3-97d3-4313-8bf3-61622e49fcfa-config-volume podName:6f8a20c3-97d3-4313-8bf3-61622e49fcfa nodeName:}" failed. No retries permitted until 2025-03-17 18:17:19.600609818 +0000 UTC m=+29.301309467 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6f8a20c3-97d3-4313-8bf3-61622e49fcfa-config-volume") pod "coredns-7db6d8ff4d-ld4vx" (UID: "6f8a20c3-97d3-4313-8bf3-61622e49fcfa") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:17:19.100737 kubelet[2182]: E0317 18:17:19.100535 2182 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:17:19.100737 kubelet[2182]: E0317 18:17:19.100701 2182 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a4a3adb-f0fb-4d95-8378-10fd557faee0-config-volume podName:1a4a3adb-f0fb-4d95-8378-10fd557faee0 nodeName:}" failed. No retries permitted until 2025-03-17 18:17:19.600685943 +0000 UTC m=+29.301385592 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/1a4a3adb-f0fb-4d95-8378-10fd557faee0-config-volume") pod "coredns-7db6d8ff4d-6q859" (UID: "1a4a3adb-f0fb-4d95-8378-10fd557faee0") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:17:19.485242 kubelet[2182]: E0317 18:17:19.485208 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:19.656358 kubelet[2182]: E0317 18:17:19.656325 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:19.657167 env[1317]: time="2025-03-17T18:17:19.657112080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6q859,Uid:1a4a3adb-f0fb-4d95-8378-10fd557faee0,Namespace:kube-system,Attempt:0,}" Mar 17 18:17:19.663830 kubelet[2182]: E0317 18:17:19.663802 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:19.664504 env[1317]: time="2025-03-17T18:17:19.664286247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ld4vx,Uid:6f8a20c3-97d3-4313-8bf3-61622e49fcfa,Namespace:kube-system,Attempt:0,}" Mar 17 18:17:19.792566 systemd-networkd[1099]: cilium_host: Link UP Mar 17 18:17:19.795788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:17:19.795853 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:17:19.795975 systemd-networkd[1099]: cilium_net: Link UP Mar 17 18:17:19.796179 systemd-networkd[1099]: cilium_net: Gained carrier Mar 17 18:17:19.796299 systemd-networkd[1099]: cilium_host: Gained carrier Mar 17 18:17:19.796377 systemd-networkd[1099]: cilium_net: Gained IPv6LL Mar 17 18:17:19.796479 systemd-networkd[1099]: cilium_host: Gained IPv6LL Mar 17 18:17:19.873859 systemd-networkd[1099]: cilium_vxlan: Link UP Mar 17 18:17:19.873866 systemd-networkd[1099]: cilium_vxlan: Gained carrier Mar 17 18:17:20.199107 kernel: NET: Registered PF_ALG protocol family Mar 17 18:17:20.487638 kubelet[2182]: E0317 18:17:20.487526 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:20.808777 systemd-networkd[1099]: lxc_health: Link UP Mar 17 18:17:20.817457 systemd-networkd[1099]: lxc_health: Gained carrier Mar 17 18:17:20.818091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:17:20.953458 systemd[1]: Started sshd@6-10.0.0.58:22-10.0.0.1:34594.service. Mar 17 18:17:20.998129 sshd[3362]: Accepted publickey for core from 10.0.0.1 port 34594 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:21.001366 sshd[3362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:21.005528 systemd[1]: Started session-7.scope. Mar 17 18:17:21.005856 systemd-logind[1302]: New session 7 of user core. Mar 17 18:17:21.122691 sshd[3362]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:21.125661 systemd[1]: sshd@6-10.0.0.58:22-10.0.0.1:34594.service: Deactivated successfully. Mar 17 18:17:21.126883 systemd[1]: session-7.scope: Deactivated successfully. 
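[annotation] The cilium-lbpk2 startup metric above is internally consistent: the tracker's podStartSLOduration is the end-to-end start latency minus the observed image-pull time. Recomputing it from the timestamps printed in this log (truncated to microseconds, hence the approximation):

```python
from datetime import datetime

F = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2025-03-17 18:17:07.000000", F)  # podCreationTimestamp
pull_from = datetime.strptime("2025-03-17 18:17:08.376614", F)  # firstStartedPulling
pull_to   = datetime.strptime("2025-03-17 18:17:14.250724", F)  # lastFinishedPulling
observed  = datetime.strptime("2025-03-17 18:17:18.497729", F)  # watchObservedRunningTime

e2e  = (observed - created).total_seconds()    # ~11.497729s = podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()   # ~5.874110s spent pulling the image
print(f"SLO duration ~= {e2e - pull:.6f}s")    # ~5.623620s, matching podStartSLOduration
```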
Mar 17 18:17:21.127329 systemd-logind[1302]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:17:21.128054 systemd-logind[1302]: Removed session 7. Mar 17 18:17:21.242661 systemd-networkd[1099]: lxce1a15c91fb20: Link UP Mar 17 18:17:21.250146 systemd-networkd[1099]: lxcabec43aec062: Link UP Mar 17 18:17:21.260136 kernel: eth0: renamed from tmp5fb28 Mar 17 18:17:21.266091 kernel: eth0: renamed from tmp7122f Mar 17 18:17:21.274019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcabec43aec062: link becomes ready Mar 17 18:17:21.274200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce1a15c91fb20: link becomes ready Mar 17 18:17:21.273769 systemd-networkd[1099]: lxcabec43aec062: Gained carrier Mar 17 18:17:21.273909 systemd-networkd[1099]: lxce1a15c91fb20: Gained carrier Mar 17 18:17:21.566279 systemd-networkd[1099]: cilium_vxlan: Gained IPv6LL Mar 17 18:17:22.313450 kubelet[2182]: E0317 18:17:22.313417 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:22.654233 systemd-networkd[1099]: lxcabec43aec062: Gained IPv6LL Mar 17 18:17:22.654479 systemd-networkd[1099]: lxce1a15c91fb20: Gained IPv6LL Mar 17 18:17:22.719196 systemd-networkd[1099]: lxc_health: Gained IPv6LL Mar 17 18:17:24.762334 env[1317]: time="2025-03-17T18:17:24.762261689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:17:24.762334 env[1317]: time="2025-03-17T18:17:24.762302892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:17:24.762334 env[1317]: time="2025-03-17T18:17:24.762314253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:17:24.762993 env[1317]: time="2025-03-17T18:17:24.762482623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7122f67ecfe716e4be05d92af92bcd57cf3351a4229dbaf858639d26adba8bc6 pid=3424 runtime=io.containerd.runc.v2 Mar 17 18:17:24.768313 env[1317]: time="2025-03-17T18:17:24.763254710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:17:24.768313 env[1317]: time="2025-03-17T18:17:24.768278458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:17:24.768313 env[1317]: time="2025-03-17T18:17:24.768292419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:17:24.768666 env[1317]: time="2025-03-17T18:17:24.768629280Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fb2808bc6d51d0b60bd879dbf45134a4783227cd38d834686dbdfab70ec468d pid=3432 runtime=io.containerd.runc.v2 Mar 17 18:17:24.833776 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:17:24.836011 systemd-resolved[1236]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:17:24.853045 env[1317]: time="2025-03-17T18:17:24.852998377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6q859,Uid:1a4a3adb-f0fb-4d95-8378-10fd557faee0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7122f67ecfe716e4be05d92af92bcd57cf3351a4229dbaf858639d26adba8bc6\"" Mar 17 18:17:24.853816 kubelet[2182]: E0317 18:17:24.853780 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:24.857453 env[1317]: time="2025-03-17T18:17:24.857405487Z" level=info msg="CreateContainer within sandbox \"7122f67ecfe716e4be05d92af92bcd57cf3351a4229dbaf858639d26adba8bc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:17:24.862245 env[1317]: time="2025-03-17T18:17:24.862212782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-ld4vx,Uid:6f8a20c3-97d3-4313-8bf3-61622e49fcfa,Namespace:kube-system,Attempt:0,} returns sandbox id \"5fb2808bc6d51d0b60bd879dbf45134a4783227cd38d834686dbdfab70ec468d\"" Mar 17 18:17:24.862837 kubelet[2182]: E0317 18:17:24.862800 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:24.865923 env[1317]: time="2025-03-17T18:17:24.865888407Z" level=info msg="CreateContainer within sandbox \"5fb2808bc6d51d0b60bd879dbf45134a4783227cd38d834686dbdfab70ec468d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:17:24.876536 env[1317]: time="2025-03-17T18:17:24.876489978Z" level=info msg="CreateContainer within sandbox \"7122f67ecfe716e4be05d92af92bcd57cf3351a4229dbaf858639d26adba8bc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"abbd3d09ecdd930a36c780b3b113d2dbeba265cfcb8f1dabf3cb85e644faf2b7\"" Mar 17 18:17:24.877240 env[1317]: time="2025-03-17T18:17:24.877212182Z" level=info msg="StartContainer for \"abbd3d09ecdd930a36c780b3b113d2dbeba265cfcb8f1dabf3cb85e644faf2b7\"" Mar 17 18:17:24.882835 env[1317]: time="2025-03-17T18:17:24.882788124Z" level=info msg="CreateContainer within sandbox \"5fb2808bc6d51d0b60bd879dbf45134a4783227cd38d834686dbdfab70ec468d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"99caa7cba35a8fb806eb834c30f63a0547736ad158d4efc0537603e0d7d4324e\"" Mar 17 18:17:24.883306 env[1317]: time="2025-03-17T18:17:24.883276674Z" level=info msg="StartContainer for \"99caa7cba35a8fb806eb834c30f63a0547736ad158d4efc0537603e0d7d4324e\"" Mar 17 18:17:24.960235 env[1317]: time="2025-03-17T18:17:24.960147111Z" level=info msg="StartContainer for \"abbd3d09ecdd930a36c780b3b113d2dbeba265cfcb8f1dabf3cb85e644faf2b7\" returns successfully" Mar 17 18:17:24.965177 env[1317]: time="2025-03-17T18:17:24.965046451Z" level=info msg="StartContainer 
for \"99caa7cba35a8fb806eb834c30f63a0547736ad158d4efc0537603e0d7d4324e\" returns successfully" Mar 17 18:17:25.499786 kubelet[2182]: E0317 18:17:25.499742 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:25.502778 kubelet[2182]: E0317 18:17:25.502748 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:25.523374 kubelet[2182]: I0317 18:17:25.523309 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6q859" podStartSLOduration=17.523292103 podStartE2EDuration="17.523292103s" podCreationTimestamp="2025-03-17 18:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:17:25.521867098 +0000 UTC m=+35.222566747" watchObservedRunningTime="2025-03-17 18:17:25.523292103 +0000 UTC m=+35.223991752" Mar 17 18:17:25.523578 kubelet[2182]: I0317 18:17:25.523451 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-ld4vx" podStartSLOduration=17.523447272 podStartE2EDuration="17.523447272s" podCreationTimestamp="2025-03-17 18:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:17:25.512841802 +0000 UTC m=+35.213541491" watchObservedRunningTime="2025-03-17 18:17:25.523447272 +0000 UTC m=+35.224146881" Mar 17 18:17:26.125603 systemd[1]: Started sshd@7-10.0.0.58:22-10.0.0.1:36902.service. Mar 17 18:17:26.168910 sshd[3577]: Accepted publickey for core from 10.0.0.1 port 36902 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:26.170242 sshd[3577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:26.174133 systemd-logind[1302]: New session 8 of user core. Mar 17 18:17:26.174513 systemd[1]: Started session-8.scope. Mar 17 18:17:26.286997 sshd[3577]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:26.290582 systemd[1]: sshd@7-10.0.0.58:22-10.0.0.1:36902.service: Deactivated successfully. Mar 17 18:17:26.291636 systemd-logind[1302]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:17:26.291691 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:17:26.292360 systemd-logind[1302]: Removed session 8. 
Mar 17 18:17:26.504358 kubelet[2182]: E0317 18:17:26.504327 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:26.506439 kubelet[2182]: E0317 18:17:26.506417 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:27.505875 kubelet[2182]: E0317 18:17:27.505846 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:27.507066 kubelet[2182]: E0317 18:17:27.505900 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:31.290038 systemd[1]: Started sshd@8-10.0.0.58:22-10.0.0.1:36914.service. Mar 17 18:17:31.328939 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 36914 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:31.330186 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:31.333513 systemd-logind[1302]: New session 9 of user core. Mar 17 18:17:31.334351 systemd[1]: Started session-9.scope. Mar 17 18:17:31.445261 sshd[3592]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:31.446348 systemd[1]: Started sshd@9-10.0.0.58:22-10.0.0.1:36920.service. Mar 17 18:17:31.448684 systemd[1]: sshd@8-10.0.0.58:22-10.0.0.1:36914.service: Deactivated successfully. Mar 17 18:17:31.450053 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:17:31.450292 systemd-logind[1302]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:17:31.451289 systemd-logind[1302]: Removed session 9. Mar 17 18:17:31.486126 sshd[3605]: Accepted publickey for core from 10.0.0.1 port 36920 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:31.487679 sshd[3605]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:31.492374 systemd-logind[1302]: New session 10 of user core. Mar 17 18:17:31.492756 systemd[1]: Started session-10.scope. Mar 17 18:17:31.648811 sshd[3605]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:31.649149 systemd[1]: Started sshd@10-10.0.0.58:22-10.0.0.1:36926.service. Mar 17 18:17:31.664740 systemd[1]: sshd@9-10.0.0.58:22-10.0.0.1:36920.service: Deactivated successfully. Mar 17 18:17:31.666249 systemd-logind[1302]: Session 10 logged out. Waiting for processes to exit. Mar 17 18:17:31.666319 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 18:17:31.667333 systemd-logind[1302]: Removed session 10. Mar 17 18:17:31.712101 sshd[3618]: Accepted publickey for core from 10.0.0.1 port 36926 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:31.713743 sshd[3618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:31.717360 systemd-logind[1302]: New session 11 of user core. Mar 17 18:17:31.718198 systemd[1]: Started session-11.scope. Mar 17 18:17:31.836044 sshd[3618]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:31.838509 systemd[1]: sshd@10-10.0.0.58:22-10.0.0.1:36926.service: Deactivated successfully. Mar 17 18:17:31.839506 systemd-logind[1302]: Session 11 logged out. Waiting for processes to exit. 
Mar 17 18:17:31.839545 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 18:17:31.840248 systemd-logind[1302]: Removed session 11. Mar 17 18:17:35.215536 kubelet[2182]: I0317 18:17:35.215479 2182 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:17:35.216373 kubelet[2182]: E0317 18:17:35.216285 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:35.522118 kubelet[2182]: E0317 18:17:35.521908 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:17:36.839560 systemd[1]: Started sshd@11-10.0.0.58:22-10.0.0.1:42962.service. Mar 17 18:17:36.878316 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 42962 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:36.879341 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:36.882793 systemd-logind[1302]: New session 12 of user core. Mar 17 18:17:36.883566 systemd[1]: Started session-12.scope. Mar 17 18:17:36.993515 sshd[3634]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:36.996161 systemd[1]: sshd@11-10.0.0.58:22-10.0.0.1:42962.service: Deactivated successfully. Mar 17 18:17:36.997275 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 18:17:36.997580 systemd-logind[1302]: Session 12 logged out. Waiting for processes to exit. Mar 17 18:17:36.998272 systemd-logind[1302]: Removed session 12. Mar 17 18:17:41.996945 systemd[1]: Started sshd@12-10.0.0.58:22-10.0.0.1:42976.service. Mar 17 18:17:42.036360 sshd[3650]: Accepted publickey for core from 10.0.0.1 port 42976 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:42.037833 sshd[3650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:42.041777 systemd-logind[1302]: New session 13 of user core. Mar 17 18:17:42.042231 systemd[1]: Started session-13.scope. Mar 17 18:17:42.159257 sshd[3650]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:42.161801 systemd[1]: Started sshd@13-10.0.0.58:22-10.0.0.1:42980.service. Mar 17 18:17:42.162378 systemd[1]: sshd@12-10.0.0.58:22-10.0.0.1:42976.service: Deactivated successfully. Mar 17 18:17:42.163282 systemd-logind[1302]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:17:42.163403 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:17:42.164510 systemd-logind[1302]: Removed session 13. Mar 17 18:17:42.201504 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 42980 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:42.202824 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:42.207094 systemd-logind[1302]: New session 14 of user core. Mar 17 18:17:42.208210 systemd[1]: Started session-14.scope. Mar 17 18:17:42.403384 sshd[3662]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:42.405908 systemd[1]: Started sshd@14-10.0.0.58:22-10.0.0.1:42982.service. Mar 17 18:17:42.407770 systemd[1]: sshd@13-10.0.0.58:22-10.0.0.1:42980.service: Deactivated successfully. Mar 17 18:17:42.408882 systemd-logind[1302]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:17:42.408955 systemd[1]: session-14.scope: Deactivated successfully. 
Mar 17 18:17:42.409671 systemd-logind[1302]: Removed session 14. Mar 17 18:17:42.448788 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 42982 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:42.450084 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:42.453509 systemd-logind[1302]: New session 15 of user core. Mar 17 18:17:42.454361 systemd[1]: Started session-15.scope. Mar 17 18:17:43.737538 sshd[3674]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:43.740658 systemd[1]: Started sshd@15-10.0.0.58:22-10.0.0.1:44270.service. Mar 17 18:17:43.741840 systemd[1]: sshd@14-10.0.0.58:22-10.0.0.1:42982.service: Deactivated successfully. Mar 17 18:17:43.743693 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:17:43.744194 systemd-logind[1302]: Session 15 logged out. Waiting for processes to exit. Mar 17 18:17:43.746223 systemd-logind[1302]: Removed session 15. Mar 17 18:17:43.790985 sshd[3694]: Accepted publickey for core from 10.0.0.1 port 44270 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:43.792700 sshd[3694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:43.796305 systemd-logind[1302]: New session 16 of user core. Mar 17 18:17:43.797153 systemd[1]: Started session-16.scope. Mar 17 18:17:44.011920 sshd[3694]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:44.015384 systemd[1]: Started sshd@16-10.0.0.58:22-10.0.0.1:44286.service. Mar 17 18:17:44.016900 systemd[1]: sshd@15-10.0.0.58:22-10.0.0.1:44270.service: Deactivated successfully. Mar 17 18:17:44.018586 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:17:44.020119 systemd-logind[1302]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:17:44.022030 systemd-logind[1302]: Removed session 16. Mar 17 18:17:44.055376 sshd[3709]: Accepted publickey for core from 10.0.0.1 port 44286 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:44.056885 sshd[3709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:44.061204 systemd[1]: Started session-17.scope. Mar 17 18:17:44.062125 systemd-logind[1302]: New session 17 of user core. Mar 17 18:17:44.172524 sshd[3709]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:44.174951 systemd[1]: sshd@16-10.0.0.58:22-10.0.0.1:44286.service: Deactivated successfully. Mar 17 18:17:44.175911 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:17:44.176042 systemd-logind[1302]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:17:44.177111 systemd-logind[1302]: Removed session 17. Mar 17 18:17:49.175834 systemd[1]: Started sshd@17-10.0.0.58:22-10.0.0.1:44296.service. Mar 17 18:17:49.215502 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 44296 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:49.216959 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:49.221465 systemd-logind[1302]: New session 18 of user core. Mar 17 18:17:49.223808 systemd[1]: Started session-18.scope. Mar 17 18:17:49.345196 sshd[3728]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:49.349355 systemd[1]: sshd@17-10.0.0.58:22-10.0.0.1:44296.service: Deactivated successfully. Mar 17 18:17:49.350499 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:17:49.350504 systemd-logind[1302]: Session 18 logged out. 
Waiting for processes to exit. Mar 17 18:17:49.351437 systemd-logind[1302]: Removed session 18. Mar 17 18:17:54.348028 systemd[1]: Started sshd@18-10.0.0.58:22-10.0.0.1:56962.service. Mar 17 18:17:54.390616 sshd[3744]: Accepted publickey for core from 10.0.0.1 port 56962 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:54.392229 sshd[3744]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:54.397551 systemd-logind[1302]: New session 19 of user core. Mar 17 18:17:54.398800 systemd[1]: Started session-19.scope. Mar 17 18:17:54.530791 sshd[3744]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:54.533598 systemd[1]: sshd@18-10.0.0.58:22-10.0.0.1:56962.service: Deactivated successfully. Mar 17 18:17:54.534751 systemd-logind[1302]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:17:54.534832 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:17:54.535727 systemd-logind[1302]: Removed session 19. Mar 17 18:17:59.533501 systemd[1]: Started sshd@19-10.0.0.58:22-10.0.0.1:56974.service. Mar 17 18:17:59.582227 sshd[3758]: Accepted publickey for core from 10.0.0.1 port 56974 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:17:59.583457 sshd[3758]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:17:59.589955 systemd-logind[1302]: New session 20 of user core. Mar 17 18:17:59.591588 systemd[1]: Started session-20.scope. Mar 17 18:17:59.701992 sshd[3758]: pam_unix(sshd:session): session closed for user core Mar 17 18:17:59.704334 systemd[1]: sshd@19-10.0.0.58:22-10.0.0.1:56974.service: Deactivated successfully. Mar 17 18:17:59.705257 systemd-logind[1302]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:17:59.705311 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:17:59.705997 systemd-logind[1302]: Removed session 20. Mar 17 18:18:04.404724 kubelet[2182]: E0317 18:18:04.404681 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:04.705711 systemd[1]: Started sshd@20-10.0.0.58:22-10.0.0.1:33742.service. Mar 17 18:18:04.747914 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 33742 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:18:04.749276 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:04.754090 systemd-logind[1302]: New session 21 of user core. Mar 17 18:18:04.754393 systemd[1]: Started session-21.scope. Mar 17 18:18:04.862784 sshd[3773]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:04.865274 systemd[1]: Started sshd@21-10.0.0.58:22-10.0.0.1:33756.service. Mar 17 18:18:04.867869 systemd[1]: sshd@20-10.0.0.58:22-10.0.0.1:33742.service: Deactivated successfully. Mar 17 18:18:04.868769 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:18:04.868793 systemd-logind[1302]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:18:04.869469 systemd-logind[1302]: Removed session 21. Mar 17 18:18:04.905680 sshd[3785]: Accepted publickey for core from 10.0.0.1 port 33756 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:18:04.906862 sshd[3785]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:04.910022 systemd-logind[1302]: New session 22 of user core. 
Mar 17 18:18:04.910824 systemd[1]: Started session-22.scope. Mar 17 18:18:06.578627 env[1317]: time="2025-03-17T18:18:06.578007151Z" level=info msg="StopContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" with timeout 30 (s)" Mar 17 18:18:06.579002 env[1317]: time="2025-03-17T18:18:06.578678559Z" level=info msg="Stop container \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" with signal terminated" Mar 17 18:18:06.612209 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42-rootfs.mount: Deactivated successfully. Mar 17 18:18:06.627544 env[1317]: time="2025-03-17T18:18:06.627491914Z" level=info msg="shim disconnected" id=8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42 Mar 17 18:18:06.627544 env[1317]: time="2025-03-17T18:18:06.627543234Z" level=warning msg="cleaning up after shim disconnected" id=8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42 namespace=k8s.io Mar 17 18:18:06.627844 env[1317]: time="2025-03-17T18:18:06.627553194Z" level=info msg="cleaning up dead shim" Mar 17 18:18:06.637703 env[1317]: time="2025-03-17T18:18:06.637536668Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3833 runtime=io.containerd.runc.v2\n" Mar 17 18:18:06.637703 env[1317]: time="2025-03-17T18:18:06.637632349Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:18:06.640042 env[1317]: time="2025-03-17T18:18:06.640006176Z" level=info msg="StopContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" returns successfully" Mar 17 18:18:06.641035 env[1317]: time="2025-03-17T18:18:06.640998867Z" level=info msg="StopPodSandbox for \"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\"" Mar 17 18:18:06.641251 env[1317]: time="2025-03-17T18:18:06.641227870Z" level=info msg="Container to stop \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.643280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42-shm.mount: Deactivated successfully. Mar 17 18:18:06.646591 env[1317]: time="2025-03-17T18:18:06.646560050Z" level=info msg="StopContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" with timeout 2 (s)" Mar 17 18:18:06.647066 env[1317]: time="2025-03-17T18:18:06.647019616Z" level=info msg="Stop container \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" with signal terminated" Mar 17 18:18:06.654443 systemd-networkd[1099]: lxc_health: Link DOWN Mar 17 18:18:06.654448 systemd-networkd[1099]: lxc_health: Lost carrier Mar 17 18:18:06.666664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42-rootfs.mount: Deactivated successfully. 
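[annotation] The teardown that begins here follows CRI StopContainer semantics: the runtime sends SIGTERM, waits up to the per-call timeout (30s for cilium-operator above, 2s for the cilium-agent that follows), then SIGKILLs whatever remains; the "failed to reload cni configuration" error is likewise expected once 05-cilium.conf is removed from /etc/cni/net.d during shutdown. A minimal host-side sketch of that stop policy; the real flow goes through the containerd shim and runc, not os.kill:

```python
import os
import signal
import time

def stop_with_grace(pid: int, timeout_s: float) -> None:
    """SIGTERM, wait up to timeout_s for exit, then SIGKILL."""
    os.kill(pid, signal.SIGTERM)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)              # signal 0: probe whether pid is alive
        except ProcessLookupError:
            return                       # exited within the grace period
        time.sleep(0.1)
    try:
        os.kill(pid, signal.SIGKILL)     # grace expired; force kill
    except ProcessLookupError:
        pass                             # raced with a clean exit
```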
Mar 17 18:18:06.671884 env[1317]: time="2025-03-17T18:18:06.671839338Z" level=info msg="shim disconnected" id=f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42 Mar 17 18:18:06.672180 env[1317]: time="2025-03-17T18:18:06.672151301Z" level=warning msg="cleaning up after shim disconnected" id=f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42 namespace=k8s.io Mar 17 18:18:06.672443 env[1317]: time="2025-03-17T18:18:06.672424384Z" level=info msg="cleaning up dead shim" Mar 17 18:18:06.682742 env[1317]: time="2025-03-17T18:18:06.682701701Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\n" Mar 17 18:18:06.683244 env[1317]: time="2025-03-17T18:18:06.683211667Z" level=info msg="TearDown network for sandbox \"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\" successfully" Mar 17 18:18:06.683349 env[1317]: time="2025-03-17T18:18:06.683330908Z" level=info msg="StopPodSandbox for \"f2840fd9af8d946ff0dca953134bceb16ae0df21b2582df6c9d951724323ea42\" returns successfully" Mar 17 18:18:06.709341 env[1317]: time="2025-03-17T18:18:06.709292003Z" level=info msg="shim disconnected" id=f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6 Mar 17 18:18:06.709341 env[1317]: time="2025-03-17T18:18:06.709341684Z" level=warning msg="cleaning up after shim disconnected" id=f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6 namespace=k8s.io Mar 17 18:18:06.709569 env[1317]: time="2025-03-17T18:18:06.709351764Z" level=info msg="cleaning up dead shim" Mar 17 18:18:06.716437 env[1317]: time="2025-03-17T18:18:06.716391284Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3904 runtime=io.containerd.runc.v2\n" Mar 17 18:18:06.719604 env[1317]: time="2025-03-17T18:18:06.719561040Z" level=info msg="StopContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" returns successfully" Mar 17 18:18:06.720154 env[1317]: time="2025-03-17T18:18:06.720037605Z" level=info msg="StopPodSandbox for \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\"" Mar 17 18:18:06.720243 env[1317]: time="2025-03-17T18:18:06.720187967Z" level=info msg="Container to stop \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.720243 env[1317]: time="2025-03-17T18:18:06.720209167Z" level=info msg="Container to stop \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.720243 env[1317]: time="2025-03-17T18:18:06.720221127Z" level=info msg="Container to stop \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.720243 env[1317]: time="2025-03-17T18:18:06.720232768Z" level=info msg="Container to stop \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.720354 env[1317]: time="2025-03-17T18:18:06.720244168Z" level=info msg="Container to stop \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:06.762554 env[1317]: 
time="2025-03-17T18:18:06.762506368Z" level=info msg="shim disconnected" id=e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5 Mar 17 18:18:06.762778 env[1317]: time="2025-03-17T18:18:06.762758811Z" level=warning msg="cleaning up after shim disconnected" id=e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5 namespace=k8s.io Mar 17 18:18:06.762844 env[1317]: time="2025-03-17T18:18:06.762830332Z" level=info msg="cleaning up dead shim" Mar 17 18:18:06.769582 env[1317]: time="2025-03-17T18:18:06.769546488Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n" Mar 17 18:18:06.770001 env[1317]: time="2025-03-17T18:18:06.769973253Z" level=info msg="TearDown network for sandbox \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" successfully" Mar 17 18:18:06.770137 env[1317]: time="2025-03-17T18:18:06.770116774Z" level=info msg="StopPodSandbox for \"e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5\" returns successfully" Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850626 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-net\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850662 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cni-path\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850678 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-lib-modules\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850697 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-run\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850719 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4870db43-19b5-4216-aeea-3207490aa9e9-clustermesh-secrets\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.850942 kubelet[2182]: I0317 18:18:06.850739 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-hubble-tls\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853340 kubelet[2182]: I0317 18:18:06.850777 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-xtables-lock\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853340 
kubelet[2182]: I0317 18:18:06.850794 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-hostproc\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853340 kubelet[2182]: I0317 18:18:06.850809 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-bpf-maps\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853340 kubelet[2182]: I0317 18:18:06.850828 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjnn6\" (UniqueName: \"kubernetes.io/projected/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-kube-api-access-hjnn6\") pod \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\" (UID: \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\") " Mar 17 18:18:06.853340 kubelet[2182]: I0317 18:18:06.850849 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-cilium-config-path\") pod \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\" (UID: \"4808c6dc-414b-4eaa-b593-4c59c6a70ee1\") " Mar 17 18:18:06.853340 kubelet[2182]: I0317 18:18:06.850866 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgptg\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-kube-api-access-bgptg\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853490 kubelet[2182]: I0317 18:18:06.850881 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-cgroup\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853490 kubelet[2182]: I0317 18:18:06.850896 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-kernel\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853490 kubelet[2182]: I0317 18:18:06.850914 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-config-path\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.853490 kubelet[2182]: I0317 18:18:06.851707 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-etc-cni-netd\") pod \"4870db43-19b5-4216-aeea-3207490aa9e9\" (UID: \"4870db43-19b5-4216-aeea-3207490aa9e9\") " Mar 17 18:18:06.854201 kubelet[2182]: I0317 18:18:06.853761 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854201 kubelet[2182]: I0317 18:18:06.853788 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854201 kubelet[2182]: I0317 18:18:06.853772 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854201 kubelet[2182]: I0317 18:18:06.853814 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854201 kubelet[2182]: I0317 18:18:06.853879 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854400 kubelet[2182]: I0317 18:18:06.853990 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cni-path" (OuterVolumeSpecName: "cni-path") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.854400 kubelet[2182]: I0317 18:18:06.854019 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.859859 kubelet[2182]: I0317 18:18:06.859816 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4808c6dc-414b-4eaa-b593-4c59c6a70ee1" (UID: "4808c6dc-414b-4eaa-b593-4c59c6a70ee1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:18:06.861090 kubelet[2182]: I0317 18:18:06.861030 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-kube-api-access-hjnn6" (OuterVolumeSpecName: "kube-api-access-hjnn6") pod "4808c6dc-414b-4eaa-b593-4c59c6a70ee1" (UID: "4808c6dc-414b-4eaa-b593-4c59c6a70ee1"). InnerVolumeSpecName "kube-api-access-hjnn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:18:06.861090 kubelet[2182]: I0317 18:18:06.861084 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4870db43-19b5-4216-aeea-3207490aa9e9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:18:06.861186 kubelet[2182]: I0317 18:18:06.861127 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-hostproc" (OuterVolumeSpecName: "hostproc") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.861186 kubelet[2182]: I0317 18:18:06.861145 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.861186 kubelet[2182]: I0317 18:18:06.861162 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:06.861310 kubelet[2182]: I0317 18:18:06.861287 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-kube-api-access-bgptg" (OuterVolumeSpecName: "kube-api-access-bgptg") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "kube-api-access-bgptg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:18:06.861789 kubelet[2182]: I0317 18:18:06.861750 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:18:06.863418 kubelet[2182]: I0317 18:18:06.863206 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4870db43-19b5-4216-aeea-3207490aa9e9" (UID: "4870db43-19b5-4216-aeea-3207490aa9e9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952486 2182 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952521 2182 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952530 2182 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952539 2182 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952548 2182 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hjnn6\" (UniqueName: \"kubernetes.io/projected/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-kube-api-access-hjnn6\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952542 kubelet[2182]: I0317 18:18:06.952560 2182 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-bgptg\" (UniqueName: \"kubernetes.io/projected/4870db43-19b5-4216-aeea-3207490aa9e9-kube-api-access-bgptg\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952568 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952576 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4808c6dc-414b-4eaa-b593-4c59c6a70ee1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952585 2182 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952593 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952601 2182 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952609 2182 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952617 2182 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/4870db43-19b5-4216-aeea-3207490aa9e9-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952823 kubelet[2182]: I0317 18:18:06.952625 2182 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952999 kubelet[2182]: I0317 18:18:06.952633 2182 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:06.952999 kubelet[2182]: I0317 18:18:06.952643 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4870db43-19b5-4216-aeea-3207490aa9e9-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:07.588486 kubelet[2182]: I0317 18:18:07.588439 2182 scope.go:117] "RemoveContainer" containerID="8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42" Mar 17 18:18:07.590459 env[1317]: time="2025-03-17T18:18:07.590405063Z" level=info msg="RemoveContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\"" Mar 17 18:18:07.594966 env[1317]: time="2025-03-17T18:18:07.594898236Z" level=info msg="RemoveContainer for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" returns successfully" Mar 17 18:18:07.596281 kubelet[2182]: I0317 18:18:07.596254 2182 scope.go:117] "RemoveContainer" containerID="8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42" Mar 17 18:18:07.597301 env[1317]: time="2025-03-17T18:18:07.597227864Z" level=error msg="ContainerStatus for \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\": not found" Mar 17 18:18:07.597659 kubelet[2182]: E0317 18:18:07.597631 2182 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\": not found" containerID="8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42" Mar 17 18:18:07.597743 kubelet[2182]: I0317 18:18:07.597668 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42"} err="failed to get container status \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\": rpc error: code = NotFound desc = an error occurred when try to find container \"8631be3b616c85085364a8045d0f1cafe51f9bf973a052e53206272cf951ea42\": not found" Mar 17 18:18:07.597782 kubelet[2182]: I0317 18:18:07.597742 2182 scope.go:117] "RemoveContainer" containerID="f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6" Mar 17 18:18:07.598797 env[1317]: time="2025-03-17T18:18:07.598767482Z" level=info msg="RemoveContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\"" Mar 17 18:18:07.600727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6-rootfs.mount: Deactivated successfully. 
Mar 17 18:18:07.600942 systemd[1]: var-lib-kubelet-pods-4808c6dc\x2d414b\x2d4eaa\x2db593\x2d4c59c6a70ee1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhjnn6.mount: Deactivated successfully. Mar 17 18:18:07.601048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5-rootfs.mount: Deactivated successfully. Mar 17 18:18:07.601154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e29b4875d6c8b5094515c5c16e1aca0322a336aa084ec13187f19d3aaa868de5-shm.mount: Deactivated successfully. Mar 17 18:18:07.601243 systemd[1]: var-lib-kubelet-pods-4870db43\x2d19b5\x2d4216\x2daeea\x2d3207490aa9e9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbgptg.mount: Deactivated successfully. Mar 17 18:18:07.601308 env[1317]: time="2025-03-17T18:18:07.601266191Z" level=info msg="RemoveContainer for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" returns successfully" Mar 17 18:18:07.601327 systemd[1]: var-lib-kubelet-pods-4870db43\x2d19b5\x2d4216\x2daeea\x2d3207490aa9e9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:18:07.601402 systemd[1]: var-lib-kubelet-pods-4870db43\x2d19b5\x2d4216\x2daeea\x2d3207490aa9e9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:18:07.601525 kubelet[2182]: I0317 18:18:07.601495 2182 scope.go:117] "RemoveContainer" containerID="4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5" Mar 17 18:18:07.603543 env[1317]: time="2025-03-17T18:18:07.603513458Z" level=info msg="RemoveContainer for \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\"" Mar 17 18:18:07.606396 env[1317]: time="2025-03-17T18:18:07.606362932Z" level=info msg="RemoveContainer for \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\" returns successfully" Mar 17 18:18:07.606660 kubelet[2182]: I0317 18:18:07.606641 2182 scope.go:117] "RemoveContainer" containerID="d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578" Mar 17 18:18:07.610541 env[1317]: time="2025-03-17T18:18:07.610498381Z" level=info msg="RemoveContainer for \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\"" Mar 17 18:18:07.618242 env[1317]: time="2025-03-17T18:18:07.618207112Z" level=info msg="RemoveContainer for \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\" returns successfully" Mar 17 18:18:07.618600 kubelet[2182]: I0317 18:18:07.618564 2182 scope.go:117] "RemoveContainer" containerID="b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f" Mar 17 18:18:07.619771 env[1317]: time="2025-03-17T18:18:07.619727730Z" level=info msg="RemoveContainer for \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\"" Mar 17 18:18:07.622079 env[1317]: time="2025-03-17T18:18:07.622024477Z" level=info msg="RemoveContainer for \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\" returns successfully" Mar 17 18:18:07.622243 kubelet[2182]: I0317 18:18:07.622207 2182 scope.go:117] "RemoveContainer" containerID="24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37" Mar 17 18:18:07.623082 env[1317]: time="2025-03-17T18:18:07.623032929Z" level=info msg="RemoveContainer for \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\"" Mar 17 18:18:07.625532 env[1317]: time="2025-03-17T18:18:07.625494919Z" level=info msg="RemoveContainer for 
\"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\" returns successfully" Mar 17 18:18:07.626759 kubelet[2182]: I0317 18:18:07.626458 2182 scope.go:117] "RemoveContainer" containerID="f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6" Mar 17 18:18:07.626956 env[1317]: time="2025-03-17T18:18:07.626892935Z" level=error msg="ContainerStatus for \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\": not found" Mar 17 18:18:07.627171 kubelet[2182]: E0317 18:18:07.627097 2182 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\": not found" containerID="f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6" Mar 17 18:18:07.627171 kubelet[2182]: I0317 18:18:07.627145 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6"} err="failed to get container status \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\": rpc error: code = NotFound desc = an error occurred when try to find container \"f557d5aa073afb3ad63972ba89a969cb36c185aefe309628609b30520d755ac6\": not found" Mar 17 18:18:07.627171 kubelet[2182]: I0317 18:18:07.627165 2182 scope.go:117] "RemoveContainer" containerID="4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5" Mar 17 18:18:07.627787 env[1317]: time="2025-03-17T18:18:07.627725585Z" level=error msg="ContainerStatus for \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\": not found" Mar 17 18:18:07.627879 kubelet[2182]: E0317 18:18:07.627846 2182 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\": not found" containerID="4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5" Mar 17 18:18:07.627952 kubelet[2182]: I0317 18:18:07.627886 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5"} err="failed to get container status \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\": rpc error: code = NotFound desc = an error occurred when try to find container \"4cf38b17e941a61a29333f145ece80d8c9b8abe317bdf61bb153d29712057ae5\": not found" Mar 17 18:18:07.627952 kubelet[2182]: I0317 18:18:07.627906 2182 scope.go:117] "RemoveContainer" containerID="d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578" Mar 17 18:18:07.628150 env[1317]: time="2025-03-17T18:18:07.628098709Z" level=error msg="ContainerStatus for \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\": not found" Mar 17 18:18:07.628274 kubelet[2182]: E0317 18:18:07.628254 2182 remote_runtime.go:432] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\": not found" containerID="d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578" Mar 17 18:18:07.628310 kubelet[2182]: I0317 18:18:07.628281 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578"} err="failed to get container status \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\": rpc error: code = NotFound desc = an error occurred when try to find container \"d068587cf8e070cfcabad6f7a8172900791323fb429daf07e9aae75074574578\": not found" Mar 17 18:18:07.628310 kubelet[2182]: I0317 18:18:07.628298 2182 scope.go:117] "RemoveContainer" containerID="b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f" Mar 17 18:18:07.629750 env[1317]: time="2025-03-17T18:18:07.629691648Z" level=error msg="ContainerStatus for \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\": not found" Mar 17 18:18:07.629904 kubelet[2182]: E0317 18:18:07.629880 2182 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\": not found" containerID="b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f" Mar 17 18:18:07.629938 kubelet[2182]: I0317 18:18:07.629908 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f"} err="failed to get container status \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b7d879989ef0b8c08c7735b82792ba0b775fe98e04862dad800ce9d353d1299f\": not found" Mar 17 18:18:07.629938 kubelet[2182]: I0317 18:18:07.629924 2182 scope.go:117] "RemoveContainer" containerID="24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37" Mar 17 18:18:07.630189 env[1317]: time="2025-03-17T18:18:07.630137894Z" level=error msg="ContainerStatus for \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\": not found" Mar 17 18:18:07.630296 kubelet[2182]: E0317 18:18:07.630278 2182 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\": not found" containerID="24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37" Mar 17 18:18:07.630327 kubelet[2182]: I0317 18:18:07.630302 2182 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37"} err="failed to get container status \"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"24037dc65f0e267d8463740008c17a743b06b37ccca321d11c936d39cee18f37\": not found" Mar 17 18:18:08.406826 kubelet[2182]: I0317 18:18:08.406777 2182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4808c6dc-414b-4eaa-b593-4c59c6a70ee1" path="/var/lib/kubelet/pods/4808c6dc-414b-4eaa-b593-4c59c6a70ee1/volumes" Mar 17 18:18:08.407295 kubelet[2182]: I0317 18:18:08.407213 2182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" path="/var/lib/kubelet/pods/4870db43-19b5-4216-aeea-3207490aa9e9/volumes" Mar 17 18:18:08.531047 sshd[3785]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:08.533402 systemd[1]: Started sshd@22-10.0.0.58:22-10.0.0.1:33762.service. Mar 17 18:18:08.534016 systemd[1]: sshd@21-10.0.0.58:22-10.0.0.1:33756.service: Deactivated successfully. Mar 17 18:18:08.535148 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:18:08.535161 systemd-logind[1302]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:18:08.536651 systemd-logind[1302]: Removed session 22. Mar 17 18:18:08.572989 sshd[3954]: Accepted publickey for core from 10.0.0.1 port 33762 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:18:08.574539 sshd[3954]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:08.578239 systemd-logind[1302]: New session 23 of user core. Mar 17 18:18:08.580253 systemd[1]: Started session-23.scope. Mar 17 18:18:09.535085 sshd[3954]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:09.536575 systemd[1]: Started sshd@23-10.0.0.58:22-10.0.0.1:33770.service. Mar 17 18:18:09.551421 kubelet[2182]: I0317 18:18:09.546970 2182 topology_manager.go:215] "Topology Admit Handler" podUID="e3b992c5-2a8f-474c-bb38-83a1c4300680" podNamespace="kube-system" podName="cilium-z4dfg" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547062 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="mount-cgroup" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547083 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="apply-sysctl-overwrites" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547090 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="mount-bpf-fs" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547095 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4808c6dc-414b-4eaa-b593-4c59c6a70ee1" containerName="cilium-operator" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547102 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="clean-cilium-state" Mar 17 18:18:09.551421 kubelet[2182]: E0317 18:18:09.547109 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="cilium-agent" Mar 17 18:18:09.551421 kubelet[2182]: I0317 18:18:09.547131 2182 memory_manager.go:354] "RemoveStaleState removing state" podUID="4870db43-19b5-4216-aeea-3207490aa9e9" containerName="cilium-agent" Mar 17 18:18:09.551421 kubelet[2182]: I0317 18:18:09.547138 2182 memory_manager.go:354] "RemoveStaleState removing state" podUID="4808c6dc-414b-4eaa-b593-4c59c6a70ee1" containerName="cilium-operator" Mar 17 18:18:09.552811 systemd-logind[1302]: Session 23 logged 
out. Waiting for processes to exit. Mar 17 18:18:09.553267 systemd[1]: sshd@22-10.0.0.58:22-10.0.0.1:33762.service: Deactivated successfully. Mar 17 18:18:09.554150 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:18:09.554684 systemd-logind[1302]: Removed session 23. Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571231 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-kernel\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571270 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-hubble-tls\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571293 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-run\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571308 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-lib-modules\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571324 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-net\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.572192 kubelet[2182]: I0317 18:18:09.571339 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cni-path\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571354 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-xtables-lock\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571367 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-hostproc\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571384 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-etc-cni-netd\") pod \"cilium-z4dfg\" (UID: 
\"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571400 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-clustermesh-secrets\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571418 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-ipsec-secrets\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574641 kubelet[2182]: I0317 18:18:09.571434 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-bpf-maps\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574793 kubelet[2182]: I0317 18:18:09.571449 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-cgroup\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574793 kubelet[2182]: I0317 18:18:09.571465 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-config-path\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.574793 kubelet[2182]: I0317 18:18:09.571480 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgg5\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-kube-api-access-wvgg5\") pod \"cilium-z4dfg\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " pod="kube-system/cilium-z4dfg" Mar 17 18:18:09.584517 sshd[3969]: Accepted publickey for core from 10.0.0.1 port 33770 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:18:09.586205 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:09.591685 systemd-logind[1302]: New session 24 of user core. Mar 17 18:18:09.592505 systemd[1]: Started session-24.scope. Mar 17 18:18:09.726320 sshd[3969]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:09.727541 systemd[1]: Started sshd@24-10.0.0.58:22-10.0.0.1:33774.service. Mar 17 18:18:09.730843 systemd[1]: sshd@23-10.0.0.58:22-10.0.0.1:33770.service: Deactivated successfully. Mar 17 18:18:09.731764 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:18:09.731866 systemd-logind[1302]: Session 24 logged out. Waiting for processes to exit. 
Mar 17 18:18:09.738682 kubelet[2182]: E0317 18:18:09.737826 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:09.740614 env[1317]: time="2025-03-17T18:18:09.740064369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4dfg,Uid:e3b992c5-2a8f-474c-bb38-83a1c4300680,Namespace:kube-system,Attempt:0,}" Mar 17 18:18:09.741391 systemd-logind[1302]: Removed session 24. Mar 17 18:18:09.756350 env[1317]: time="2025-03-17T18:18:09.756286016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:18:09.756350 env[1317]: time="2025-03-17T18:18:09.756331577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:18:09.756350 env[1317]: time="2025-03-17T18:18:09.756344417Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:18:09.756608 env[1317]: time="2025-03-17T18:18:09.756581900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218 pid=3997 runtime=io.containerd.runc.v2 Mar 17 18:18:09.770622 sshd[3987]: Accepted publickey for core from 10.0.0.1 port 33774 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:18:09.770865 sshd[3987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:18:09.784933 systemd[1]: Started session-25.scope. Mar 17 18:18:09.789221 systemd-logind[1302]: New session 25 of user core. 
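The kubelet then brings the pod up through the same CRI surface: the RunPodSandbox request logged above returns sandbox ee7983f0..., after which the mount-cgroup init container is created and started inside it; the "shim disconnected" lines that follow are normal for an init container that runs to completion. A compact sketch of that create path with abbreviated configs (a real call needs a fully populated PodSandboxConfig and ContainerConfig; the image reference below is a placeholder):

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Matches the sandbox metadata in the RunPodSandbox entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "cilium-z4dfg",
			Namespace: "kube-system",
			Uid:       "e3b992c5-2a8f-474c-bb38-83a1c4300680",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// Create and start the mount-cgroup init container in the sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "mount-cgroup"},
			Image:    &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:v1.x"}, // placeholder image
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```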
Mar 17 18:18:09.806084 env[1317]: time="2025-03-17T18:18:09.806013892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-z4dfg,Uid:e3b992c5-2a8f-474c-bb38-83a1c4300680,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\"" Mar 17 18:18:09.806690 kubelet[2182]: E0317 18:18:09.806652 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:09.809884 env[1317]: time="2025-03-17T18:18:09.809832581Z" level=info msg="CreateContainer within sandbox \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:18:09.819148 env[1317]: time="2025-03-17T18:18:09.819097299Z" level=info msg="CreateContainer within sandbox \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\"" Mar 17 18:18:09.820518 env[1317]: time="2025-03-17T18:18:09.819664626Z" level=info msg="StartContainer for \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\"" Mar 17 18:18:09.880986 env[1317]: time="2025-03-17T18:18:09.880636725Z" level=info msg="StartContainer for \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\" returns successfully" Mar 17 18:18:09.930575 env[1317]: time="2025-03-17T18:18:09.930504602Z" level=info msg="shim disconnected" id=7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09 Mar 17 18:18:09.930575 env[1317]: time="2025-03-17T18:18:09.930555923Z" level=warning msg="cleaning up after shim disconnected" id=7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09 namespace=k8s.io Mar 17 18:18:09.930575 env[1317]: time="2025-03-17T18:18:09.930565243Z" level=info msg="cleaning up dead shim" Mar 17 18:18:09.937283 env[1317]: time="2025-03-17T18:18:09.937194808Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4088 runtime=io.containerd.runc.v2\n" Mar 17 18:18:10.463905 kubelet[2182]: E0317 18:18:10.463859 2182 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:18:10.602215 env[1317]: time="2025-03-17T18:18:10.602179128Z" level=info msg="StopPodSandbox for \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\"" Mar 17 18:18:10.602459 env[1317]: time="2025-03-17T18:18:10.602435851Z" level=info msg="Container to stop \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 17 18:18:10.628998 env[1317]: time="2025-03-17T18:18:10.628956722Z" level=info msg="shim disconnected" id=ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218 Mar 17 18:18:10.629603 env[1317]: time="2025-03-17T18:18:10.629574650Z" level=warning msg="cleaning up after shim disconnected" id=ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218 namespace=k8s.io Mar 17 18:18:10.629693 env[1317]: time="2025-03-17T18:18:10.629679691Z" level=info msg="cleaning up dead shim" Mar 17 18:18:10.636181 env[1317]: time="2025-03-17T18:18:10.636136497Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:10Z\" 
level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" Mar 17 18:18:10.636442 env[1317]: time="2025-03-17T18:18:10.636419941Z" level=info msg="TearDown network for sandbox \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\" successfully" Mar 17 18:18:10.636485 env[1317]: time="2025-03-17T18:18:10.636441941Z" level=info msg="StopPodSandbox for \"ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218\" returns successfully" Mar 17 18:18:10.676499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218-rootfs.mount: Deactivated successfully. Mar 17 18:18:10.676650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee7983f0deebbc0822218114a32eb2e71accd9506efb334f40b6d1d4087d5218-shm.mount: Deactivated successfully. Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678387 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-net\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678436 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-ipsec-secrets\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678456 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-lib-modules\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678470 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cni-path\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678488 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-hubble-tls\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680249 kubelet[2182]: I0317 18:18:10.678518 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-etc-cni-netd\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678512 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678532 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-cgroup\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678551 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgg5\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-kube-api-access-wvgg5\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678552 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678571 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-run\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680655 kubelet[2182]: I0317 18:18:10.678592 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-bpf-maps\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678609 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-config-path\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678627 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-hostproc\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678643 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-clustermesh-secrets\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678665 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-xtables-lock\") pod \"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678679 2182 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-kernel\") pod 
\"e3b992c5-2a8f-474c-bb38-83a1c4300680\" (UID: \"e3b992c5-2a8f-474c-bb38-83a1c4300680\") " Mar 17 18:18:10.680793 kubelet[2182]: I0317 18:18:10.678708 2182 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.680922 kubelet[2182]: I0317 18:18:10.678718 2182 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.680922 kubelet[2182]: I0317 18:18:10.678751 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.680922 kubelet[2182]: I0317 18:18:10.678844 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.680922 kubelet[2182]: I0317 18:18:10.678877 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.680922 kubelet[2182]: I0317 18:18:10.678897 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.681066 kubelet[2182]: I0317 18:18:10.678959 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.681066 kubelet[2182]: I0317 18:18:10.679382 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.681066 kubelet[2182]: I0317 18:18:10.679415 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.681066 kubelet[2182]: I0317 18:18:10.679431 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 17 18:18:10.681066 kubelet[2182]: I0317 18:18:10.680950 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 17 18:18:10.681589 kubelet[2182]: I0317 18:18:10.681556 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-kube-api-access-wvgg5" (OuterVolumeSpecName: "kube-api-access-wvgg5") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "kube-api-access-wvgg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:18:10.682678 systemd[1]: var-lib-kubelet-pods-e3b992c5\x2d2a8f\x2d474c\x2dbb38\x2d83a1c4300680-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwvgg5.mount: Deactivated successfully. Mar 17 18:18:10.683245 kubelet[2182]: I0317 18:18:10.683215 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 17 18:18:10.683743 kubelet[2182]: I0317 18:18:10.683724 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:18:10.684771 systemd[1]: var-lib-kubelet-pods-e3b992c5\x2d2a8f\x2d474c\x2dbb38\x2d83a1c4300680-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:18:10.684887 systemd[1]: var-lib-kubelet-pods-e3b992c5\x2d2a8f\x2d474c\x2dbb38\x2d83a1c4300680-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 17 18:18:10.685902 kubelet[2182]: I0317 18:18:10.685878 2182 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "e3b992c5-2a8f-474c-bb38-83a1c4300680" (UID: "e3b992c5-2a8f-474c-bb38-83a1c4300680"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 17 18:18:10.687147 systemd[1]: var-lib-kubelet-pods-e3b992c5\x2d2a8f\x2d474c\x2dbb38\x2d83a1c4300680-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779255 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779290 2182 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779302 2182 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-wvgg5\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-kube-api-access-wvgg5\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779310 2182 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3b992c5-2a8f-474c-bb38-83a1c4300680-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779319 2182 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779327 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779338 2182 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779359 kubelet[2182]: I0317 18:18:10.779346 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779618 kubelet[2182]: I0317 18:18:10.779355 2182 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779618 kubelet[2182]: I0317 18:18:10.779364 2182 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779618 kubelet[2182]: I0317 18:18:10.779427 2182 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779618 kubelet[2182]: I0317 18:18:10.779438 2182 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3b992c5-2a8f-474c-bb38-83a1c4300680-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:10.779618 kubelet[2182]: I0317 18:18:10.779446 2182 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3b992c5-2a8f-474c-bb38-83a1c4300680-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:18:11.604920 kubelet[2182]: I0317 18:18:11.604894 2182 scope.go:117] "RemoveContainer" containerID="7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09" Mar 17 18:18:11.606318 env[1317]: time="2025-03-17T18:18:11.606264136Z" level=info msg="RemoveContainer for \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\"" Mar 17 18:18:11.610233 env[1317]: time="2025-03-17T18:18:11.610195830Z" level=info msg="RemoveContainer for \"7d51295d894a458a4e3a4762b041bc28ba4ea331fd68ff9921c9ec3840885a09\" returns successfully" Mar 17 18:18:11.641055 kubelet[2182]: I0317 18:18:11.641003 2182 topology_manager.go:215] "Topology Admit Handler" podUID="c9c876e8-5b92-447e-af8e-a62917a80ea8" podNamespace="kube-system" podName="cilium-lk4hw" Mar 17 18:18:11.641373 kubelet[2182]: E0317 18:18:11.641356 2182 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3b992c5-2a8f-474c-bb38-83a1c4300680" containerName="mount-cgroup" Mar 17 18:18:11.641468 kubelet[2182]: I0317 18:18:11.641456 2182 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b992c5-2a8f-474c-bb38-83a1c4300680" containerName="mount-cgroup" Mar 17 18:18:11.712190 kubelet[2182]: I0317 18:18:11.712150 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-etc-cni-netd\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712190 kubelet[2182]: I0317 18:18:11.712188 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-host-proc-sys-kernel\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712602 kubelet[2182]: I0317 18:18:11.712208 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwdfl\" (UniqueName: \"kubernetes.io/projected/c9c876e8-5b92-447e-af8e-a62917a80ea8-kube-api-access-gwdfl\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712602 kubelet[2182]: I0317 18:18:11.712234 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-host-proc-sys-net\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712602 kubelet[2182]: I0317 18:18:11.712252 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" 
(UniqueName: \"kubernetes.io/secret/c9c876e8-5b92-447e-af8e-a62917a80ea8-clustermesh-secrets\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712602 kubelet[2182]: I0317 18:18:11.712269 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c9c876e8-5b92-447e-af8e-a62917a80ea8-cilium-config-path\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712602 kubelet[2182]: I0317 18:18:11.712298 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-cilium-run\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712313 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-xtables-lock\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712329 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-lib-modules\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712344 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c9c876e8-5b92-447e-af8e-a62917a80ea8-cilium-ipsec-secrets\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712374 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-cilium-cgroup\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712394 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c9c876e8-5b92-447e-af8e-a62917a80ea8-hubble-tls\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712723 kubelet[2182]: I0317 18:18:11.712408 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-cni-path\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.712855 kubelet[2182]: I0317 18:18:11.712430 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-bpf-maps\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 
18:18:11.712855 kubelet[2182]: I0317 18:18:11.712462 2182 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c9c876e8-5b92-447e-af8e-a62917a80ea8-hostproc\") pod \"cilium-lk4hw\" (UID: \"c9c876e8-5b92-447e-af8e-a62917a80ea8\") " pod="kube-system/cilium-lk4hw" Mar 17 18:18:11.944397 kubelet[2182]: E0317 18:18:11.944362 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:11.945097 env[1317]: time="2025-03-17T18:18:11.944841356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk4hw,Uid:c9c876e8-5b92-447e-af8e-a62917a80ea8,Namespace:kube-system,Attempt:0,}" Mar 17 18:18:11.957094 env[1317]: time="2025-03-17T18:18:11.956893880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:18:11.957094 env[1317]: time="2025-03-17T18:18:11.956931081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:18:11.957094 env[1317]: time="2025-03-17T18:18:11.956941881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:18:11.957249 env[1317]: time="2025-03-17T18:18:11.957113323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a pid=4150 runtime=io.containerd.runc.v2 Mar 17 18:18:11.993562 env[1317]: time="2025-03-17T18:18:11.993522740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lk4hw,Uid:c9c876e8-5b92-447e-af8e-a62917a80ea8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\"" Mar 17 18:18:11.994743 kubelet[2182]: E0317 18:18:11.994324 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:11.996851 env[1317]: time="2025-03-17T18:18:11.996818145Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:18:12.007514 env[1317]: time="2025-03-17T18:18:12.007470653Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"244cf15cf0f72bee1d253a365c06cdf410b9baf3221b0a682e5233ac7fa50fc9\"" Mar 17 18:18:12.008877 env[1317]: time="2025-03-17T18:18:12.008831992Z" level=info msg="StartContainer for \"244cf15cf0f72bee1d253a365c06cdf410b9baf3221b0a682e5233ac7fa50fc9\"" Mar 17 18:18:12.057627 env[1317]: time="2025-03-17T18:18:12.057581077Z" level=info msg="StartContainer for \"244cf15cf0f72bee1d253a365c06cdf410b9baf3221b0a682e5233ac7fa50fc9\" returns successfully" Mar 17 18:18:12.080313 env[1317]: time="2025-03-17T18:18:12.080265596Z" level=info msg="shim disconnected" id=244cf15cf0f72bee1d253a365c06cdf410b9baf3221b0a682e5233ac7fa50fc9 Mar 17 18:18:12.080313 env[1317]: time="2025-03-17T18:18:12.080315917Z" level=warning msg="cleaning up after shim disconnected" 
id=244cf15cf0f72bee1d253a365c06cdf410b9baf3221b0a682e5233ac7fa50fc9 namespace=k8s.io Mar 17 18:18:12.080537 env[1317]: time="2025-03-17T18:18:12.080324517Z" level=info msg="cleaning up dead shim" Mar 17 18:18:12.087791 env[1317]: time="2025-03-17T18:18:12.087753141Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4233 runtime=io.containerd.runc.v2\n" Mar 17 18:18:12.146344 kubelet[2182]: I0317 18:18:12.146303 2182 setters.go:580] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:18:12Z","lastTransitionTime":"2025-03-17T18:18:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Mar 17 18:18:12.405863 kubelet[2182]: E0317 18:18:12.405483 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:12.407454 kubelet[2182]: I0317 18:18:12.407167 2182 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b992c5-2a8f-474c-bb38-83a1c4300680" path="/var/lib/kubelet/pods/e3b992c5-2a8f-474c-bb38-83a1c4300680/volumes" Mar 17 18:18:12.608547 kubelet[2182]: E0317 18:18:12.608516 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:12.620104 env[1317]: time="2025-03-17T18:18:12.614524666Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:18:12.627814 env[1317]: time="2025-03-17T18:18:12.627756292Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e1cf9b2f558ed2e1fa5e78c48a9f1795cd2e00dcca8cd95760b0ec9977544aa4\"" Mar 17 18:18:12.629541 env[1317]: time="2025-03-17T18:18:12.628487623Z" level=info msg="StartContainer for \"e1cf9b2f558ed2e1fa5e78c48a9f1795cd2e00dcca8cd95760b0ec9977544aa4\"" Mar 17 18:18:12.675907 env[1317]: time="2025-03-17T18:18:12.675735367Z" level=info msg="StartContainer for \"e1cf9b2f558ed2e1fa5e78c48a9f1795cd2e00dcca8cd95760b0ec9977544aa4\" returns successfully" Mar 17 18:18:12.701605 env[1317]: time="2025-03-17T18:18:12.701557210Z" level=info msg="shim disconnected" id=e1cf9b2f558ed2e1fa5e78c48a9f1795cd2e00dcca8cd95760b0ec9977544aa4 Mar 17 18:18:12.702117 env[1317]: time="2025-03-17T18:18:12.702067097Z" level=warning msg="cleaning up after shim disconnected" id=e1cf9b2f558ed2e1fa5e78c48a9f1795cd2e00dcca8cd95760b0ec9977544aa4 namespace=k8s.io Mar 17 18:18:12.702208 env[1317]: time="2025-03-17T18:18:12.702193539Z" level=info msg="cleaning up dead shim" Mar 17 18:18:12.710971 env[1317]: time="2025-03-17T18:18:12.710931661Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4294 runtime=io.containerd.runc.v2\n" Mar 17 18:18:13.611798 kubelet[2182]: E0317 18:18:13.611729 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 
17 18:18:13.615032 env[1317]: time="2025-03-17T18:18:13.614975016Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:18:13.631738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984568918.mount: Deactivated successfully. Mar 17 18:18:13.633904 env[1317]: time="2025-03-17T18:18:13.633841049Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8c7662e2ff17fdad890debc5c76b3ef1f89390ba8c627018a5d8013321e1cea\"" Mar 17 18:18:13.634477 env[1317]: time="2025-03-17T18:18:13.634452018Z" level=info msg="StartContainer for \"d8c7662e2ff17fdad890debc5c76b3ef1f89390ba8c627018a5d8013321e1cea\"" Mar 17 18:18:13.682933 env[1317]: time="2025-03-17T18:18:13.682891758Z" level=info msg="StartContainer for \"d8c7662e2ff17fdad890debc5c76b3ef1f89390ba8c627018a5d8013321e1cea\" returns successfully" Mar 17 18:18:13.705946 env[1317]: time="2025-03-17T18:18:13.705896251Z" level=info msg="shim disconnected" id=d8c7662e2ff17fdad890debc5c76b3ef1f89390ba8c627018a5d8013321e1cea Mar 17 18:18:13.706215 env[1317]: time="2025-03-17T18:18:13.706194775Z" level=warning msg="cleaning up after shim disconnected" id=d8c7662e2ff17fdad890debc5c76b3ef1f89390ba8c627018a5d8013321e1cea namespace=k8s.io Mar 17 18:18:13.706286 env[1317]: time="2025-03-17T18:18:13.706271896Z" level=info msg="cleaning up dead shim" Mar 17 18:18:13.712984 env[1317]: time="2025-03-17T18:18:13.712942073Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4351 runtime=io.containerd.runc.v2\n" Mar 17 18:18:14.614753 kubelet[2182]: E0317 18:18:14.614706 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:14.616901 env[1317]: time="2025-03-17T18:18:14.616861581Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:18:14.631468 env[1317]: time="2025-03-17T18:18:14.631428477Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688\"" Mar 17 18:18:14.631890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2654095055.mount: Deactivated successfully. 
Mar 17 18:18:14.632404 env[1317]: time="2025-03-17T18:18:14.632056087Z" level=info msg="StartContainer for \"68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688\"" Mar 17 18:18:14.675530 env[1317]: time="2025-03-17T18:18:14.675467251Z" level=info msg="StartContainer for \"68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688\" returns successfully" Mar 17 18:18:14.694182 env[1317]: time="2025-03-17T18:18:14.694095328Z" level=info msg="shim disconnected" id=68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688 Mar 17 18:18:14.694182 env[1317]: time="2025-03-17T18:18:14.694143249Z" level=warning msg="cleaning up after shim disconnected" id=68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688 namespace=k8s.io Mar 17 18:18:14.694182 env[1317]: time="2025-03-17T18:18:14.694152129Z" level=info msg="cleaning up dead shim" Mar 17 18:18:14.701282 env[1317]: time="2025-03-17T18:18:14.701229834Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:18:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4407 runtime=io.containerd.runc.v2\n" Mar 17 18:18:14.818509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68aa3f3350839b9f7d2df61c47b1c2f362989ccf7fe847a91cd96328ee3b8688-rootfs.mount: Deactivated successfully. Mar 17 18:18:15.465478 kubelet[2182]: E0317 18:18:15.465427 2182 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:18:15.619346 kubelet[2182]: E0317 18:18:15.619316 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:15.622295 env[1317]: time="2025-03-17T18:18:15.622248782Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:18:15.636995 env[1317]: time="2025-03-17T18:18:15.636953845Z" level=info msg="CreateContainer within sandbox \"a82e8bfab57b28a5fb282ea3eda05c864c9395fb746d2338c789dfb86c51556a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"948e162182e90186a368f059cfb41cc97b20e3358de8dfe02d362500b08db243\"" Mar 17 18:18:15.637544 env[1317]: time="2025-03-17T18:18:15.637517214Z" level=info msg="StartContainer for \"948e162182e90186a368f059cfb41cc97b20e3358de8dfe02d362500b08db243\"" Mar 17 18:18:15.683648 env[1317]: time="2025-03-17T18:18:15.683602556Z" level=info msg="StartContainer for \"948e162182e90186a368f059cfb41cc97b20e3358de8dfe02d362500b08db243\" returns successfully" Mar 17 18:18:15.921275 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Mar 17 18:18:16.623895 kubelet[2182]: E0317 18:18:16.623853 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:16.638943 kubelet[2182]: I0317 18:18:16.638629 2182 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lk4hw" podStartSLOduration=5.638613926 podStartE2EDuration="5.638613926s" podCreationTimestamp="2025-03-17 18:18:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:18:16.638280321 +0000 UTC m=+86.338979970" 
watchObservedRunningTime="2025-03-17 18:18:16.638613926 +0000 UTC m=+86.339313575" Mar 17 18:18:17.946280 kubelet[2182]: E0317 18:18:17.946239 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:18.880057 systemd-networkd[1099]: lxc_health: Link UP Mar 17 18:18:18.890125 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:18:18.890203 systemd-networkd[1099]: lxc_health: Gained carrier Mar 17 18:18:19.946525 kubelet[2182]: E0317 18:18:19.946481 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:20.254566 systemd-networkd[1099]: lxc_health: Gained IPv6LL Mar 17 18:18:20.631207 kubelet[2182]: E0317 18:18:20.631168 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:21.632084 kubelet[2182]: E0317 18:18:21.632026 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:22.461470 systemd[1]: run-containerd-runc-k8s.io-948e162182e90186a368f059cfb41cc97b20e3358de8dfe02d362500b08db243-runc.xnDMbV.mount: Deactivated successfully. Mar 17 18:18:24.405273 kubelet[2182]: E0317 18:18:24.405195 2182 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:18:24.647555 sshd[3987]: pam_unix(sshd:session): session closed for user core Mar 17 18:18:24.649873 systemd[1]: sshd@24-10.0.0.58:22-10.0.0.1:33774.service: Deactivated successfully. Mar 17 18:18:24.650887 systemd-logind[1302]: Session 25 logged out. Waiting for processes to exit. Mar 17 18:18:24.650969 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 18:18:24.651797 systemd-logind[1302]: Removed session 25.