Mar 17 18:23:30.818559 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:23:30.818580 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:23:30.818588 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:23:30.818594 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Mar 17 18:23:30.818599 kernel: random: crng init done
Mar 17 18:23:30.818605 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:23:30.818611 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Mar 17 18:23:30.818619 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:23:30.818625 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818630 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818635 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818641 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818647 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818653 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818661 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818675 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818684 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:23:30.818692 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 18:23:30.818700 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:23:30.818709 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:23:30.818716 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Mar 17 18:23:30.818722 kernel: Zone ranges:
Mar 17 18:23:30.818727 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:23:30.818736 kernel: DMA32 empty
Mar 17 18:23:30.818741 kernel: Normal empty
Mar 17 18:23:30.818747 kernel: Movable zone start for each node
Mar 17 18:23:30.818753 kernel: Early memory node ranges
Mar 17 18:23:30.818759 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Mar 17 18:23:30.818765 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Mar 17 18:23:30.818771 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Mar 17 18:23:30.818776 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Mar 17 18:23:30.818782 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Mar 17 18:23:30.818788 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Mar 17 18:23:30.818794 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Mar 17 18:23:30.818799 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:23:30.818806 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 18:23:30.818812 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:23:30.818818 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:23:30.818824 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:23:30.818830 kernel: psci: Trusted OS migration not required
Mar 17 18:23:30.818838 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:23:30.818845 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 18:23:30.818852 kernel: ACPI: SRAT not present
Mar 17 18:23:30.818859 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:23:30.818866 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:23:30.818872 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 18:23:30.818878 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:23:30.818885 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:23:30.818891 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:23:30.818897 kernel: CPU features: detected: Spectre-v4
Mar 17 18:23:30.818903 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:23:30.818910 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:23:30.818917 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:23:30.818923 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:23:30.818929 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:23:30.818935 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 18:23:30.818941 kernel: Policy zone: DMA
Mar 17 18:23:30.818948 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:23:30.818955 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:23:30.818961 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:23:30.818978 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:23:30.818985 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:23:30.818993 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Mar 17 18:23:30.819000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:23:30.819006 kernel: trace event string verifier disabled
Mar 17 18:23:30.819012 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:23:30.819020 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:23:30.819027 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:23:30.819033 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:23:30.819039 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:23:30.819046 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:23:30.819052 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:23:30.819059 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:23:30.819067 kernel: GICv3: 256 SPIs implemented
Mar 17 18:23:30.819074 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:23:30.819080 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:23:30.819086 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:23:30.819092 kernel: GICv3: 16 PPIs implemented
Mar 17 18:23:30.819099 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 18:23:30.819105 kernel: ACPI: SRAT not present
Mar 17 18:23:30.819111 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 18:23:30.819118 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:23:30.819125 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:23:30.819131 kernel: GICv3: using LPI property table @0x00000000400d0000
Mar 17 18:23:30.819138 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Mar 17 18:23:30.819145 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:23:30.819151 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:23:30.819158 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:23:30.819165 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:23:30.819171 kernel: arm-pv: using stolen time PV
Mar 17 18:23:30.819177 kernel: Console: colour dummy device 80x25
Mar 17 18:23:30.819184 kernel: ACPI: Core revision 20210730
Mar 17 18:23:30.819191 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:23:30.819198 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:23:30.819204 kernel: LSM: Security Framework initializing
Mar 17 18:23:30.819211 kernel: SELinux: Initializing.
Mar 17 18:23:30.819218 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:23:30.819224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:23:30.819230 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:23:30.819237 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 18:23:30.819243 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 18:23:30.819250 kernel: Remapping and enabling EFI services.
Mar 17 18:23:30.819256 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:23:30.819262 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:23:30.819270 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 18:23:30.819276 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Mar 17 18:23:30.819283 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:23:30.819289 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:23:30.819295 kernel: Detected PIPT I-cache on CPU2
Mar 17 18:23:30.819302 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 18:23:30.819308 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Mar 17 18:23:30.819315 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:23:30.819321 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 18:23:30.819328 kernel: Detected PIPT I-cache on CPU3
Mar 17 18:23:30.819335 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 18:23:30.819342 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Mar 17 18:23:30.819348 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:23:30.819354 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 18:23:30.819365 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:23:30.819373 kernel: SMP: Total of 4 processors activated.
Mar 17 18:23:30.819380 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:23:30.819387 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:23:30.819393 kernel: CPU features: detected: Common not Private translations
Mar 17 18:23:30.819400 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:23:30.819406 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:23:30.819413 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:23:30.819421 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:23:30.819428 kernel: CPU features: detected: RAS Extension Support
Mar 17 18:23:30.819436 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 18:23:30.819443 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:23:30.819450 kernel: alternatives: patching kernel code
Mar 17 18:23:30.819458 kernel: devtmpfs: initialized
Mar 17 18:23:30.819465 kernel: KASLR enabled
Mar 17 18:23:30.819472 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:23:30.819478 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:23:30.819485 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:23:30.819492 kernel: SMBIOS 3.0.0 present.
Mar 17 18:23:30.819498 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Mar 17 18:23:30.819505 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:23:30.819512 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:23:30.819520 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:23:30.819526 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:23:30.819533 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:23:30.819540 kernel: audit: type=2000 audit(0.078:1): state=initialized audit_enabled=0 res=1
Mar 17 18:23:30.819547 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:23:30.819553 kernel: cpuidle: using governor menu
Mar 17 18:23:30.819560 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:23:30.819567 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:23:30.819573 kernel: ACPI: bus type PCI registered
Mar 17 18:23:30.819581 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:23:30.819588 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:23:30.819595 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:23:30.819602 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:23:30.819609 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:23:30.819616 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:23:30.819622 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:23:30.819629 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:23:30.819636 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:23:30.819644 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:23:30.819651 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:23:30.819658 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:23:30.819664 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:23:30.819675 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:23:30.819682 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:23:30.819689 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:23:30.819696 kernel: ACPI: Interpreter enabled
Mar 17 18:23:30.819703 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:23:30.819711 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:23:30.819718 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:23:30.819724 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:23:30.819731 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:23:30.819863 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:23:30.819929 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:23:30.820003 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:23:30.820067 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 18:23:30.820126 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 18:23:30.820135 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 18:23:30.820142 kernel: PCI host bridge to bus 0000:00
Mar 17 18:23:30.820207 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 18:23:30.820264 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:23:30.820317 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 18:23:30.820369 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:23:30.820445 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 18:23:30.820516 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:23:30.820580 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 18:23:30.820641 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 18:23:30.820713 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:23:30.820776 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:23:30.820841 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 18:23:30.820912 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 18:23:30.820975 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 18:23:30.821036 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:23:30.821089 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 18:23:30.821098 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:23:30.821105 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:23:30.821112 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:23:30.821121 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:23:30.821128 kernel: iommu: Default domain type: Translated
Mar 17 18:23:30.821135 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:23:30.821142 kernel: vgaarb: loaded
Mar 17 18:23:30.821149 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:23:30.821156 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:23:30.821162 kernel: PTP clock support registered
Mar 17 18:23:30.821169 kernel: Registered efivars operations
Mar 17 18:23:30.821176 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:23:30.821184 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:23:30.821191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:23:30.821198 kernel: pnp: PnP ACPI init
Mar 17 18:23:30.821268 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 18:23:30.821278 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:23:30.821285 kernel: NET: Registered PF_INET protocol family
Mar 17 18:23:30.821292 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:23:30.821299 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:23:30.821307 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:23:30.821314 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:23:30.821321 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:23:30.821328 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:23:30.821334 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:23:30.821342 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:23:30.821348 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:23:30.821355 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:23:30.821362 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 18:23:30.821370 kernel: kvm [1]: HYP mode not available
Mar 17 18:23:30.821377 kernel: Initialise system trusted keyrings
Mar 17 18:23:30.821384 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:23:30.821390 kernel: Key type asymmetric registered
Mar 17 18:23:30.821397 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:23:30.821404 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:23:30.821411 kernel: io scheduler mq-deadline registered
Mar 17 18:23:30.821418 kernel: io scheduler kyber registered
Mar 17 18:23:30.821425 kernel: io scheduler bfq registered
Mar 17 18:23:30.821433 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:23:30.821440 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:23:30.821447 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:23:30.821508 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 18:23:30.821517 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:23:30.821524 kernel: thunder_xcv, ver 1.0
Mar 17 18:23:30.821531 kernel: thunder_bgx, ver 1.0
Mar 17 18:23:30.821537 kernel: nicpf, ver 1.0
Mar 17 18:23:30.821544 kernel: nicvf, ver 1.0
Mar 17 18:23:30.821613 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:23:30.821743 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:23:30 UTC (1742235810)
Mar 17 18:23:30.821755 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:23:30.821761 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:23:30.821768 kernel: Segment Routing with IPv6
Mar 17 18:23:30.821775 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:23:30.821782 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:23:30.821789 kernel: Key type dns_resolver registered
Mar 17 18:23:30.821799 kernel: registered taskstats version 1
Mar 17 18:23:30.821806 kernel: Loading compiled-in X.509 certificates
Mar 17 18:23:30.821813 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:23:30.821819 kernel: Key type .fscrypt registered
Mar 17 18:23:30.821826 kernel: Key type fscrypt-provisioning registered
Mar 17 18:23:30.821833 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:23:30.821840 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:23:30.821846 kernel: ima: No architecture policies found
Mar 17 18:23:30.821853 kernel: clk: Disabling unused clocks
Mar 17 18:23:30.821861 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:23:30.821868 kernel: Run /init as init process
Mar 17 18:23:30.821875 kernel: with arguments:
Mar 17 18:23:30.821882 kernel: /init
Mar 17 18:23:30.821889 kernel: with environment:
Mar 17 18:23:30.821895 kernel: HOME=/
Mar 17 18:23:30.821902 kernel: TERM=linux
Mar 17 18:23:30.821909 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:23:30.821918 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:23:30.821928 systemd[1]: Detected virtualization kvm.
Mar 17 18:23:30.821935 systemd[1]: Detected architecture arm64.
Mar 17 18:23:30.821942 systemd[1]: Running in initrd.
Mar 17 18:23:30.821949 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:23:30.821956 systemd[1]: Hostname set to .
Mar 17 18:23:30.821977 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:23:30.821985 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:23:30.821993 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:23:30.822000 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:23:30.822007 systemd[1]: Reached target paths.target.
Mar 17 18:23:30.822014 systemd[1]: Reached target slices.target.
Mar 17 18:23:30.822021 systemd[1]: Reached target swap.target.
Mar 17 18:23:30.822028 systemd[1]: Reached target timers.target.
Mar 17 18:23:30.822036 systemd[1]: Listening on iscsid.socket.
Mar 17 18:23:30.822044 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:23:30.822051 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:23:30.822058 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:23:30.822066 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:23:30.822073 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:23:30.822081 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:23:30.822088 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:23:30.822095 systemd[1]: Reached target sockets.target.
Mar 17 18:23:30.822102 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:23:30.822111 systemd[1]: Finished network-cleanup.service.
Mar 17 18:23:30.822118 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:23:30.822125 systemd[1]: Starting systemd-journald.service...
Mar 17 18:23:30.822132 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:23:30.822139 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:23:30.822146 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:23:30.822153 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:23:30.822161 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:23:30.822168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:23:30.822176 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:23:30.822183 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:23:30.822190 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:23:30.822198 kernel: audit: type=1130 audit(1742235810.819:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.822208 systemd-journald[291]: Journal started
Mar 17 18:23:30.822254 systemd-journald[291]: Runtime Journal (/run/log/journal/3c8ac23136184a8390dc52cbedd18e43) is 6.0M, max 48.7M, 42.6M free.
Mar 17 18:23:30.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.802204 systemd-modules-load[292]: Inserted module 'overlay'
Mar 17 18:23:30.823806 systemd[1]: Started systemd-journald.service.
Mar 17 18:23:30.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.826735 systemd-resolved[293]: Positive Trust Anchors:
Mar 17 18:23:30.827735 kernel: audit: type=1130 audit(1742235810.824:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.826749 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:23:30.826777 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:23:30.838462 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:23:30.838481 kernel: Bridge firewalling registered
Mar 17 18:23:30.831514 systemd-resolved[293]: Defaulting to hostname 'linux'.
Mar 17 18:23:30.843661 kernel: audit: type=1130 audit(1742235810.839:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.832374 systemd[1]: Started systemd-resolved.service.
Mar 17 18:23:30.838474 systemd-modules-load[292]: Inserted module 'br_netfilter'
Mar 17 18:23:30.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.839656 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:23:30.843892 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:23:30.850186 kernel: audit: type=1130 audit(1742235810.844:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.848500 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:23:30.857988 kernel: SCSI subsystem initialized
Mar 17 18:23:30.860325 dracut-cmdline[309]: dracut-dracut-053
Mar 17 18:23:30.862440 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:23:30.869260 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:23:30.869292 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:23:30.870155 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:23:30.872410 systemd-modules-load[292]: Inserted module 'dm_multipath'
Mar 17 18:23:30.880042 kernel: audit: type=1130 audit(1742235810.873:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.873573 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:23:30.875383 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:23:30.883271 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:23:30.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.886990 kernel: audit: type=1130 audit(1742235810.883:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.932991 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:23:30.947004 kernel: iscsi: registered transport (tcp)
Mar 17 18:23:30.961985 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:23:30.962007 kernel: QLogic iSCSI HBA Driver
Mar 17 18:23:30.995375 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:23:30.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:30.997075 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:23:30.999903 kernel: audit: type=1130 audit(1742235810.995:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.045991 kernel: raid6: neonx8 gen() 13794 MB/s
Mar 17 18:23:31.062980 kernel: raid6: neonx8 xor() 10816 MB/s
Mar 17 18:23:31.079981 kernel: raid6: neonx4 gen() 13549 MB/s
Mar 17 18:23:31.096985 kernel: raid6: neonx4 xor() 11204 MB/s
Mar 17 18:23:31.113985 kernel: raid6: neonx2 gen() 12940 MB/s
Mar 17 18:23:31.130977 kernel: raid6: neonx2 xor() 10584 MB/s
Mar 17 18:23:31.147977 kernel: raid6: neonx1 gen() 10548 MB/s
Mar 17 18:23:31.164986 kernel: raid6: neonx1 xor() 8787 MB/s
Mar 17 18:23:31.181979 kernel: raid6: int64x8 gen() 6273 MB/s
Mar 17 18:23:31.198988 kernel: raid6: int64x8 xor() 3541 MB/s
Mar 17 18:23:31.215988 kernel: raid6: int64x4 gen() 7207 MB/s
Mar 17 18:23:31.232989 kernel: raid6: int64x4 xor() 3850 MB/s
Mar 17 18:23:31.249987 kernel: raid6: int64x2 gen() 6153 MB/s
Mar 17 18:23:31.266989 kernel: raid6: int64x2 xor() 3320 MB/s
Mar 17 18:23:31.283979 kernel: raid6: int64x1 gen() 5047 MB/s
Mar 17 18:23:31.301192 kernel: raid6: int64x1 xor() 2647 MB/s
Mar 17 18:23:31.301214 kernel: raid6: using algorithm neonx8 gen() 13794 MB/s
Mar 17 18:23:31.301223 kernel: raid6: .... xor() 10816 MB/s, rmw enabled
Mar 17 18:23:31.301234 kernel: raid6: using neon recovery algorithm
Mar 17 18:23:31.312328 kernel: xor: measuring software checksum speed
Mar 17 18:23:31.312348 kernel: 8regs : 17231 MB/sec
Mar 17 18:23:31.312357 kernel: 32regs : 20712 MB/sec
Mar 17 18:23:31.313260 kernel: arm64_neon : 27747 MB/sec
Mar 17 18:23:31.313283 kernel: xor: using function: arm64_neon (27747 MB/sec)
Mar 17 18:23:31.369990 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:23:31.380114 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:23:31.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.382000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:23:31.384109 kernel: audit: type=1130 audit(1742235811.380:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.384135 kernel: audit: type=1334 audit(1742235811.382:10): prog-id=7 op=LOAD
Mar 17 18:23:31.383000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:23:31.384544 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:23:31.398546 systemd-udevd[492]: Using default interface naming scheme 'v252'.
Mar 17 18:23:31.401943 systemd[1]: Started systemd-udevd.service.
Mar 17 18:23:31.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.403558 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:23:31.415694 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Mar 17 18:23:31.444073 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:23:31.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.445628 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:23:31.481125 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:23:31.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:23:31.513583 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:23:31.515863 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:23:31.515878 kernel: GPT:9289727 != 19775487 Mar 17 18:23:31.515892 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 18:23:31.515902 kernel: GPT:9289727 != 19775487 Mar 17 18:23:31.515910 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:23:31.515920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:23:31.536616 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:23:31.538731 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551) Mar 17 18:23:31.537649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:23:31.548072 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:23:31.553413 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:23:31.556935 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:23:31.558905 systemd[1]: Starting disk-uuid.service... Mar 17 18:23:31.569799 disk-uuid[563]: Primary Header is updated. Mar 17 18:23:31.569799 disk-uuid[563]: Secondary Entries is updated. Mar 17 18:23:31.569799 disk-uuid[563]: Secondary Header is updated. Mar 17 18:23:31.573985 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:23:32.584870 disk-uuid[564]: The operation has completed successfully. Mar 17 18:23:32.585911 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:23:32.610508 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:23:32.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.610600 systemd[1]: Finished disk-uuid.service. 
Mar 17 18:23:32.612118 systemd[1]: Starting verity-setup.service... Mar 17 18:23:32.625990 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:23:32.653380 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:23:32.655602 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:23:32.657418 systemd[1]: Finished verity-setup.service. Mar 17 18:23:32.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.706994 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:23:32.707350 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:23:32.708240 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:23:32.709052 systemd[1]: Starting ignition-setup.service... Mar 17 18:23:32.711274 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:23:32.718339 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:23:32.718376 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:23:32.718387 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:23:32.726745 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:23:32.732960 systemd[1]: Finished ignition-setup.service. Mar 17 18:23:32.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.735320 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:23:32.822607 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:23:32.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:23:32.823000 audit: BPF prog-id=9 op=LOAD Mar 17 18:23:32.825082 systemd[1]: Starting systemd-networkd.service... Mar 17 18:23:32.863714 systemd-networkd[741]: lo: Link UP Mar 17 18:23:32.863725 systemd-networkd[741]: lo: Gained carrier Mar 17 18:23:32.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.864174 systemd-networkd[741]: Enumeration completed Mar 17 18:23:32.864283 systemd[1]: Started systemd-networkd.service. Mar 17 18:23:32.864362 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:23:32.865602 systemd-networkd[741]: eth0: Link UP Mar 17 18:23:32.865606 systemd-networkd[741]: eth0: Gained carrier Mar 17 18:23:32.865723 systemd[1]: Reached target network.target. Mar 17 18:23:32.867891 systemd[1]: Starting iscsiuio.service... Mar 17 18:23:32.888420 systemd[1]: Started iscsiuio.service. Mar 17 18:23:32.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.890415 systemd[1]: Starting iscsid.service... Mar 17 18:23:32.896062 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:23:32.897371 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:23:32.897371 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:23:32.897371 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:23:32.897371 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:23:32.897371 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:23:32.897371 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:23:32.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.904903 systemd[1]: Started iscsid.service. Mar 17 18:23:32.906888 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:23:32.912631 ignition[650]: Ignition 2.14.0 Mar 17 18:23:32.912645 ignition[650]: Stage: fetch-offline Mar 17 18:23:32.912700 ignition[650]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:32.912710 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:32.912906 ignition[650]: parsed url from cmdline: "" Mar 17 18:23:32.912910 ignition[650]: no config URL provided Mar 17 18:23:32.912915 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:23:32.912921 ignition[650]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:23:32.912940 ignition[650]: op(1): [started] loading QEMU firmware config module Mar 17 18:23:32.918936 systemd[1]: Finished dracut-initqueue.service. Mar 17 18:23:32.912946 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:23:32.920866 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:23:32.922300 systemd[1]: Reached target remote-cryptsetup.target. 
Mar 17 18:23:32.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.923727 systemd[1]: Reached target remote-fs.target. Mar 17 18:23:32.925903 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:23:32.930469 ignition[650]: op(1): [finished] loading QEMU firmware config module Mar 17 18:23:32.930496 ignition[650]: QEMU firmware config was not found. Ignoring... Mar 17 18:23:32.935106 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:23:32.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.956622 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.94 Mar 17 18:23:32.956636 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux9'. Mar 17 18:23:32.974290 ignition[650]: parsing config with SHA512: 3e4893527dfcd4fa17488953c9eaaaef53836369e37ae22dae0f8c3301caa4ff5900f5754285666d76fff1ebd7362b0d11e79b312576ea738bbfb2cae4e3f22d Mar 17 18:23:32.981490 unknown[650]: fetched base config from "system" Mar 17 18:23:32.981946 ignition[650]: fetch-offline: fetch-offline passed Mar 17 18:23:32.981503 unknown[650]: fetched user config from "qemu" Mar 17 18:23:32.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.982014 ignition[650]: Ignition finished successfully Mar 17 18:23:32.982992 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:23:32.984696 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). 
Mar 17 18:23:32.985500 systemd[1]: Starting ignition-kargs.service... Mar 17 18:23:32.994244 ignition[763]: Ignition 2.14.0 Mar 17 18:23:32.994254 ignition[763]: Stage: kargs Mar 17 18:23:32.994348 ignition[763]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:32.996556 systemd[1]: Finished ignition-kargs.service. Mar 17 18:23:32.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:32.994357 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:32.995311 ignition[763]: kargs: kargs passed Mar 17 18:23:32.998849 systemd[1]: Starting ignition-disks.service... Mar 17 18:23:32.995353 ignition[763]: Ignition finished successfully Mar 17 18:23:33.005490 ignition[769]: Ignition 2.14.0 Mar 17 18:23:33.005500 ignition[769]: Stage: disks Mar 17 18:23:33.005594 ignition[769]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:33.005603 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:33.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:33.007328 systemd[1]: Finished ignition-disks.service. Mar 17 18:23:33.006534 ignition[769]: disks: disks passed Mar 17 18:23:33.008578 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:23:33.006578 ignition[769]: Ignition finished successfully Mar 17 18:23:33.010136 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:23:33.011520 systemd[1]: Reached target local-fs.target. Mar 17 18:23:33.012675 systemd[1]: Reached target sysinit.target. Mar 17 18:23:33.013946 systemd[1]: Reached target basic.target. Mar 17 18:23:33.016065 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 18:23:33.030548 systemd-fsck[777]: ROOT: clean, 623/553520 files, 56021/553472 blocks Mar 17 18:23:33.034446 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:23:33.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:33.036222 systemd[1]: Mounting sysroot.mount... Mar 17 18:23:33.044984 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:23:33.045003 systemd[1]: Mounted sysroot.mount. Mar 17 18:23:33.045754 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:23:33.048534 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:23:33.049429 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:23:33.049470 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:23:33.049494 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:23:33.051388 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:23:33.053261 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:23:33.058053 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:23:33.063015 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:23:33.068214 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:23:33.071217 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:23:33.101426 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:23:33.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:33.103084 systemd[1]: Starting ignition-mount.service... Mar 17 18:23:33.104392 systemd[1]: Starting sysroot-boot.service... Mar 17 18:23:33.109225 bash[828]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:23:33.117260 ignition[830]: INFO : Ignition 2.14.0 Mar 17 18:23:33.118232 ignition[830]: INFO : Stage: mount Mar 17 18:23:33.118232 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:33.118232 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:33.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:33.121700 ignition[830]: INFO : mount: mount passed Mar 17 18:23:33.121700 ignition[830]: INFO : Ignition finished successfully Mar 17 18:23:33.119833 systemd[1]: Finished ignition-mount.service. Mar 17 18:23:33.131059 systemd[1]: Finished sysroot-boot.service. Mar 17 18:23:33.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:33.665012 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:23:33.671268 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) Mar 17 18:23:33.671306 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:23:33.671316 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:23:33.672206 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:23:33.674945 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:23:33.676575 systemd[1]: Starting ignition-files.service... 
Mar 17 18:23:33.690111 ignition[857]: INFO : Ignition 2.14.0 Mar 17 18:23:33.690111 ignition[857]: INFO : Stage: files Mar 17 18:23:33.691637 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:33.691637 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:33.691637 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:23:33.695157 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:23:33.695157 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:23:33.697841 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:23:33.697841 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:23:33.700408 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:23:33.700408 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:23:33.700408 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:23:33.699272 unknown[857]: wrote ssh authorized keys file for user: core Mar 17 18:23:33.834518 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 17 18:23:33.940482 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:23:33.942523 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:23:33.942523 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Mar 17 18:23:34.284137 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:23:34.338220 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:23:34.340049 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 Mar 17 18:23:34.599590 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 18:23:34.912220 systemd-networkd[741]: eth0: Gained IPv6LL Mar 17 18:23:34.918237 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" Mar 17 18:23:34.918237 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:23:34.922243 ignition[857]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:23:34.953201 ignition[857]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:23:34.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:34.955909 ignition[857]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:23:34.955909 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:23:34.955909 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:23:34.955909 ignition[857]: INFO : files: files passed Mar 17 18:23:34.955909 ignition[857]: INFO : Ignition finished successfully Mar 17 18:23:34.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:34.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:34.954974 systemd[1]: Finished ignition-files.service. Mar 17 18:23:34.956569 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:23:34.958005 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Mar 17 18:23:34.966817 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:23:34.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:34.958641 systemd[1]: Starting ignition-quench.service... Mar 17 18:23:34.970197 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:23:34.961874 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:23:34.961963 systemd[1]: Finished ignition-quench.service. Mar 17 18:23:34.965976 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:23:34.967737 systemd[1]: Reached target ignition-complete.target. Mar 17 18:23:34.970185 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:23:34.981987 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:23:34.982084 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:23:34.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:34.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:34.983554 systemd[1]: Reached target initrd-fs.target. Mar 17 18:23:34.984712 systemd[1]: Reached target initrd.target. Mar 17 18:23:34.986032 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:23:34.986715 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:23:34.996679 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:23:34.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:34.998166 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:23:35.005662 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:23:35.006570 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:23:35.007863 systemd[1]: Stopped target timers.target. Mar 17 18:23:35.009152 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:23:35.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.009257 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:23:35.010480 systemd[1]: Stopped target initrd.target. Mar 17 18:23:35.011753 systemd[1]: Stopped target basic.target. Mar 17 18:23:35.012860 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:23:35.014131 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:23:35.015336 systemd[1]: Stopped target initrd-root-device.target. Mar 17 18:23:35.016628 systemd[1]: Stopped target remote-fs.target. Mar 17 18:23:35.017790 systemd[1]: Stopped target remote-fs-pre.target. 
Mar 17 18:23:35.019100 systemd[1]: Stopped target sysinit.target. Mar 17 18:23:35.020207 systemd[1]: Stopped target local-fs.target. Mar 17 18:23:35.021413 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:23:35.022519 systemd[1]: Stopped target swap.target. Mar 17 18:23:35.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.023658 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:23:35.023766 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:23:35.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.025033 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:23:35.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.026063 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:23:35.026167 systemd[1]: Stopped dracut-initqueue.service. Mar 17 18:23:35.027575 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:23:35.027685 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:23:35.028928 systemd[1]: Stopped target paths.target. Mar 17 18:23:35.030044 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:23:35.035998 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:23:35.036860 systemd[1]: Stopped target slices.target. Mar 17 18:23:35.038091 systemd[1]: Stopped target sockets.target. Mar 17 18:23:35.039157 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:23:35.039228 systemd[1]: Closed iscsid.socket. 
Mar 17 18:23:35.044328 kernel: kauditd_printk_skb: 33 callbacks suppressed Mar 17 18:23:35.044355 kernel: audit: type=1131 audit(1742235815.041:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.040255 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:23:35.040352 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:23:35.041730 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:23:35.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.041821 systemd[1]: Stopped ignition-files.service. Mar 17 18:23:35.050546 kernel: audit: type=1131 audit(1742235815.045:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.047005 systemd[1]: Stopping ignition-mount.service... Mar 17 18:23:35.050053 systemd[1]: Stopping iscsiuio.service... Mar 17 18:23:35.052514 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 18:23:35.052656 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:23:35.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.054837 systemd[1]: Stopping sysroot-boot.service... 
Mar 17 18:23:35.058060 kernel: audit: type=1131 audit(1742235815.053:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.058080 ignition[897]: INFO : Ignition 2.14.0 Mar 17 18:23:35.058080 ignition[897]: INFO : Stage: umount Mar 17 18:23:35.058080 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:23:35.058080 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:23:35.058080 ignition[897]: INFO : umount: umount passed Mar 17 18:23:35.058080 ignition[897]: INFO : Ignition finished successfully Mar 17 18:23:35.075697 kernel: audit: type=1131 audit(1742235815.058:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.075721 kernel: audit: type=1131 audit(1742235815.062:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.075732 kernel: audit: type=1131 audit(1742235815.067:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.075750 kernel: audit: type=1131 audit(1742235815.070:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:35.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.057444 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:23:35.057582 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:23:35.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.059142 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:23:35.084948 kernel: audit: type=1131 audit(1742235815.077:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.084981 kernel: audit: type=1131 audit(1742235815.080:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.059238 systemd[1]: Stopped dracut-pre-trigger.service. 
Mar 17 18:23:35.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.064245 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:23:35.090653 kernel: audit: type=1131 audit(1742235815.085:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.064769 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:23:35.064856 systemd[1]: Stopped iscsiuio.service. Mar 17 18:23:35.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.067524 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:23:35.067598 systemd[1]: Stopped ignition-mount.service. Mar 17 18:23:35.070910 systemd[1]: Stopped target network.target. Mar 17 18:23:35.074279 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:23:35.097000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.074314 systemd[1]: Closed iscsiuio.socket. Mar 17 18:23:35.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.076302 systemd[1]: ignition-disks.service: Deactivated successfully. 
Mar 17 18:23:35.076348 systemd[1]: Stopped ignition-disks.service. Mar 17 18:23:35.078590 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:23:35.104000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:23:35.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.078640 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:23:35.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.081760 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 18:23:35.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.081798 systemd[1]: Stopped ignition-setup.service. Mar 17 18:23:35.085889 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:23:35.089393 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:23:35.091598 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:23:35.091697 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:23:35.095030 systemd-networkd[741]: eth0: DHCPv6 lease lost Mar 17 18:23:35.116000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:23:35.096295 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:23:35.096375 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:23:35.097726 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:23:35.097809 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:23:35.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:23:35.099945 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:23:35.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.100001 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:23:35.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.102569 systemd[1]: Stopping network-cleanup.service... Mar 17 18:23:35.103388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:23:35.103448 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:23:35.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.105509 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:23:35.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.105552 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:23:35.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.107621 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:23:35.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:35.107664 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:23:35.108672 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:23:35.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.114480 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:23:35.119310 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:23:35.119439 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:23:35.120796 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:23:35.120873 systemd[1]: Stopped sysroot-boot.service. Mar 17 18:23:35.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.121942 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:23:35.122035 systemd[1]: Stopped network-cleanup.service. Mar 17 18:23:35.123245 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:23:35.123278 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:23:35.124299 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:23:35.124326 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:23:35.125617 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:23:35.125667 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:23:35.126933 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Mar 17 18:23:35.126990 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:23:35.128211 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:23:35.128248 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:23:35.129621 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:23:35.129661 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:23:35.131481 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:23:35.132326 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:23:35.132393 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:23:35.136637 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:23:35.136724 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:23:35.138252 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:23:35.140125 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:23:35.146329 systemd[1]: Switching root. Mar 17 18:23:35.163209 iscsid[747]: iscsid shutting down. Mar 17 18:23:35.163950 systemd-journald[291]: Journal stopped Mar 17 18:23:37.222834 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Mar 17 18:23:37.222888 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:23:37.222904 kernel: SELinux: Class anon_inode not defined in policy. 
Mar 17 18:23:37.222914 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:23:37.222924 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:23:37.222934 kernel: SELinux: policy capability open_perms=1 Mar 17 18:23:37.222944 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:23:37.222954 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:23:37.222987 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:23:37.222999 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:23:37.223010 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:23:37.223020 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:23:37.223034 systemd[1]: Successfully loaded SELinux policy in 33.759ms. Mar 17 18:23:37.223050 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.868ms. Mar 17 18:23:37.223062 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:23:37.223073 systemd[1]: Detected virtualization kvm. Mar 17 18:23:37.223084 systemd[1]: Detected architecture arm64. Mar 17 18:23:37.223094 systemd[1]: Detected first boot. Mar 17 18:23:37.223109 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:23:37.223119 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Mar 17 18:23:37.223131 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:23:37.223144 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:23:37.223158 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:23:37.223170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:23:37.223181 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:23:37.223193 systemd[1]: Stopped iscsid.service. Mar 17 18:23:37.223205 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 17 18:23:37.223216 systemd[1]: Stopped initrd-switch-root.service. Mar 17 18:23:37.223227 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 17 18:23:37.223237 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:23:37.223247 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:23:37.223258 systemd[1]: Created slice system-getty.slice. Mar 17 18:23:37.223269 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:23:37.223280 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:23:37.223290 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:23:37.223301 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:23:37.223312 systemd[1]: Created slice user.slice. Mar 17 18:23:37.223322 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:23:37.223332 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:23:37.223343 systemd[1]: Set up automount boot.automount. Mar 17 18:23:37.223353 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:23:37.223365 systemd[1]: Stopped target initrd-switch-root.target. Mar 17 18:23:37.223376 systemd[1]: Stopped target initrd-fs.target. Mar 17 18:23:37.223386 systemd[1]: Stopped target initrd-root-fs.target. 
Mar 17 18:23:37.223397 systemd[1]: Reached target integritysetup.target. Mar 17 18:23:37.223407 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:23:37.223418 systemd[1]: Reached target remote-fs.target. Mar 17 18:23:37.223428 systemd[1]: Reached target slices.target. Mar 17 18:23:37.223438 systemd[1]: Reached target swap.target. Mar 17 18:23:37.223451 systemd[1]: Reached target torcx.target. Mar 17 18:23:37.223461 systemd[1]: Reached target veritysetup.target. Mar 17 18:23:37.223472 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:23:37.223482 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:23:37.223493 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:23:37.223503 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:23:37.223517 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:23:37.223528 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:23:37.223539 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:23:37.223549 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:23:37.223561 systemd[1]: Mounting media.mount... Mar 17 18:23:37.223572 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:23:37.223587 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:23:37.223603 systemd[1]: Mounting tmp.mount... Mar 17 18:23:37.223618 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:23:37.223630 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.223641 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:23:37.223652 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:23:37.223662 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:23:37.223675 systemd[1]: Starting modprobe@drm.service... Mar 17 18:23:37.223686 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:23:37.223696 systemd[1]: Starting modprobe@fuse.service... 
Mar 17 18:23:37.223707 systemd[1]: Starting modprobe@loop.service... Mar 17 18:23:37.223718 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:23:37.223729 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 17 18:23:37.223740 systemd[1]: Stopped systemd-fsck-root.service. Mar 17 18:23:37.223750 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 17 18:23:37.223761 systemd[1]: Stopped systemd-fsck-usr.service. Mar 17 18:23:37.223773 systemd[1]: Stopped systemd-journald.service. Mar 17 18:23:37.223784 systemd[1]: Starting systemd-journald.service... Mar 17 18:23:37.223795 kernel: fuse: init (API version 7.34) Mar 17 18:23:37.223805 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:23:37.223815 kernel: loop: module loaded Mar 17 18:23:37.223826 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:23:37.223837 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:23:37.223847 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:23:37.223858 systemd[1]: verity-setup.service: Deactivated successfully. Mar 17 18:23:37.223869 systemd[1]: Stopped verity-setup.service. Mar 17 18:23:37.223880 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:23:37.223890 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:23:37.223901 systemd[1]: Mounted media.mount. Mar 17 18:23:37.223911 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:23:37.223922 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:23:37.223932 systemd[1]: Mounted tmp.mount. Mar 17 18:23:37.223944 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:23:37.223955 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:23:37.223974 systemd[1]: Finished modprobe@configfs.service. Mar 17 18:23:37.223986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:23:37.224008 systemd[1]: Finished modprobe@dm_mod.service. 
Mar 17 18:23:37.224021 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:23:37.224031 systemd[1]: Finished modprobe@drm.service. Mar 17 18:23:37.224043 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:23:37.224054 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:23:37.224064 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:23:37.224075 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:23:37.224085 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:23:37.224095 systemd[1]: Finished modprobe@loop.service. Mar 17 18:23:37.224105 systemd[1]: Finished systemd-modules-load.service. Mar 17 18:23:37.224115 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:23:37.224125 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:23:37.224141 systemd[1]: Reached target network-pre.target. Mar 17 18:23:37.224155 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:23:37.224166 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:23:37.224176 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:23:37.224187 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:23:37.224198 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:23:37.224212 systemd-journald[990]: Journal started Mar 17 18:23:37.224254 systemd-journald[990]: Runtime Journal (/run/log/journal/3c8ac23136184a8390dc52cbedd18e43) is 6.0M, max 48.7M, 42.6M free. 
Mar 17 18:23:35.226000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 18:23:35.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:23:35.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:23:35.326000 audit: BPF prog-id=10 op=LOAD Mar 17 18:23:35.326000 audit: BPF prog-id=10 op=UNLOAD Mar 17 18:23:35.326000 audit: BPF prog-id=11 op=LOAD Mar 17 18:23:35.326000 audit: BPF prog-id=11 op=UNLOAD Mar 17 18:23:35.361000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Mar 17 18:23:35.361000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b2 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:23:35.361000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:23:35.362000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Mar 17 18:23:35.362000 audit[932]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:23:35.362000 audit: CWD cwd="/" Mar 17 18:23:35.362000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:23:35.362000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Mar 17 18:23:35.362000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Mar 17 18:23:37.045000 audit: BPF prog-id=12 op=LOAD Mar 17 18:23:37.045000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:23:37.045000 audit: BPF prog-id=13 op=LOAD Mar 17 18:23:37.045000 audit: BPF prog-id=14 op=LOAD Mar 17 18:23:37.045000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:23:37.045000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:23:37.046000 audit: BPF prog-id=15 op=LOAD Mar 17 18:23:37.046000 audit: BPF prog-id=12 op=UNLOAD Mar 17 18:23:37.046000 audit: BPF prog-id=16 op=LOAD Mar 17 18:23:37.046000 audit: BPF prog-id=17 op=LOAD Mar 17 18:23:37.046000 audit: BPF prog-id=13 op=UNLOAD Mar 17 18:23:37.046000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:23:37.047000 audit: BPF prog-id=18 op=LOAD Mar 17 18:23:37.047000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:23:37.047000 audit: BPF prog-id=19 op=LOAD Mar 17 18:23:37.047000 audit: BPF prog-id=20 op=LOAD Mar 17 18:23:37.047000 audit: BPF 
prog-id=16 op=UNLOAD Mar 17 18:23:37.047000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:23:37.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.055000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:23:37.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:37.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.158000 audit: BPF prog-id=21 op=LOAD Mar 17 18:23:37.158000 audit: BPF prog-id=22 op=LOAD Mar 17 18:23:37.158000 audit: BPF prog-id=23 op=LOAD Mar 17 18:23:37.158000 audit: BPF prog-id=19 op=UNLOAD Mar 17 18:23:37.158000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:23:37.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:37.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:37.206000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:23:37.206000 audit[990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdbf7a260 a2=4000 a3=1 items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:23:37.206000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:23:37.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:35.360976 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:23:37.044867 systemd[1]: Queued start job for default target multi-user.target. 
Mar 17 18:23:35.361242 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:23:37.044882 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:23:35.361261 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:23:37.049078 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 17 18:23:35.361292 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Mar 17 18:23:35.361302 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="skipped missing lower profile" missing profile=oem Mar 17 18:23:35.361330 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Mar 17 18:23:35.361341 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Mar 17 18:23:35.361527 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Mar 17 18:23:35.361559 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Mar 17 18:23:35.361570 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Mar 17 18:23:35.361983 
/usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Mar 17 18:23:35.362022 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Mar 17 18:23:35.362041 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 Mar 17 18:23:37.226980 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:23:35.362054 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Mar 17 18:23:35.362071 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 Mar 17 18:23:35.362083 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Mar 17 18:23:36.790954 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:23:36.791238 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd 
/bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:23:36.791341 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:23:36.791502 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Mar 17 18:23:36.791551 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Mar 17 18:23:36.791623 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-03-17T18:23:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Mar 17 18:23:37.227996 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.230528 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:23:37.235135 systemd[1]: Started systemd-journald.service. Mar 17 18:23:37.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:37.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.236180 systemd[1]: Finished flatcar-tmpfiles.service. Mar 17 18:23:37.237346 systemd[1]: Finished systemd-udev-trigger.service. Mar 17 18:23:37.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.238319 systemd[1]: Mounted sys-fs-fuse-connections.mount. Mar 17 18:23:37.239633 systemd[1]: Mounted sys-kernel-config.mount. Mar 17 18:23:37.241048 systemd[1]: Finished systemd-random-seed.service. Mar 17 18:23:37.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.242330 systemd[1]: Reached target first-boot-complete.target. Mar 17 18:23:37.244422 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:23:37.246632 systemd[1]: Starting systemd-sysusers.service... Mar 17 18:23:37.248898 systemd[1]: Starting systemd-udev-settle.service... Mar 17 18:23:37.251682 systemd-journald[990]: Time spent on flushing to /var/log/journal/3c8ac23136184a8390dc52cbedd18e43 is 13.878ms for 1012 entries. Mar 17 18:23:37.251682 systemd-journald[990]: System Journal (/var/log/journal/3c8ac23136184a8390dc52cbedd18e43) is 8.0M, max 195.6M, 187.6M free. Mar 17 18:23:37.280562 systemd-journald[990]: Received client request to flush runtime journal. 
Mar 17 18:23:37.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.252273 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:23:37.281688 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 17 18:23:37.281544 systemd[1]: Finished systemd-journal-flush.service. Mar 17 18:23:37.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.286732 systemd[1]: Finished systemd-sysusers.service. Mar 17 18:23:37.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.585000 audit: BPF prog-id=24 op=LOAD Mar 17 18:23:37.585000 audit: BPF prog-id=25 op=LOAD Mar 17 18:23:37.585000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:23:37.585000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:23:37.584830 systemd[1]: Finished systemd-hwdb-update.service. Mar 17 18:23:37.587027 systemd[1]: Starting systemd-udevd.service... Mar 17 18:23:37.602121 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Mar 17 18:23:37.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:23:37.614000 audit: BPF prog-id=26 op=LOAD Mar 17 18:23:37.613753 systemd[1]: Started systemd-udevd.service. Mar 17 18:23:37.616993 systemd[1]: Starting systemd-networkd.service... Mar 17 18:23:37.621000 audit: BPF prog-id=27 op=LOAD Mar 17 18:23:37.621000 audit: BPF prog-id=28 op=LOAD Mar 17 18:23:37.621000 audit: BPF prog-id=29 op=LOAD Mar 17 18:23:37.623083 systemd[1]: Starting systemd-userdbd.service... Mar 17 18:23:37.639357 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Mar 17 18:23:37.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.653777 systemd[1]: Started systemd-userdbd.service. Mar 17 18:23:37.665277 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:23:37.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.701048 systemd-networkd[1046]: lo: Link UP Mar 17 18:23:37.701057 systemd-networkd[1046]: lo: Gained carrier Mar 17 18:23:37.701368 systemd-networkd[1046]: Enumeration completed Mar 17 18:23:37.701451 systemd[1]: Started systemd-networkd.service. Mar 17 18:23:37.702765 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:23:37.706878 systemd-networkd[1046]: eth0: Link UP Mar 17 18:23:37.706888 systemd-networkd[1046]: eth0: Gained carrier Mar 17 18:23:37.723427 systemd[1]: Finished systemd-udev-settle.service. Mar 17 18:23:37.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:37.725217 systemd[1]: Starting lvm2-activation-early.service... Mar 17 18:23:37.729081 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:23:37.735120 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:23:37.773804 systemd[1]: Finished lvm2-activation-early.service. Mar 17 18:23:37.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.774832 systemd[1]: Reached target cryptsetup.target. Mar 17 18:23:37.776794 systemd[1]: Starting lvm2-activation.service... Mar 17 18:23:37.780703 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 18:23:37.808790 systemd[1]: Finished lvm2-activation.service. Mar 17 18:23:37.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.809771 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:23:37.810628 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 18:23:37.810658 systemd[1]: Reached target local-fs.target. Mar 17 18:23:37.811409 systemd[1]: Reached target machines.target. Mar 17 18:23:37.813351 systemd[1]: Starting ldconfig.service... Mar 17 18:23:37.814337 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.814407 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:23:37.815571 systemd[1]: Starting systemd-boot-update.service... Mar 17 18:23:37.817491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Mar 17 18:23:37.819830 systemd[1]: Starting systemd-machine-id-commit.service... Mar 17 18:23:37.822429 systemd[1]: Starting systemd-sysext.service... Mar 17 18:23:37.823710 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Mar 17 18:23:37.824900 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Mar 17 18:23:37.828613 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Mar 17 18:23:37.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.838940 systemd[1]: Unmounting usr-share-oem.mount... Mar 17 18:23:37.846763 systemd[1]: usr-share-oem.mount: Deactivated successfully. Mar 17 18:23:37.847016 systemd[1]: Unmounted usr-share-oem.mount. Mar 17 18:23:37.883995 kernel: loop0: detected capacity change from 0 to 189592 Mar 17 18:23:37.884807 systemd[1]: Finished systemd-machine-id-commit.service. Mar 17 18:23:37.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.896024 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 17 18:23:37.898629 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Mar 17 18:23:37.898629 systemd-fsck[1079]: /dev/vda1: 236 files, 117179/258078 clusters Mar 17 18:23:37.900817 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Mar 17 18:23:37.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.924114 kernel: loop1: detected capacity change from 0 to 189592 Mar 17 18:23:37.927828 (sd-sysext)[1084]: Using extensions 'kubernetes'. Mar 17 18:23:37.928144 (sd-sysext)[1084]: Merged extensions into '/usr'. Mar 17 18:23:37.948156 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.949673 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:23:37.951591 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:23:37.953478 systemd[1]: Starting modprobe@loop.service... Mar 17 18:23:37.954362 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.954494 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:23:37.955243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:23:37.955378 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:23:37.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.956690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:23:37.956797 systemd[1]: Finished modprobe@efi_pstore.service. 
Mar 17 18:23:37.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.958245 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:23:37.958352 systemd[1]: Finished modprobe@loop.service. Mar 17 18:23:37.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:37.959588 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:23:37.959689 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:23:37.996680 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 17 18:23:37.999934 systemd[1]: Finished ldconfig.service. Mar 17 18:23:37.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.178219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 17 18:23:38.179948 systemd[1]: Mounting boot.mount... 
Mar 17 18:23:38.181926 systemd[1]: Mounting usr-share-oem.mount... Mar 17 18:23:38.188206 systemd[1]: Mounted boot.mount. Mar 17 18:23:38.189157 systemd[1]: Mounted usr-share-oem.mount. Mar 17 18:23:38.191235 systemd[1]: Finished systemd-sysext.service. Mar 17 18:23:38.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.193336 systemd[1]: Starting ensure-sysext.service... Mar 17 18:23:38.195186 systemd[1]: Starting systemd-tmpfiles-setup.service... Mar 17 18:23:38.196393 systemd[1]: Finished systemd-boot-update.service. Mar 17 18:23:38.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.200564 systemd[1]: Reloading. Mar 17 18:23:38.207187 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Mar 17 18:23:38.209062 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 17 18:23:38.211892 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Mar 17 18:23:38.240085 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-03-17T18:23:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:23:38.240391 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-03-17T18:23:38Z" level=info msg="torcx already run" Mar 17 18:23:38.310986 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:23:38.311005 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:23:38.327141 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 17 18:23:38.370000 audit: BPF prog-id=30 op=LOAD Mar 17 18:23:38.370000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:23:38.370000 audit: BPF prog-id=31 op=LOAD Mar 17 18:23:38.370000 audit: BPF prog-id=32 op=LOAD Mar 17 18:23:38.370000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:23:38.370000 audit: BPF prog-id=29 op=UNLOAD Mar 17 18:23:38.373000 audit: BPF prog-id=33 op=LOAD Mar 17 18:23:38.373000 audit: BPF prog-id=34 op=LOAD Mar 17 18:23:38.373000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:23:38.373000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:23:38.373000 audit: BPF prog-id=35 op=LOAD Mar 17 18:23:38.373000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:23:38.374000 audit: BPF prog-id=36 op=LOAD Mar 17 18:23:38.374000 audit: BPF prog-id=37 op=LOAD Mar 17 18:23:38.374000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:23:38.374000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:23:38.374000 audit: BPF prog-id=38 op=LOAD Mar 17 18:23:38.374000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:23:38.377432 systemd[1]: Finished systemd-tmpfiles-setup.service. Mar 17 18:23:38.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.382209 systemd[1]: Starting audit-rules.service... Mar 17 18:23:38.384217 systemd[1]: Starting clean-ca-certificates.service... Mar 17 18:23:38.386463 systemd[1]: Starting systemd-journal-catalog-update.service... Mar 17 18:23:38.387000 audit: BPF prog-id=39 op=LOAD Mar 17 18:23:38.389279 systemd[1]: Starting systemd-resolved.service... Mar 17 18:23:38.390000 audit: BPF prog-id=40 op=LOAD Mar 17 18:23:38.392313 systemd[1]: Starting systemd-timesyncd.service... Mar 17 18:23:38.394671 systemd[1]: Starting systemd-update-utmp.service... Mar 17 18:23:38.404141 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Mar 17 18:23:38.399000 audit[1157]: SYSTEM_BOOT pid=1157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.406022 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:23:38.408101 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:23:38.411139 systemd[1]: Starting modprobe@loop.service... Mar 17 18:23:38.411800 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.411998 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:23:38.413192 systemd[1]: Finished clean-ca-certificates.service. Mar 17 18:23:38.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.414499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:23:38.414631 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:23:38.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.415000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.415986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:23:38.416115 systemd[1]: Finished modprobe@efi_pstore.service. 
Mar 17 18:23:38.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.417555 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:23:38.417683 systemd[1]: Finished modprobe@loop.service. Mar 17 18:23:38.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.419187 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:23:38.419321 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.419447 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:23:38.420958 systemd[1]: Finished systemd-update-utmp.service. Mar 17 18:23:38.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.422896 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Mar 17 18:23:38.424227 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:23:38.426420 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:23:38.428256 systemd[1]: Starting modprobe@loop.service... Mar 17 18:23:38.428974 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.429096 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:23:38.429203 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:23:38.429955 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:23:38.430096 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:23:38.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.431315 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:23:38.431428 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:23:38.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:38.432750 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:23:38.432860 systemd[1]: Finished modprobe@loop.service. Mar 17 18:23:38.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.434078 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:23:38.434166 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.436449 systemd[1]: Finished systemd-journal-catalog-update.service. Mar 17 18:23:38.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.437943 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.439191 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:23:38.441337 systemd[1]: Starting modprobe@drm.service... Mar 17 18:23:38.443203 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:23:38.446682 systemd[1]: Starting modprobe@loop.service... Mar 17 18:23:38.447468 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.447612 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Mar 17 18:23:38.448825 systemd[1]: Starting systemd-networkd-wait-online.service... Mar 17 18:23:38.451124 systemd[1]: Starting systemd-update-done.service... Mar 17 18:23:38.451945 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 18:23:38.453150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:23:38.453283 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:23:38.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.454507 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:23:38.454646 systemd[1]: Finished modprobe@drm.service. Mar 17 18:23:38.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.455812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:23:38.455940 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:23:38.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:23:38.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.457144 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:23:38.457265 systemd[1]: Finished modprobe@loop.service. Mar 17 18:23:38.458600 systemd[1]: Finished systemd-update-done.service. Mar 17 18:23:38.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.461122 systemd[1]: Finished ensure-sysext.service. Mar 17 18:23:38.461810 systemd-resolved[1155]: Positive Trust Anchors: Mar 17 18:23:38.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:23:38.461820 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 18:23:38.461846 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Mar 17 18:23:38.462148 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:23:38.462180 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.461000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Mar 17 18:23:38.461000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffffb1b270 a2=420 a3=0 items=0 ppid=1151 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:23:38.461000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Mar 17 18:23:38.462821 augenrules[1181]: No rules Mar 17 18:23:38.464226 systemd[1]: Finished audit-rules.service. Mar 17 18:23:38.478608 systemd-resolved[1155]: Defaulting to hostname 'linux'. Mar 17 18:23:38.479958 systemd[1]: Started systemd-resolved.service. Mar 17 18:23:38.480751 systemd[1]: Reached target network.target. Mar 17 18:23:38.481463 systemd[1]: Reached target nss-lookup.target. Mar 17 18:23:38.482288 systemd[1]: Started systemd-timesyncd.service. 
Mar 17 18:23:38.482835 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 18:23:38.482884 systemd-timesyncd[1156]: Initial clock synchronization to Mon 2025-03-17 18:23:38.308768 UTC. Mar 17 18:23:38.483352 systemd[1]: Reached target sysinit.target. Mar 17 18:23:38.484106 systemd[1]: Started motdgen.path. Mar 17 18:23:38.484757 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Mar 17 18:23:38.485844 systemd[1]: Started systemd-tmpfiles-clean.timer. Mar 17 18:23:38.486611 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 18:23:38.486639 systemd[1]: Reached target paths.target. Mar 17 18:23:38.487394 systemd[1]: Reached target time-set.target. Mar 17 18:23:38.488206 systemd[1]: Started logrotate.timer. Mar 17 18:23:38.488952 systemd[1]: Started mdadm.timer. Mar 17 18:23:38.489536 systemd[1]: Reached target timers.target. Mar 17 18:23:38.490503 systemd[1]: Listening on dbus.socket. Mar 17 18:23:38.492184 systemd[1]: Starting docker.socket... Mar 17 18:23:38.495253 systemd[1]: Listening on sshd.socket. Mar 17 18:23:38.496058 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:23:38.496464 systemd[1]: Listening on docker.socket. Mar 17 18:23:38.497272 systemd[1]: Reached target sockets.target. Mar 17 18:23:38.498010 systemd[1]: Reached target basic.target. Mar 17 18:23:38.498796 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.498826 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Mar 17 18:23:38.499738 systemd[1]: Starting containerd.service... Mar 17 18:23:38.501331 systemd[1]: Starting dbus.service... 
Mar 17 18:23:38.502960 systemd[1]: Starting enable-oem-cloudinit.service... Mar 17 18:23:38.504787 systemd[1]: Starting extend-filesystems.service... Mar 17 18:23:38.505655 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Mar 17 18:23:38.506979 systemd[1]: Starting motdgen.service... Mar 17 18:23:38.510606 jq[1193]: false Mar 17 18:23:38.512623 systemd[1]: Starting prepare-helm.service... Mar 17 18:23:38.514342 systemd[1]: Starting ssh-key-proc-cmdline.service... Mar 17 18:23:38.516239 systemd[1]: Starting sshd-keygen.service... Mar 17 18:23:38.520020 systemd[1]: Starting systemd-logind.service... Mar 17 18:23:38.520747 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Mar 17 18:23:38.520820 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 18:23:38.521215 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 18:23:38.532168 jq[1210]: true Mar 17 18:23:38.521874 systemd[1]: Starting update-engine.service... Mar 17 18:23:38.523667 systemd[1]: Starting update-ssh-keys-after-ignition.service... Mar 17 18:23:38.528961 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 18:23:38.529171 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Mar 17 18:23:38.530145 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 17 18:23:38.530326 systemd[1]: Finished ssh-key-proc-cmdline.service. Mar 17 18:23:38.531767 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 18:23:38.531910 systemd[1]: Finished motdgen.service. 
Mar 17 18:23:38.535505 extend-filesystems[1194]: Found loop1 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda1 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda2 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda3 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found usr Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda4 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda6 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda7 Mar 17 18:23:38.535505 extend-filesystems[1194]: Found vda9 Mar 17 18:23:38.535505 extend-filesystems[1194]: Checking size of /dev/vda9 Mar 17 18:23:38.537848 dbus-daemon[1192]: [system] SELinux support is enabled Mar 17 18:23:38.550751 tar[1214]: linux-arm64/helm Mar 17 18:23:38.538078 systemd[1]: Started dbus.service. Mar 17 18:23:38.541832 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 18:23:38.551016 jq[1217]: true Mar 17 18:23:38.541877 systemd[1]: Reached target system-config.target. Mar 17 18:23:38.542832 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 18:23:38.542847 systemd[1]: Reached target user-config.target. Mar 17 18:23:38.586950 extend-filesystems[1194]: Resized partition /dev/vda9 Mar 17 18:23:38.600302 extend-filesystems[1244]: resize2fs 1.46.5 (30-Dec-2021) Mar 17 18:23:38.607609 systemd-logind[1207]: Watching system buttons on /dev/input/event0 (Power Button) Mar 17 18:23:38.610377 systemd-logind[1207]: New seat seat0. Mar 17 18:23:38.613654 bash[1242]: Updated "/home/core/.ssh/authorized_keys" Mar 17 18:23:38.614421 systemd[1]: Finished update-ssh-keys-after-ignition.service. Mar 17 18:23:38.617531 systemd[1]: Started systemd-logind.service. 
Mar 17 18:23:38.621800 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 18:23:38.637596 update_engine[1208]: I0317 18:23:38.635833 1208 main.cc:92] Flatcar Update Engine starting Mar 17 18:23:38.643570 env[1216]: time="2025-03-17T18:23:38.643516400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Mar 17 18:23:38.645159 systemd[1]: Started update-engine.service. Mar 17 18:23:38.654612 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 18:23:38.647890 systemd[1]: Started locksmithd.service. Mar 17 18:23:38.654732 update_engine[1208]: I0317 18:23:38.645275 1208 update_check_scheduler.cc:74] Next update check in 7m13s Mar 17 18:23:38.659149 extend-filesystems[1244]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 18:23:38.659149 extend-filesystems[1244]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 18:23:38.659149 extend-filesystems[1244]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 18:23:38.663872 extend-filesystems[1194]: Resized filesystem in /dev/vda9 Mar 17 18:23:38.660525 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 18:23:38.660697 systemd[1]: Finished extend-filesystems.service. Mar 17 18:23:38.671106 env[1216]: time="2025-03-17T18:23:38.670146800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 18:23:38.671106 env[1216]: time="2025-03-17T18:23:38.670304120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.671693520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.671731040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.671944760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.671978280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.671993200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.672003400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.672079520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.672452120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.672562160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 18:23:38.674482 env[1216]: time="2025-03-17T18:23:38.672591040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 17 18:23:38.674722 env[1216]: time="2025-03-17T18:23:38.672645920Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Mar 17 18:23:38.674722 env[1216]: time="2025-03-17T18:23:38.672659000Z" level=info msg="metadata content store policy set" policy=shared Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675796080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675836800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675849960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675886240Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675900520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675917440Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.675931640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676266800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676286520Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676299360Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676313440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676326560Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676435160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 18:23:38.678320 env[1216]: time="2025-03-17T18:23:38.676522480Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.676745920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.676771840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.676785360Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677007760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677024120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677041080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677053680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677068440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677082040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677096320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677107960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677135320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677252600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677275080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678620 env[1216]: time="2025-03-17T18:23:38.677288200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Mar 17 18:23:38.680147 env[1216]: time="2025-03-17T18:23:38.677299880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:23:38.680147 env[1216]: time="2025-03-17T18:23:38.677313520Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:23:38.680147 env[1216]: time="2025-03-17T18:23:38.677324560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:23:38.680147 env[1216]: time="2025-03-17T18:23:38.677341600Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:23:38.680147 env[1216]: time="2025-03-17T18:23:38.677373840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 18:23:38.678948 systemd[1]: Started containerd.service. 
Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.677580680Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.677634520Z" level=info msg="Connect containerd service" Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.677666360Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.678364640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.678774080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.678817800Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:23:38.680314 env[1216]: time="2025-03-17T18:23:38.678863920Z" level=info msg="containerd successfully booted in 0.036101s" Mar 17 18:23:38.684638 env[1216]: time="2025-03-17T18:23:38.681853160Z" level=info msg="Start subscribing containerd event" Mar 17 18:23:38.684638 env[1216]: time="2025-03-17T18:23:38.681927080Z" level=info msg="Start recovering state" Mar 17 18:23:38.687119 env[1216]: time="2025-03-17T18:23:38.687025800Z" level=info msg="Start event monitor" Mar 17 18:23:38.687119 env[1216]: time="2025-03-17T18:23:38.687062400Z" level=info msg="Start snapshots syncer" Mar 17 18:23:38.687119 env[1216]: time="2025-03-17T18:23:38.687073760Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:23:38.687119 env[1216]: time="2025-03-17T18:23:38.687081240Z" level=info msg="Start streaming server" Mar 17 18:23:38.711109 locksmithd[1246]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:23:38.928114 tar[1214]: 
linux-arm64/LICENSE Mar 17 18:23:38.928227 tar[1214]: linux-arm64/README.md Mar 17 18:23:38.932310 systemd[1]: Finished prepare-helm.service. Mar 17 18:23:39.456125 systemd-networkd[1046]: eth0: Gained IPv6LL Mar 17 18:23:39.457859 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:23:39.459101 systemd[1]: Reached target network-online.target. Mar 17 18:23:39.461347 systemd[1]: Starting kubelet.service... Mar 17 18:23:39.959645 systemd[1]: Started kubelet.service. Mar 17 18:23:40.412539 kubelet[1260]: E0317 18:23:40.412431 1260 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:23:40.414519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:23:40.414636 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:23:43.049795 sshd_keygen[1212]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:23:43.067273 systemd[1]: Finished sshd-keygen.service. Mar 17 18:23:43.069895 systemd[1]: Starting issuegen.service... Mar 17 18:23:43.074258 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:23:43.074405 systemd[1]: Finished issuegen.service. Mar 17 18:23:43.076356 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:23:43.081989 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:23:43.084057 systemd[1]: Started getty@tty1.service. Mar 17 18:23:43.085995 systemd[1]: Started serial-getty@ttyAMA0.service. Mar 17 18:23:43.086912 systemd[1]: Reached target getty.target. Mar 17 18:23:43.087753 systemd[1]: Reached target multi-user.target. Mar 17 18:23:43.089902 systemd[1]: Starting systemd-update-utmp-runlevel.service... 
Mar 17 18:23:43.095981 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:23:43.096143 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:23:43.097046 systemd[1]: Startup finished in 630ms (kernel) + 4.608s (initrd) + 7.905s (userspace) = 13.144s. Mar 17 18:23:43.375345 systemd[1]: Created slice system-sshd.slice. Mar 17 18:23:43.377531 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:40604.service. Mar 17 18:23:43.424697 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 40604 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:23:43.426614 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.436293 systemd[1]: Created slice user-500.slice. Mar 17 18:23:43.437407 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:23:43.438892 systemd-logind[1207]: New session 1 of user core. Mar 17 18:23:43.445186 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:23:43.446729 systemd[1]: Starting user@500.service... Mar 17 18:23:43.449174 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.511038 systemd[1285]: Queued start job for default target default.target. Mar 17 18:23:43.511537 systemd[1285]: Reached target paths.target. Mar 17 18:23:43.511568 systemd[1285]: Reached target sockets.target. Mar 17 18:23:43.511580 systemd[1285]: Reached target timers.target. Mar 17 18:23:43.511589 systemd[1285]: Reached target basic.target. Mar 17 18:23:43.511643 systemd[1285]: Reached target default.target. Mar 17 18:23:43.511668 systemd[1285]: Startup finished in 56ms. Mar 17 18:23:43.511748 systemd[1]: Started user@500.service. Mar 17 18:23:43.512754 systemd[1]: Started session-1.scope. Mar 17 18:23:43.563077 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:40616.service. 
Mar 17 18:23:43.605565 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 40616 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:23:43.606877 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.610466 systemd-logind[1207]: New session 2 of user core. Mar 17 18:23:43.611664 systemd[1]: Started session-2.scope. Mar 17 18:23:43.664330 sshd[1294]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:43.667197 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:40616.service: Deactivated successfully. Mar 17 18:23:43.667784 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:23:43.668272 systemd-logind[1207]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:23:43.669323 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:40620.service. Mar 17 18:23:43.669999 systemd-logind[1207]: Removed session 2. Mar 17 18:23:43.712277 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 40620 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:23:43.714057 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.717317 systemd-logind[1207]: New session 3 of user core. Mar 17 18:23:43.718154 systemd[1]: Started session-3.scope. Mar 17 18:23:43.765868 sshd[1300]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:43.768396 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:40620.service: Deactivated successfully. Mar 17 18:23:43.768916 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:23:43.769417 systemd-logind[1207]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:23:43.770426 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:40628.service. Mar 17 18:23:43.771065 systemd-logind[1207]: Removed session 3. 
Mar 17 18:23:43.812920 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 40628 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:23:43.813998 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.817644 systemd[1]: Started session-4.scope. Mar 17 18:23:43.818024 systemd-logind[1207]: New session 4 of user core. Mar 17 18:23:43.869535 sshd[1306]: pam_unix(sshd:session): session closed for user core Mar 17 18:23:43.872006 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:40628.service: Deactivated successfully. Mar 17 18:23:43.872628 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:23:43.873131 systemd-logind[1207]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:23:43.874109 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:40632.service. Mar 17 18:23:43.874691 systemd-logind[1207]: Removed session 4. Mar 17 18:23:43.917279 sshd[1312]: Accepted publickey for core from 10.0.0.1 port 40632 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:23:43.918503 sshd[1312]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:23:43.921921 systemd-logind[1207]: New session 5 of user core. Mar 17 18:23:43.922740 systemd[1]: Started session-5.scope. Mar 17 18:23:43.983616 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:23:43.983838 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:23:44.039993 systemd[1]: Starting docker.service... 
Mar 17 18:23:44.119906 env[1328]: time="2025-03-17T18:23:44.119823448Z" level=info msg="Starting up" Mar 17 18:23:44.121430 env[1328]: time="2025-03-17T18:23:44.121400338Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:23:44.121519 env[1328]: time="2025-03-17T18:23:44.121504642Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:23:44.121597 env[1328]: time="2025-03-17T18:23:44.121581613Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:23:44.121658 env[1328]: time="2025-03-17T18:23:44.121645176Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:23:44.123798 env[1328]: time="2025-03-17T18:23:44.123771270Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:23:44.123923 env[1328]: time="2025-03-17T18:23:44.123907849Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:23:44.124027 env[1328]: time="2025-03-17T18:23:44.124008039Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:23:44.124090 env[1328]: time="2025-03-17T18:23:44.124070138Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:23:44.266140 env[1328]: time="2025-03-17T18:23:44.266060279Z" level=info msg="Loading containers: start." Mar 17 18:23:44.383611 kernel: Initializing XFRM netlink socket Mar 17 18:23:44.406033 env[1328]: time="2025-03-17T18:23:44.406001969Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Mar 17 18:23:44.453488 systemd-networkd[1046]: docker0: Link UP Mar 17 18:23:44.467040 env[1328]: time="2025-03-17T18:23:44.467005501Z" level=info msg="Loading containers: done." 
Mar 17 18:23:44.488861 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck489965006-merged.mount: Deactivated successfully. Mar 17 18:23:44.491117 env[1328]: time="2025-03-17T18:23:44.491081403Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:23:44.491250 env[1328]: time="2025-03-17T18:23:44.491233606Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:23:44.491333 env[1328]: time="2025-03-17T18:23:44.491319002Z" level=info msg="Daemon has completed initialization" Mar 17 18:23:44.505111 systemd[1]: Started docker.service. Mar 17 18:23:44.509370 env[1328]: time="2025-03-17T18:23:44.509323955Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:23:45.114782 env[1216]: time="2025-03-17T18:23:45.114735144Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 18:23:46.870510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477183013.mount: Deactivated successfully. 
Mar 17 18:23:48.430259 env[1216]: time="2025-03-17T18:23:48.430200002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:48.431519 env[1216]: time="2025-03-17T18:23:48.431486811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:48.433239 env[1216]: time="2025-03-17T18:23:48.433210426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:48.434727 env[1216]: time="2025-03-17T18:23:48.434697159Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:48.436276 env[1216]: time="2025-03-17T18:23:48.436241077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:26ae5fde2308729bfda71fa20aa73cb5a1a4490f107f62dc7e1c4c49823cc084\"" Mar 17 18:23:48.436876 env[1216]: time="2025-03-17T18:23:48.436844703Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 18:23:50.113041 env[1216]: time="2025-03-17T18:23:50.112978782Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:50.114499 env[1216]: time="2025-03-17T18:23:50.114454078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:23:50.116209 env[1216]: time="2025-03-17T18:23:50.116175934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:50.119039 env[1216]: time="2025-03-17T18:23:50.119005496Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:50.119909 env[1216]: time="2025-03-17T18:23:50.119863061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:3f2886c2c7c101461e78c37591f8beb12ac073f8dcf5e32c95da9e9689d0c1d3\"" Mar 17 18:23:50.120455 env[1216]: time="2025-03-17T18:23:50.120432953Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 18:23:50.586323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:23:50.586488 systemd[1]: Stopped kubelet.service. Mar 17 18:23:50.587838 systemd[1]: Starting kubelet.service... Mar 17 18:23:50.668916 systemd[1]: Started kubelet.service. Mar 17 18:23:50.700571 kubelet[1463]: E0317 18:23:50.700524 1463 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:23:50.703064 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:23:50.703185 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 18:23:51.758550 env[1216]: time="2025-03-17T18:23:51.758498667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:51.793197 env[1216]: time="2025-03-17T18:23:51.793161983Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:51.795383 env[1216]: time="2025-03-17T18:23:51.795340119Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:51.797238 env[1216]: time="2025-03-17T18:23:51.797204834Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:51.797876 env[1216]: time="2025-03-17T18:23:51.797838326Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:3dd474fdc8c0d007008dd47bafecdd344fbdace928731ae8b09f58f633f4a30f\"" Mar 17 18:23:51.798340 env[1216]: time="2025-03-17T18:23:51.798317259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 18:23:52.828230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount914025639.mount: Deactivated successfully. 
Mar 17 18:23:53.282983 env[1216]: time="2025-03-17T18:23:53.282931387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:53.284040 env[1216]: time="2025-03-17T18:23:53.284014820Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:53.285313 env[1216]: time="2025-03-17T18:23:53.285278807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:53.286521 env[1216]: time="2025-03-17T18:23:53.286487180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:53.286862 env[1216]: time="2025-03-17T18:23:53.286837880Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:939054a0dc9c7c1596b061fc2380758139ce62751b44a0b21b3afc7abd7eb3ff\"" Mar 17 18:23:53.287401 env[1216]: time="2025-03-17T18:23:53.287374195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:23:53.896669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19658004.mount: Deactivated successfully. 
Mar 17 18:23:54.870684 env[1216]: time="2025-03-17T18:23:54.870627917Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:54.872040 env[1216]: time="2025-03-17T18:23:54.872004106Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:54.873600 env[1216]: time="2025-03-17T18:23:54.873568900Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:54.875933 env[1216]: time="2025-03-17T18:23:54.875902272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:54.876581 env[1216]: time="2025-03-17T18:23:54.876548211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:23:54.877035 env[1216]: time="2025-03-17T18:23:54.877007937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 18:23:55.342615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3300615708.mount: Deactivated successfully. 
Mar 17 18:23:55.346413 env[1216]: time="2025-03-17T18:23:55.346368232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:55.347598 env[1216]: time="2025-03-17T18:23:55.347570537Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:55.348888 env[1216]: time="2025-03-17T18:23:55.348866401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:55.350426 env[1216]: time="2025-03-17T18:23:55.350407234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:55.350885 env[1216]: time="2025-03-17T18:23:55.350854083Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Mar 17 18:23:55.351570 env[1216]: time="2025-03-17T18:23:55.351547817Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 18:23:55.834582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569984947.mount: Deactivated successfully. 
Mar 17 18:23:58.746739 env[1216]: time="2025-03-17T18:23:58.746691975Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:58.748355 env[1216]: time="2025-03-17T18:23:58.748318692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:58.750466 env[1216]: time="2025-03-17T18:23:58.750435724Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:58.752624 env[1216]: time="2025-03-17T18:23:58.752599635Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:23:58.753525 env[1216]: time="2025-03-17T18:23:58.753502239Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Mar 17 18:24:00.836317 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:24:00.836487 systemd[1]: Stopped kubelet.service. Mar 17 18:24:00.837890 systemd[1]: Starting kubelet.service... Mar 17 18:24:00.919243 systemd[1]: Started kubelet.service. 
Mar 17 18:24:00.949438 kubelet[1494]: E0317 18:24:00.949394 1494 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:24:00.951559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:24:00.951679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:24:04.866093 systemd[1]: Stopped kubelet.service. Mar 17 18:24:04.868049 systemd[1]: Starting kubelet.service... Mar 17 18:24:04.888428 systemd[1]: Reloading. Mar 17 18:24:04.937556 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-03-17T18:24:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:24:04.937606 /usr/lib/systemd/system-generators/torcx-generator[1529]: time="2025-03-17T18:24:04Z" level=info msg="torcx already run" Mar 17 18:24:05.002425 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:24:05.002593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:24:05.017953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:24:05.082281 systemd[1]: Started kubelet.service. Mar 17 18:24:05.083796 systemd[1]: Stopping kubelet.service... 
Mar 17 18:24:05.084201 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:24:05.084474 systemd[1]: Stopped kubelet.service. Mar 17 18:24:05.086058 systemd[1]: Starting kubelet.service... Mar 17 18:24:05.169851 systemd[1]: Started kubelet.service. Mar 17 18:24:05.213762 kubelet[1574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:24:05.213762 kubelet[1574]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:24:05.213762 kubelet[1574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:24:05.214121 kubelet[1574]: I0317 18:24:05.213927 1574 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:24:06.229325 kubelet[1574]: I0317 18:24:06.229274 1574 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:24:06.229325 kubelet[1574]: I0317 18:24:06.229312 1574 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:24:06.229654 kubelet[1574]: I0317 18:24:06.229552 1574 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:24:06.269745 kubelet[1574]: I0317 18:24:06.269717 1574 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:24:06.269948 kubelet[1574]: E0317 18:24:06.269831 1574 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:06.275727 kubelet[1574]: E0317 18:24:06.275697 1574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:24:06.275842 kubelet[1574]: I0317 18:24:06.275729 1574 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:24:06.279154 kubelet[1574]: I0317 18:24:06.279132 1574 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:24:06.280066 kubelet[1574]: I0317 18:24:06.280041 1574 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:24:06.280223 kubelet[1574]: I0317 18:24:06.280191 1574 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:24:06.280381 kubelet[1574]: I0317 18:24:06.280218 1574 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Mar 17 18:24:06.280519 kubelet[1574]: I0317 18:24:06.280509 1574 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:24:06.280546 kubelet[1574]: I0317 18:24:06.280520 1574 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:24:06.280709 kubelet[1574]: I0317 18:24:06.280690 1574 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:06.282774 kubelet[1574]: I0317 18:24:06.282747 1574 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:24:06.282823 kubelet[1574]: I0317 18:24:06.282782 1574 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:24:06.282823 kubelet[1574]: I0317 18:24:06.282811 1574 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:24:06.282823 kubelet[1574]: I0317 18:24:06.282821 1574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:24:06.283595 kubelet[1574]: W0317 18:24:06.283554 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:06.283732 kubelet[1574]: E0317 18:24:06.283710 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:06.283810 kubelet[1574]: W0317 18:24:06.283613 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:06.283908 
kubelet[1574]: E0317 18:24:06.283892 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:06.286529 kubelet[1574]: I0317 18:24:06.286492 1574 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:24:06.288392 kubelet[1574]: I0317 18:24:06.288377 1574 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:24:06.289086 kubelet[1574]: W0317 18:24:06.289056 1574 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:24:06.289711 kubelet[1574]: I0317 18:24:06.289694 1574 server.go:1269] "Started kubelet" Mar 17 18:24:06.290070 kubelet[1574]: I0317 18:24:06.290040 1574 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:24:06.290866 kubelet[1574]: I0317 18:24:06.290822 1574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:24:06.291168 kubelet[1574]: I0317 18:24:06.291147 1574 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:24:06.295794 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Mar 17 18:24:06.295937 kubelet[1574]: I0317 18:24:06.295912 1574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:24:06.296057 kubelet[1574]: I0317 18:24:06.296037 1574 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:24:06.296368 kubelet[1574]: E0317 18:24:06.294859 1574 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182daa457d0c3978 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:24:06.289668472 +0000 UTC m=+1.116302196,LastTimestamp:2025-03-17 18:24:06.289668472 +0000 UTC m=+1.116302196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:24:06.297249 kubelet[1574]: I0317 18:24:06.297230 1574 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:24:06.297497 kubelet[1574]: E0317 18:24:06.297480 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:06.297553 kubelet[1574]: I0317 18:24:06.297542 1574 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:24:06.297603 kubelet[1574]: I0317 18:24:06.297593 1574 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:24:06.298630 kubelet[1574]: I0317 18:24:06.298436 1574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:24:06.299325 kubelet[1574]: W0317 18:24:06.299283 1574 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:06.299383 kubelet[1574]: E0317 18:24:06.299363 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:06.299383 kubelet[1574]: E0317 18:24:06.299293 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Mar 17 18:24:06.299533 kubelet[1574]: E0317 18:24:06.299512 1574 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:24:06.299566 kubelet[1574]: I0317 18:24:06.299554 1574 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:24:06.299648 kubelet[1574]: I0317 18:24:06.299624 1574 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:24:06.301725 kubelet[1574]: I0317 18:24:06.301703 1574 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:24:06.312038 kubelet[1574]: I0317 18:24:06.311991 1574 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:24:06.312883 kubelet[1574]: I0317 18:24:06.312851 1574 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:24:06.312883 kubelet[1574]: I0317 18:24:06.312877 1574 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:24:06.312941 kubelet[1574]: I0317 18:24:06.312895 1574 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:24:06.312972 kubelet[1574]: E0317 18:24:06.312934 1574 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:24:06.316254 kubelet[1574]: W0317 18:24:06.316199 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:06.316318 kubelet[1574]: E0317 18:24:06.316254 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:06.316703 kubelet[1574]: I0317 18:24:06.316682 1574 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:24:06.316703 kubelet[1574]: I0317 18:24:06.316697 1574 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:24:06.316781 kubelet[1574]: I0317 18:24:06.316713 1574 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:06.398054 kubelet[1574]: E0317 18:24:06.398015 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:06.400417 kubelet[1574]: I0317 18:24:06.400390 1574 policy_none.go:49] "None policy: Start" Mar 17 18:24:06.401056 kubelet[1574]: I0317 18:24:06.401039 1574 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:24:06.401119 kubelet[1574]: 
I0317 18:24:06.401068 1574 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:24:06.407502 systemd[1]: Created slice kubepods.slice. Mar 17 18:24:06.411473 systemd[1]: Created slice kubepods-burstable.slice. Mar 17 18:24:06.413886 kubelet[1574]: E0317 18:24:06.413863 1574 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 18:24:06.414098 systemd[1]: Created slice kubepods-besteffort.slice. Mar 17 18:24:06.427732 kubelet[1574]: I0317 18:24:06.427709 1574 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:24:06.427866 kubelet[1574]: I0317 18:24:06.427851 1574 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:24:06.427910 kubelet[1574]: I0317 18:24:06.427867 1574 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:24:06.428589 kubelet[1574]: I0317 18:24:06.428563 1574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:24:06.429405 kubelet[1574]: E0317 18:24:06.429376 1574 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:24:06.500825 kubelet[1574]: E0317 18:24:06.500741 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Mar 17 18:24:06.529682 kubelet[1574]: I0317 18:24:06.529651 1574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:06.531862 kubelet[1574]: E0317 18:24:06.531839 1574 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" 
node="localhost" Mar 17 18:24:06.620112 systemd[1]: Created slice kubepods-burstable-pod3f6568eb7ab8a9e9c63144560c36086f.slice. Mar 17 18:24:06.643019 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 17 18:24:06.658950 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. Mar 17 18:24:06.699510 kubelet[1574]: I0317 18:24:06.699464 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:06.699600 kubelet[1574]: I0317 18:24:06.699560 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:06.699600 kubelet[1574]: I0317 18:24:06.699580 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:06.699600 kubelet[1574]: I0317 18:24:06.699598 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:06.699678 kubelet[1574]: I0317 18:24:06.699640 1574 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:06.699678 kubelet[1574]: I0317 18:24:06.699662 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:06.699724 kubelet[1574]: I0317 18:24:06.699677 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:06.699745 kubelet[1574]: I0317 18:24:06.699723 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:06.699745 kubelet[1574]: I0317 18:24:06.699738 1574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:24:06.733761 kubelet[1574]: 
I0317 18:24:06.733705 1574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:06.734016 kubelet[1574]: E0317 18:24:06.733987 1574 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 17 18:24:06.901477 kubelet[1574]: E0317 18:24:06.901410 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Mar 17 18:24:06.940659 kubelet[1574]: E0317 18:24:06.940627 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:06.941347 env[1216]: time="2025-03-17T18:24:06.941299287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f6568eb7ab8a9e9c63144560c36086f,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:06.957651 kubelet[1574]: E0317 18:24:06.957440 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:06.957808 env[1216]: time="2025-03-17T18:24:06.957767946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:06.961705 kubelet[1574]: E0317 18:24:06.961654 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:06.962033 env[1216]: time="2025-03-17T18:24:06.961990008Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:07.133551 kubelet[1574]: W0317 18:24:07.133433 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:07.133551 kubelet[1574]: E0317 18:24:07.133494 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:07.136224 kubelet[1574]: I0317 18:24:07.135994 1574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:07.136292 kubelet[1574]: E0317 18:24:07.136238 1574 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 17 18:24:07.143036 kubelet[1574]: W0317 18:24:07.143004 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:07.143098 kubelet[1574]: E0317 18:24:07.143035 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:07.358980 kubelet[1574]: W0317 18:24:07.358939 1574 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:07.359286 kubelet[1574]: E0317 18:24:07.358997 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:07.466561 kubelet[1574]: W0317 18:24:07.466494 1574 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Mar 17 18:24:07.466561 kubelet[1574]: E0317 18:24:07.466561 1574 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Mar 17 18:24:07.484863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457194724.mount: Deactivated successfully. 
Mar 17 18:24:07.491737 env[1216]: time="2025-03-17T18:24:07.491696759Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.493489 env[1216]: time="2025-03-17T18:24:07.493453010Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.494217 env[1216]: time="2025-03-17T18:24:07.494155886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.495568 env[1216]: time="2025-03-17T18:24:07.495507387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.497045 env[1216]: time="2025-03-17T18:24:07.497014127Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.498522 env[1216]: time="2025-03-17T18:24:07.498494481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.499442 env[1216]: time="2025-03-17T18:24:07.499282633Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.501495 env[1216]: time="2025-03-17T18:24:07.500948371Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.503229 env[1216]: time="2025-03-17T18:24:07.503150951Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.505149 env[1216]: time="2025-03-17T18:24:07.505117573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.505756 env[1216]: time="2025-03-17T18:24:07.505729217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.506457 env[1216]: time="2025-03-17T18:24:07.506429335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:07.536308 env[1216]: time="2025-03-17T18:24:07.535290519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:07.536308 env[1216]: time="2025-03-17T18:24:07.535329858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:07.536308 env[1216]: time="2025-03-17T18:24:07.535345650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:07.536308 env[1216]: time="2025-03-17T18:24:07.535545907Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/366480dabb175f549a4485768b68736ebd2e2bb61006dd5066f8509b4212ee01 pid=1624 runtime=io.containerd.runc.v2 Mar 17 18:24:07.536522 env[1216]: time="2025-03-17T18:24:07.536455516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:07.536522 env[1216]: time="2025-03-17T18:24:07.536503131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:07.536578 env[1216]: time="2025-03-17T18:24:07.536518363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:07.536699 env[1216]: time="2025-03-17T18:24:07.536664368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d7f5e6f5ad494ac8d16d9e4ba160aa1c99a93739c70aab14c56dab56800795e pid=1623 runtime=io.containerd.runc.v2 Mar 17 18:24:07.539404 env[1216]: time="2025-03-17T18:24:07.539334466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:07.539404 env[1216]: time="2025-03-17T18:24:07.539370167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:07.539404 env[1216]: time="2025-03-17T18:24:07.539380562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:07.539622 env[1216]: time="2025-03-17T18:24:07.539589054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49f227723adbd2d4f930d83b75672ff0194f69140d9c24eee0e3c7531c13add7 pid=1635 runtime=io.containerd.runc.v2 Mar 17 18:24:07.551437 systemd[1]: Started cri-containerd-366480dabb175f549a4485768b68736ebd2e2bb61006dd5066f8509b4212ee01.scope. Mar 17 18:24:07.552348 systemd[1]: Started cri-containerd-5d7f5e6f5ad494ac8d16d9e4ba160aa1c99a93739c70aab14c56dab56800795e.scope. Mar 17 18:24:07.568957 systemd[1]: Started cri-containerd-49f227723adbd2d4f930d83b75672ff0194f69140d9c24eee0e3c7531c13add7.scope. Mar 17 18:24:07.626379 env[1216]: time="2025-03-17T18:24:07.626133107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"366480dabb175f549a4485768b68736ebd2e2bb61006dd5066f8509b4212ee01\"" Mar 17 18:24:07.627642 kubelet[1574]: E0317 18:24:07.627602 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:07.629276 env[1216]: time="2025-03-17T18:24:07.629244457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"49f227723adbd2d4f930d83b75672ff0194f69140d9c24eee0e3c7531c13add7\"" Mar 17 18:24:07.629510 env[1216]: time="2025-03-17T18:24:07.629488131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3f6568eb7ab8a9e9c63144560c36086f,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d7f5e6f5ad494ac8d16d9e4ba160aa1c99a93739c70aab14c56dab56800795e\"" Mar 17 18:24:07.630552 kubelet[1574]: E0317 18:24:07.630510 1574 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:07.630700 kubelet[1574]: E0317 18:24:07.630680 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:07.630954 env[1216]: time="2025-03-17T18:24:07.630881730Z" level=info msg="CreateContainer within sandbox \"366480dabb175f549a4485768b68736ebd2e2bb61006dd5066f8509b4212ee01\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:24:07.631836 env[1216]: time="2025-03-17T18:24:07.631788061Z" level=info msg="CreateContainer within sandbox \"49f227723adbd2d4f930d83b75672ff0194f69140d9c24eee0e3c7531c13add7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:24:07.632098 env[1216]: time="2025-03-17T18:24:07.632068955Z" level=info msg="CreateContainer within sandbox \"5d7f5e6f5ad494ac8d16d9e4ba160aa1c99a93739c70aab14c56dab56800795e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:24:07.650920 env[1216]: time="2025-03-17T18:24:07.650882259Z" level=info msg="CreateContainer within sandbox \"5d7f5e6f5ad494ac8d16d9e4ba160aa1c99a93739c70aab14c56dab56800795e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bfd813e477c6014af63cb8539a4d0b1285b9dfe92c478fb0675a4334848f98a\"" Mar 17 18:24:07.651627 env[1216]: time="2025-03-17T18:24:07.651593012Z" level=info msg="CreateContainer within sandbox \"366480dabb175f549a4485768b68736ebd2e2bb61006dd5066f8509b4212ee01\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bb1661ded7b2dd6009a449e384abce7bf74fc18e477a4d20585c2cbbadc0fb88\"" Mar 17 18:24:07.651891 env[1216]: time="2025-03-17T18:24:07.651861393Z" level=info msg="StartContainer for 
\"2bfd813e477c6014af63cb8539a4d0b1285b9dfe92c478fb0675a4334848f98a\"" Mar 17 18:24:07.652585 env[1216]: time="2025-03-17T18:24:07.652554634Z" level=info msg="CreateContainer within sandbox \"49f227723adbd2d4f930d83b75672ff0194f69140d9c24eee0e3c7531c13add7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9d24984c397334bfb3107d0ff565ca5c178b46de66c138662d7c82bb115c5af7\"" Mar 17 18:24:07.657627 env[1216]: time="2025-03-17T18:24:07.657465532Z" level=info msg="StartContainer for \"bb1661ded7b2dd6009a449e384abce7bf74fc18e477a4d20585c2cbbadc0fb88\"" Mar 17 18:24:07.657793 env[1216]: time="2025-03-17T18:24:07.657761779Z" level=info msg="StartContainer for \"9d24984c397334bfb3107d0ff565ca5c178b46de66c138662d7c82bb115c5af7\"" Mar 17 18:24:07.667048 systemd[1]: Started cri-containerd-2bfd813e477c6014af63cb8539a4d0b1285b9dfe92c478fb0675a4334848f98a.scope. Mar 17 18:24:07.673470 systemd[1]: Started cri-containerd-9d24984c397334bfb3107d0ff565ca5c178b46de66c138662d7c82bb115c5af7.scope. Mar 17 18:24:07.689488 systemd[1]: Started cri-containerd-bb1661ded7b2dd6009a449e384abce7bf74fc18e477a4d20585c2cbbadc0fb88.scope. 
Mar 17 18:24:07.702707 kubelet[1574]: E0317 18:24:07.702645 1574 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="1.6s" Mar 17 18:24:07.748240 env[1216]: time="2025-03-17T18:24:07.742104092Z" level=info msg="StartContainer for \"bb1661ded7b2dd6009a449e384abce7bf74fc18e477a4d20585c2cbbadc0fb88\" returns successfully" Mar 17 18:24:07.753270 env[1216]: time="2025-03-17T18:24:07.753120071Z" level=info msg="StartContainer for \"2bfd813e477c6014af63cb8539a4d0b1285b9dfe92c478fb0675a4334848f98a\" returns successfully" Mar 17 18:24:07.765778 env[1216]: time="2025-03-17T18:24:07.765719271Z" level=info msg="StartContainer for \"9d24984c397334bfb3107d0ff565ca5c178b46de66c138662d7c82bb115c5af7\" returns successfully" Mar 17 18:24:07.938244 kubelet[1574]: I0317 18:24:07.938143 1574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:07.938470 kubelet[1574]: E0317 18:24:07.938442 1574 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Mar 17 18:24:08.321919 kubelet[1574]: E0317 18:24:08.321828 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:08.324039 kubelet[1574]: E0317 18:24:08.324015 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:08.326072 kubelet[1574]: E0317 18:24:08.326050 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:09.327501 kubelet[1574]: E0317 18:24:09.327469 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:09.327817 kubelet[1574]: E0317 18:24:09.327610 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:09.366630 kubelet[1574]: E0317 18:24:09.366591 1574 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 17 18:24:09.540132 kubelet[1574]: I0317 18:24:09.540096 1574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:09.544800 kubelet[1574]: I0317 18:24:09.544774 1574 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 18:24:09.544838 kubelet[1574]: E0317 18:24:09.544805 1574 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 18:24:09.552605 kubelet[1574]: E0317 18:24:09.552577 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:09.652910 kubelet[1574]: E0317 18:24:09.652862 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:09.753523 kubelet[1574]: E0317 18:24:09.753475 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:09.854234 kubelet[1574]: E0317 18:24:09.854192 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:09.901877 kubelet[1574]: E0317 18:24:09.901834 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:09.955338 kubelet[1574]: E0317 18:24:09.955229 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:10.055813 kubelet[1574]: E0317 18:24:10.055778 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:10.156298 kubelet[1574]: E0317 18:24:10.156270 1574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:10.284927 kubelet[1574]: I0317 18:24:10.284821 1574 apiserver.go:52] "Watching apiserver" Mar 17 18:24:10.298365 kubelet[1574]: I0317 18:24:10.298333 1574 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:24:10.768935 kubelet[1574]: E0317 18:24:10.768893 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:11.149429 systemd[1]: Reloading. Mar 17 18:24:11.199366 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-03-17T18:24:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:24:11.199397 /usr/lib/systemd/system-generators/torcx-generator[1873]: time="2025-03-17T18:24:11Z" level=info msg="torcx already run" Mar 17 18:24:11.263389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:24:11.263589 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:24:11.280067 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:24:11.328919 kubelet[1574]: E0317 18:24:11.328889 1574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:11.363317 systemd[1]: Stopping kubelet.service... Mar 17 18:24:11.379422 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:24:11.379616 systemd[1]: Stopped kubelet.service. Mar 17 18:24:11.379662 systemd[1]: kubelet.service: Consumed 1.456s CPU time. Mar 17 18:24:11.381163 systemd[1]: Starting kubelet.service... Mar 17 18:24:11.465080 systemd[1]: Started kubelet.service. Mar 17 18:24:11.503543 kubelet[1914]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:24:11.503543 kubelet[1914]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:24:11.503543 kubelet[1914]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:24:11.503898 kubelet[1914]: I0317 18:24:11.503593 1914 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:24:11.510482 kubelet[1914]: I0317 18:24:11.510428 1914 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 18:24:11.510482 kubelet[1914]: I0317 18:24:11.510465 1914 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:24:11.510721 kubelet[1914]: I0317 18:24:11.510692 1914 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 18:24:11.513304 kubelet[1914]: I0317 18:24:11.513280 1914 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:24:11.515509 kubelet[1914]: I0317 18:24:11.515473 1914 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:24:11.518528 kubelet[1914]: E0317 18:24:11.518498 1914 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 18:24:11.518528 kubelet[1914]: I0317 18:24:11.518524 1914 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 18:24:11.520855 kubelet[1914]: I0317 18:24:11.520830 1914 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:24:11.521070 kubelet[1914]: I0317 18:24:11.521057 1914 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 18:24:11.521271 kubelet[1914]: I0317 18:24:11.521247 1914 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:24:11.521486 kubelet[1914]: I0317 18:24:11.521330 1914 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Mar 17 18:24:11.521621 kubelet[1914]: I0317 18:24:11.521607 1914 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 18:24:11.521684 kubelet[1914]: I0317 18:24:11.521674 1914 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 18:24:11.521765 kubelet[1914]: I0317 18:24:11.521755 1914 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:11.521923 kubelet[1914]: I0317 18:24:11.521910 1914 kubelet.go:408] "Attempting to sync node with API server" Mar 17 18:24:11.522025 kubelet[1914]: I0317 18:24:11.522013 1914 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:24:11.522104 kubelet[1914]: I0317 18:24:11.522093 1914 kubelet.go:314] "Adding apiserver pod source" Mar 17 18:24:11.522167 kubelet[1914]: I0317 18:24:11.522157 1914 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:24:11.525993 kubelet[1914]: I0317 18:24:11.522884 1914 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:24:11.525993 kubelet[1914]: I0317 18:24:11.523432 1914 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:24:11.525993 kubelet[1914]: I0317 18:24:11.523842 1914 server.go:1269] "Started kubelet" Mar 17 18:24:11.527000 kubelet[1914]: I0317 18:24:11.526945 1914 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:24:11.528727 kubelet[1914]: I0317 18:24:11.528705 1914 server.go:460] "Adding debug handlers to kubelet server" Mar 17 18:24:11.531638 kubelet[1914]: I0317 18:24:11.531591 1914 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:24:11.531955 kubelet[1914]: I0317 18:24:11.531940 1914 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:24:11.532918 
kubelet[1914]: I0317 18:24:11.532904 1914 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:24:11.537552 kubelet[1914]: I0317 18:24:11.537527 1914 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 18:24:11.542241 kubelet[1914]: I0317 18:24:11.542213 1914 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 18:24:11.542468 kubelet[1914]: E0317 18:24:11.542434 1914 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 18:24:11.542958 kubelet[1914]: I0317 18:24:11.542937 1914 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 18:24:11.543114 kubelet[1914]: I0317 18:24:11.543099 1914 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:24:11.551436 kubelet[1914]: E0317 18:24:11.545254 1914 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:24:11.553351 kubelet[1914]: I0317 18:24:11.553325 1914 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:24:11.553668 kubelet[1914]: I0317 18:24:11.553646 1914 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:24:11.571655 kubelet[1914]: I0317 18:24:11.571625 1914 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:24:11.578000 kubelet[1914]: I0317 18:24:11.576884 1914 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 18:24:11.581097 kubelet[1914]: I0317 18:24:11.581064 1914 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 18:24:11.581097 kubelet[1914]: I0317 18:24:11.581092 1914 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:24:11.581222 kubelet[1914]: I0317 18:24:11.581110 1914 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 18:24:11.581222 kubelet[1914]: E0317 18:24:11.581156 1914 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:24:11.605136 kubelet[1914]: I0317 18:24:11.605108 1914 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:24:11.605242 kubelet[1914]: I0317 18:24:11.605178 1914 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:24:11.605242 kubelet[1914]: I0317 18:24:11.605200 1914 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:24:11.605400 kubelet[1914]: I0317 18:24:11.605384 1914 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:24:11.605433 kubelet[1914]: I0317 18:24:11.605399 1914 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:24:11.605433 kubelet[1914]: I0317 18:24:11.605426 1914 policy_none.go:49] "None policy: Start" Mar 17 18:24:11.606047 kubelet[1914]: I0317 18:24:11.606025 1914 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:24:11.606109 kubelet[1914]: I0317 18:24:11.606055 1914 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:24:11.606222 kubelet[1914]: I0317 18:24:11.606207 1914 state_mem.go:75] "Updated machine memory state" Mar 17 18:24:11.610535 kubelet[1914]: I0317 18:24:11.610506 1914 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:24:11.610874 kubelet[1914]: I0317 18:24:11.610862 1914 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 18:24:11.610923 kubelet[1914]: I0317 18:24:11.610877 1914 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:24:11.611814 kubelet[1914]: I0317 18:24:11.611773 1914 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:24:11.687717 kubelet[1914]: E0317 18:24:11.687665 1914 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:11.714567 kubelet[1914]: I0317 18:24:11.714542 1914 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 18:24:11.722332 kubelet[1914]: I0317 18:24:11.721234 1914 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 18:24:11.722332 kubelet[1914]: I0317 18:24:11.721319 1914 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 18:24:11.844841 kubelet[1914]: I0317 18:24:11.844786 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:11.844841 kubelet[1914]: I0317 18:24:11.844838 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:11.845046 kubelet[1914]: I0317 18:24:11.844861 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:11.845046 kubelet[1914]: I0317 18:24:11.844876 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:11.845046 kubelet[1914]: I0317 18:24:11.844901 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:11.845046 kubelet[1914]: I0317 18:24:11.844918 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:11.845046 kubelet[1914]: I0317 18:24:11.844938 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:24:11.845163 kubelet[1914]: I0317 18:24:11.844955 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:24:11.845163 kubelet[1914]: I0317 18:24:11.845011 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f6568eb7ab8a9e9c63144560c36086f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3f6568eb7ab8a9e9c63144560c36086f\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:24:11.988072 kubelet[1914]: E0317 18:24:11.987942 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:11.988191 kubelet[1914]: E0317 18:24:11.988157 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:11.988617 kubelet[1914]: E0317 18:24:11.988277 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:12.145515 sudo[1948]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 17 18:24:12.145736 sudo[1948]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Mar 17 18:24:12.523209 kubelet[1914]: I0317 18:24:12.523158 1914 apiserver.go:52] "Watching apiserver" Mar 17 18:24:12.543388 kubelet[1914]: I0317 18:24:12.543342 1914 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 18:24:12.589637 kubelet[1914]: E0317 18:24:12.589596 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:12.590287 kubelet[1914]: E0317 18:24:12.590266 1914 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:12.590493 kubelet[1914]: E0317 18:24:12.590477 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:12.606650 sudo[1948]: pam_unix(sudo:session): session closed for user root Mar 17 18:24:12.610377 kubelet[1914]: I0317 18:24:12.610328 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.610315469 podStartE2EDuration="1.610315469s" podCreationTimestamp="2025-03-17 18:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:24:12.609942312 +0000 UTC m=+1.140349599" watchObservedRunningTime="2025-03-17 18:24:12.610315469 +0000 UTC m=+1.140722756" Mar 17 18:24:12.626490 kubelet[1914]: I0317 18:24:12.626436 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.626408874 podStartE2EDuration="2.626408874s" podCreationTimestamp="2025-03-17 18:24:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:24:12.626316755 +0000 UTC m=+1.156724042" watchObservedRunningTime="2025-03-17 18:24:12.626408874 +0000 UTC m=+1.156816161" Mar 17 18:24:12.626809 kubelet[1914]: I0317 18:24:12.626775 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6267677109999998 podStartE2EDuration="1.626767711s" podCreationTimestamp="2025-03-17 18:24:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2025-03-17 18:24:12.618954626 +0000 UTC m=+1.149361913" watchObservedRunningTime="2025-03-17 18:24:12.626767711 +0000 UTC m=+1.157174998" Mar 17 18:24:13.591018 kubelet[1914]: E0317 18:24:13.590959 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:13.591412 kubelet[1914]: E0317 18:24:13.591138 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:14.591946 kubelet[1914]: E0317 18:24:14.591894 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:14.976915 sudo[1315]: pam_unix(sudo:session): session closed for user root Mar 17 18:24:14.978903 sshd[1312]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:14.981535 systemd-logind[1207]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:24:14.981695 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:40632.service: Deactivated successfully. Mar 17 18:24:14.982413 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:24:14.982579 systemd[1]: session-5.scope: Consumed 8.770s CPU time. Mar 17 18:24:14.983172 systemd-logind[1207]: Removed session 5. Mar 17 18:24:16.740932 kubelet[1914]: I0317 18:24:16.740807 1914 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:24:16.741257 env[1216]: time="2025-03-17T18:24:16.741102746Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:24:16.741432 kubelet[1914]: I0317 18:24:16.741265 1914 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:24:17.731643 systemd[1]: Created slice kubepods-besteffort-pod2c6a9b41_ff52_46cd_8d79_13b8410c97d4.slice. Mar 17 18:24:17.747090 systemd[1]: Created slice kubepods-burstable-podbff794a9_06d2_4001_8e11_4beec7a745fc.slice. Mar 17 18:24:17.787424 kubelet[1914]: I0317 18:24:17.787378 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts95p\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-kube-api-access-ts95p\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787431 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c6a9b41-ff52-46cd-8d79-13b8410c97d4-xtables-lock\") pod \"kube-proxy-wnc5t\" (UID: \"2c6a9b41-ff52-46cd-8d79-13b8410c97d4\") " pod="kube-system/kube-proxy-wnc5t" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787453 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-hubble-tls\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787469 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c6a9b41-ff52-46cd-8d79-13b8410c97d4-lib-modules\") pod \"kube-proxy-wnc5t\" (UID: \"2c6a9b41-ff52-46cd-8d79-13b8410c97d4\") " pod="kube-system/kube-proxy-wnc5t" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787486 1914 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-run\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787510 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-hostproc\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787748 kubelet[1914]: I0317 18:24:17.787526 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-cgroup\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787541 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-lib-modules\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787558 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-config-path\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787579 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/2c6a9b41-ff52-46cd-8d79-13b8410c97d4-kube-proxy\") pod \"kube-proxy-wnc5t\" (UID: \"2c6a9b41-ff52-46cd-8d79-13b8410c97d4\") " pod="kube-system/kube-proxy-wnc5t" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787595 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cni-path\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787611 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bff794a9-06d2-4001-8e11-4beec7a745fc-clustermesh-secrets\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.787904 kubelet[1914]: I0317 18:24:17.787625 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-bpf-maps\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.788063 kubelet[1914]: I0317 18:24:17.787639 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-xtables-lock\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.788063 kubelet[1914]: I0317 18:24:17.787662 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdw4t\" (UniqueName: \"kubernetes.io/projected/2c6a9b41-ff52-46cd-8d79-13b8410c97d4-kube-api-access-mdw4t\") pod \"kube-proxy-wnc5t\" (UID: 
\"2c6a9b41-ff52-46cd-8d79-13b8410c97d4\") " pod="kube-system/kube-proxy-wnc5t" Mar 17 18:24:17.788063 kubelet[1914]: I0317 18:24:17.787680 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-etc-cni-netd\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.788063 kubelet[1914]: I0317 18:24:17.787695 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-net\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.788063 kubelet[1914]: I0317 18:24:17.787710 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-kernel\") pod \"cilium-khwg5\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") " pod="kube-system/cilium-khwg5" Mar 17 18:24:17.888462 kubelet[1914]: I0317 18:24:17.888422 1914 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 17 18:24:17.934230 systemd[1]: Created slice kubepods-besteffort-pod25ec6b0f_1b72_4713_bf49_030804f057f0.slice. 
Mar 17 18:24:17.988544 kubelet[1914]: I0317 18:24:17.988439 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdsf4\" (UniqueName: \"kubernetes.io/projected/25ec6b0f-1b72-4713-bf49-030804f057f0-kube-api-access-pdsf4\") pod \"cilium-operator-5d85765b45-cnf2x\" (UID: \"25ec6b0f-1b72-4713-bf49-030804f057f0\") " pod="kube-system/cilium-operator-5d85765b45-cnf2x" Mar 17 18:24:17.988544 kubelet[1914]: I0317 18:24:17.988479 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ec6b0f-1b72-4713-bf49-030804f057f0-cilium-config-path\") pod \"cilium-operator-5d85765b45-cnf2x\" (UID: \"25ec6b0f-1b72-4713-bf49-030804f057f0\") " pod="kube-system/cilium-operator-5d85765b45-cnf2x" Mar 17 18:24:18.043150 kubelet[1914]: E0317 18:24:18.043120 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.043679 env[1216]: time="2025-03-17T18:24:18.043633720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnc5t,Uid:2c6a9b41-ff52-46cd-8d79-13b8410c97d4,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:18.049870 kubelet[1914]: E0317 18:24:18.049846 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.052177 env[1216]: time="2025-03-17T18:24:18.050216995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khwg5,Uid:bff794a9-06d2-4001-8e11-4beec7a745fc,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:18.057197 env[1216]: time="2025-03-17T18:24:18.057131947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:18.057197 env[1216]: time="2025-03-17T18:24:18.057176707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:18.057197 env[1216]: time="2025-03-17T18:24:18.057187867Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:18.057564 env[1216]: time="2025-03-17T18:24:18.057520225Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/87cec07068d374c6cf542ca21cd31c16ffa956813c4c4a7552bcdece7198c47a pid=2007 runtime=io.containerd.runc.v2 Mar 17 18:24:18.065058 env[1216]: time="2025-03-17T18:24:18.064878334Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:18.065058 env[1216]: time="2025-03-17T18:24:18.065035613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:18.065282 env[1216]: time="2025-03-17T18:24:18.065234252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:18.065788 env[1216]: time="2025-03-17T18:24:18.065695728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224 pid=2029 runtime=io.containerd.runc.v2 Mar 17 18:24:18.069852 systemd[1]: Started cri-containerd-87cec07068d374c6cf542ca21cd31c16ffa956813c4c4a7552bcdece7198c47a.scope. Mar 17 18:24:18.077686 systemd[1]: Started cri-containerd-fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224.scope. 
Mar 17 18:24:18.112665 env[1216]: time="2025-03-17T18:24:18.112610806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wnc5t,Uid:2c6a9b41-ff52-46cd-8d79-13b8410c97d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"87cec07068d374c6cf542ca21cd31c16ffa956813c4c4a7552bcdece7198c47a\"" Mar 17 18:24:18.113593 kubelet[1914]: E0317 18:24:18.113349 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.116284 env[1216]: time="2025-03-17T18:24:18.116248541Z" level=info msg="CreateContainer within sandbox \"87cec07068d374c6cf542ca21cd31c16ffa956813c4c4a7552bcdece7198c47a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:24:18.117992 env[1216]: time="2025-03-17T18:24:18.117940209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-khwg5,Uid:bff794a9-06d2-4001-8e11-4beec7a745fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\"" Mar 17 18:24:18.118399 kubelet[1914]: E0317 18:24:18.118377 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.119916 env[1216]: time="2025-03-17T18:24:18.119872476Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 17 18:24:18.133128 env[1216]: time="2025-03-17T18:24:18.133078545Z" level=info msg="CreateContainer within sandbox \"87cec07068d374c6cf542ca21cd31c16ffa956813c4c4a7552bcdece7198c47a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fd16b9cb8d0c7842d9957317baa80799c3158dd9a678c233ba28572a766f1c64\"" Mar 17 18:24:18.133642 env[1216]: time="2025-03-17T18:24:18.133583542Z" level=info msg="StartContainer for 
\"fd16b9cb8d0c7842d9957317baa80799c3158dd9a678c233ba28572a766f1c64\"" Mar 17 18:24:18.147735 systemd[1]: Started cri-containerd-fd16b9cb8d0c7842d9957317baa80799c3158dd9a678c233ba28572a766f1c64.scope. Mar 17 18:24:18.186485 env[1216]: time="2025-03-17T18:24:18.186426018Z" level=info msg="StartContainer for \"fd16b9cb8d0c7842d9957317baa80799c3158dd9a678c233ba28572a766f1c64\" returns successfully" Mar 17 18:24:18.236864 kubelet[1914]: E0317 18:24:18.236821 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.237563 env[1216]: time="2025-03-17T18:24:18.237527547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cnf2x,Uid:25ec6b0f-1b72-4713-bf49-030804f057f0,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:18.252477 env[1216]: time="2025-03-17T18:24:18.252357685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:18.252629 env[1216]: time="2025-03-17T18:24:18.252396684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:18.252629 env[1216]: time="2025-03-17T18:24:18.252407404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:18.253205 env[1216]: time="2025-03-17T18:24:18.252840241Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d pid=2122 runtime=io.containerd.runc.v2 Mar 17 18:24:18.263245 systemd[1]: Started cri-containerd-96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d.scope. 
Mar 17 18:24:18.321792 env[1216]: time="2025-03-17T18:24:18.321544129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-cnf2x,Uid:25ec6b0f-1b72-4713-bf49-030804f057f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\"" Mar 17 18:24:18.323022 kubelet[1914]: E0317 18:24:18.322531 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:18.599517 kubelet[1914]: E0317 18:24:18.599487 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:20.212360 kubelet[1914]: E0317 18:24:20.212317 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:20.233082 kubelet[1914]: I0317 18:24:20.232861 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wnc5t" podStartSLOduration=3.232846827 podStartE2EDuration="3.232846827s" podCreationTimestamp="2025-03-17 18:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:24:18.609985586 +0000 UTC m=+7.140392873" watchObservedRunningTime="2025-03-17 18:24:20.232846827 +0000 UTC m=+8.763254114" Mar 17 18:24:20.606335 kubelet[1914]: E0317 18:24:20.606294 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:22.102796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount472284652.mount: Deactivated successfully. 
Mar 17 18:24:22.985306 kubelet[1914]: E0317 18:24:22.985233 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:23.150388 kubelet[1914]: E0317 18:24:23.150129 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:23.611515 kubelet[1914]: E0317 18:24:23.611476 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:23.956131 update_engine[1208]: I0317 18:24:23.956032 1208 update_attempter.cc:509] Updating boot flags... Mar 17 18:24:24.247646 env[1216]: time="2025-03-17T18:24:24.247532514Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:24.249182 env[1216]: time="2025-03-17T18:24:24.249155426Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:24.251118 env[1216]: time="2025-03-17T18:24:24.251086056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:24.251768 env[1216]: time="2025-03-17T18:24:24.251738333Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Mar 17 18:24:24.269050 env[1216]: time="2025-03-17T18:24:24.269008886Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 17 18:24:24.279753 env[1216]: time="2025-03-17T18:24:24.279712512Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 17 18:24:24.289269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1210096289.mount: Deactivated successfully. Mar 17 18:24:24.293529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1810701832.mount: Deactivated successfully. Mar 17 18:24:24.295385 env[1216]: time="2025-03-17T18:24:24.295270394Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\"" Mar 17 18:24:24.295976 env[1216]: time="2025-03-17T18:24:24.295936311Z" level=info msg="StartContainer for \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\"" Mar 17 18:24:24.315804 systemd[1]: Started cri-containerd-72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237.scope. Mar 17 18:24:24.416009 systemd[1]: cri-containerd-72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237.scope: Deactivated successfully. 
Mar 17 18:24:24.467413 env[1216]: time="2025-03-17T18:24:24.467356168Z" level=info msg="StartContainer for \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\" returns successfully" Mar 17 18:24:24.563593 env[1216]: time="2025-03-17T18:24:24.563170126Z" level=info msg="shim disconnected" id=72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237 Mar 17 18:24:24.563593 env[1216]: time="2025-03-17T18:24:24.563221046Z" level=warning msg="cleaning up after shim disconnected" id=72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237 namespace=k8s.io Mar 17 18:24:24.563593 env[1216]: time="2025-03-17T18:24:24.563230806Z" level=info msg="cleaning up dead shim" Mar 17 18:24:24.569785 env[1216]: time="2025-03-17T18:24:24.569742613Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2349 runtime=io.containerd.runc.v2\n" Mar 17 18:24:24.614921 kubelet[1914]: E0317 18:24:24.614723 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:24.618985 env[1216]: time="2025-03-17T18:24:24.618934686Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 17 18:24:24.631512 env[1216]: time="2025-03-17T18:24:24.631468503Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\"" Mar 17 18:24:24.635278 env[1216]: time="2025-03-17T18:24:24.635246284Z" level=info msg="StartContainer for \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\"" Mar 17 18:24:24.648833 systemd[1]: Started 
cri-containerd-584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f.scope. Mar 17 18:24:24.690165 env[1216]: time="2025-03-17T18:24:24.690112368Z" level=info msg="StartContainer for \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\" returns successfully" Mar 17 18:24:24.706518 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:24:24.706749 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:24:24.706921 systemd[1]: Stopping systemd-sysctl.service... Mar 17 18:24:24.708347 systemd[1]: Starting systemd-sysctl.service... Mar 17 18:24:24.709340 systemd[1]: cri-containerd-584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f.scope: Deactivated successfully. Mar 17 18:24:24.716435 systemd[1]: Finished systemd-sysctl.service. Mar 17 18:24:24.730007 env[1216]: time="2025-03-17T18:24:24.729924487Z" level=info msg="shim disconnected" id=584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f Mar 17 18:24:24.730282 env[1216]: time="2025-03-17T18:24:24.730259606Z" level=warning msg="cleaning up after shim disconnected" id=584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f namespace=k8s.io Mar 17 18:24:24.730683 env[1216]: time="2025-03-17T18:24:24.730654604Z" level=info msg="cleaning up dead shim" Mar 17 18:24:24.739212 env[1216]: time="2025-03-17T18:24:24.739173161Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2413 runtime=io.containerd.runc.v2\n" Mar 17 18:24:25.287683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237-rootfs.mount: Deactivated successfully. 
Mar 17 18:24:25.618608 kubelet[1914]: E0317 18:24:25.618571 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:25.621817 env[1216]: time="2025-03-17T18:24:25.621768352Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 17 18:24:25.645983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037705619.mount: Deactivated successfully. Mar 17 18:24:25.648839 env[1216]: time="2025-03-17T18:24:25.648796422Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\"" Mar 17 18:24:25.651627 env[1216]: time="2025-03-17T18:24:25.651586489Z" level=info msg="StartContainer for \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\"" Mar 17 18:24:25.673084 systemd[1]: Started cri-containerd-2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad.scope. Mar 17 18:24:25.712331 env[1216]: time="2025-03-17T18:24:25.712275759Z" level=info msg="StartContainer for \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\" returns successfully" Mar 17 18:24:25.728606 systemd[1]: cri-containerd-2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad.scope: Deactivated successfully. 
Mar 17 18:24:25.749817 env[1216]: time="2025-03-17T18:24:25.749762579Z" level=info msg="shim disconnected" id=2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad Mar 17 18:24:25.749817 env[1216]: time="2025-03-17T18:24:25.749812099Z" level=warning msg="cleaning up after shim disconnected" id=2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad namespace=k8s.io Mar 17 18:24:25.749817 env[1216]: time="2025-03-17T18:24:25.749822979Z" level=info msg="cleaning up dead shim" Mar 17 18:24:25.756611 env[1216]: time="2025-03-17T18:24:25.756556467Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2471 runtime=io.containerd.runc.v2\n" Mar 17 18:24:26.623304 kubelet[1914]: E0317 18:24:26.622491 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:26.624755 env[1216]: time="2025-03-17T18:24:26.624449014Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 17 18:24:26.640165 env[1216]: time="2025-03-17T18:24:26.639559626Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\"" Mar 17 18:24:26.645814 env[1216]: time="2025-03-17T18:24:26.645761397Z" level=info msg="StartContainer for \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\"" Mar 17 18:24:26.662777 systemd[1]: Started cri-containerd-186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824.scope. 
Mar 17 18:24:26.697047 systemd[1]: cri-containerd-186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824.scope: Deactivated successfully. Mar 17 18:24:26.698112 env[1216]: time="2025-03-17T18:24:26.697906080Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbff794a9_06d2_4001_8e11_4beec7a745fc.slice/cri-containerd-186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824.scope/memory.events\": no such file or directory" Mar 17 18:24:26.706863 env[1216]: time="2025-03-17T18:24:26.706811199Z" level=info msg="StartContainer for \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\" returns successfully" Mar 17 18:24:26.726455 env[1216]: time="2025-03-17T18:24:26.726400030Z" level=info msg="shim disconnected" id=186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824 Mar 17 18:24:26.726455 env[1216]: time="2025-03-17T18:24:26.726445749Z" level=warning msg="cleaning up after shim disconnected" id=186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824 namespace=k8s.io Mar 17 18:24:26.726455 env[1216]: time="2025-03-17T18:24:26.726455669Z" level=info msg="cleaning up dead shim" Mar 17 18:24:26.733841 env[1216]: time="2025-03-17T18:24:26.733794596Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:24:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2524 runtime=io.containerd.runc.v2\n" Mar 17 18:24:27.287257 systemd[1]: run-containerd-runc-k8s.io-186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824-runc.CBX4ek.mount: Deactivated successfully. Mar 17 18:24:27.287350 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824-rootfs.mount: Deactivated successfully. 
Mar 17 18:24:27.627526 kubelet[1914]: E0317 18:24:27.627433 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:27.637840 env[1216]: time="2025-03-17T18:24:27.637212894Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 17 18:24:27.655328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907133907.mount: Deactivated successfully. Mar 17 18:24:27.659341 env[1216]: time="2025-03-17T18:24:27.659291718Z" level=info msg="CreateContainer within sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\"" Mar 17 18:24:27.662865 env[1216]: time="2025-03-17T18:24:27.661778587Z" level=info msg="StartContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\"" Mar 17 18:24:27.675894 systemd[1]: Started cri-containerd-a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9.scope. Mar 17 18:24:27.729198 env[1216]: time="2025-03-17T18:24:27.729146015Z" level=info msg="StartContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" returns successfully" Mar 17 18:24:27.896066 kubelet[1914]: I0317 18:24:27.895617 1914 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 18:24:27.927830 systemd[1]: Created slice kubepods-burstable-pod3624c613_85b4_4ea3_9241_0ab993e3b71d.slice. Mar 17 18:24:27.931774 systemd[1]: Created slice kubepods-burstable-poda21f414f_7d5a_4baf_8ce9_2bed63b5adbb.slice. Mar 17 18:24:28.061006 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Mar 17 18:24:28.064424 kubelet[1914]: I0317 18:24:28.064394 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3624c613-85b4-4ea3-9241-0ab993e3b71d-config-volume\") pod \"coredns-6f6b679f8f-5qr2x\" (UID: \"3624c613-85b4-4ea3-9241-0ab993e3b71d\") " pod="kube-system/coredns-6f6b679f8f-5qr2x" Mar 17 18:24:28.064531 kubelet[1914]: I0317 18:24:28.064436 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmvgh\" (UniqueName: \"kubernetes.io/projected/a21f414f-7d5a-4baf-8ce9-2bed63b5adbb-kube-api-access-nmvgh\") pod \"coredns-6f6b679f8f-jj9fl\" (UID: \"a21f414f-7d5a-4baf-8ce9-2bed63b5adbb\") " pod="kube-system/coredns-6f6b679f8f-jj9fl" Mar 17 18:24:28.064531 kubelet[1914]: I0317 18:24:28.064471 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvrl8\" (UniqueName: \"kubernetes.io/projected/3624c613-85b4-4ea3-9241-0ab993e3b71d-kube-api-access-kvrl8\") pod \"coredns-6f6b679f8f-5qr2x\" (UID: \"3624c613-85b4-4ea3-9241-0ab993e3b71d\") " pod="kube-system/coredns-6f6b679f8f-5qr2x" Mar 17 18:24:28.064531 kubelet[1914]: I0317 18:24:28.064492 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a21f414f-7d5a-4baf-8ce9-2bed63b5adbb-config-volume\") pod \"coredns-6f6b679f8f-jj9fl\" (UID: \"a21f414f-7d5a-4baf-8ce9-2bed63b5adbb\") " pod="kube-system/coredns-6f6b679f8f-jj9fl" Mar 17 18:24:28.230739 kubelet[1914]: E0317 18:24:28.230633 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:28.231587 env[1216]: time="2025-03-17T18:24:28.231547919Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-5qr2x,Uid:3624c613-85b4-4ea3-9241-0ab993e3b71d,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:28.234368 kubelet[1914]: E0317 18:24:28.234335 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:28.234784 env[1216]: time="2025-03-17T18:24:28.234745145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jj9fl,Uid:a21f414f-7d5a-4baf-8ce9-2bed63b5adbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:24:28.361992 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Mar 17 18:24:28.630701 kubelet[1914]: E0317 18:24:28.630651 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:28.648712 kubelet[1914]: I0317 18:24:28.648638 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-khwg5" podStartSLOduration=5.507402303 podStartE2EDuration="11.648620431s" podCreationTimestamp="2025-03-17 18:24:17 +0000 UTC" firstStartedPulling="2025-03-17 18:24:18.118881883 +0000 UTC m=+6.649289170" lastFinishedPulling="2025-03-17 18:24:24.260100011 +0000 UTC m=+12.790507298" observedRunningTime="2025-03-17 18:24:28.648291472 +0000 UTC m=+17.178698799" watchObservedRunningTime="2025-03-17 18:24:28.648620431 +0000 UTC m=+17.179027718" Mar 17 18:24:29.633018 kubelet[1914]: E0317 18:24:29.632798 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:30.130650 env[1216]: time="2025-03-17T18:24:30.130602966Z" level=info msg="ImageCreate event 
&ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:30.134828 env[1216]: time="2025-03-17T18:24:30.134769510Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:30.136258 env[1216]: time="2025-03-17T18:24:30.136227425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:24:30.136670 env[1216]: time="2025-03-17T18:24:30.136638663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Mar 17 18:24:30.139286 env[1216]: time="2025-03-17T18:24:30.139077174Z" level=info msg="CreateContainer within sandbox \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 17 18:24:30.156831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount882432981.mount: Deactivated successfully. 
Mar 17 18:24:30.158498 env[1216]: time="2025-03-17T18:24:30.158456941Z" level=info msg="CreateContainer within sandbox \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\"" Mar 17 18:24:30.159254 env[1216]: time="2025-03-17T18:24:30.159049499Z" level=info msg="StartContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\"" Mar 17 18:24:30.176202 systemd[1]: Started cri-containerd-dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63.scope. Mar 17 18:24:30.237446 env[1216]: time="2025-03-17T18:24:30.237398283Z" level=info msg="StartContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" returns successfully" Mar 17 18:24:30.635911 kubelet[1914]: E0317 18:24:30.635868 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:30.636441 kubelet[1914]: E0317 18:24:30.636217 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:30.650708 kubelet[1914]: I0317 18:24:30.650645 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-cnf2x" podStartSLOduration=1.8365526170000002 podStartE2EDuration="13.650632801s" podCreationTimestamp="2025-03-17 18:24:17 +0000 UTC" firstStartedPulling="2025-03-17 18:24:18.323479716 +0000 UTC m=+6.853887003" lastFinishedPulling="2025-03-17 18:24:30.13755994 +0000 UTC m=+18.667967187" observedRunningTime="2025-03-17 18:24:30.650041404 +0000 UTC m=+19.180448691" watchObservedRunningTime="2025-03-17 18:24:30.650632801 +0000 UTC m=+19.181040088" Mar 17 18:24:31.637096 kubelet[1914]: E0317 
18:24:31.637058 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:33.993795 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Mar 17 18:24:33.993910 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Mar 17 18:24:33.991514 systemd-networkd[1046]: cilium_host: Link UP Mar 17 18:24:33.991681 systemd-networkd[1046]: cilium_net: Link UP Mar 17 18:24:33.994703 systemd-networkd[1046]: cilium_net: Gained carrier Mar 17 18:24:33.994871 systemd-networkd[1046]: cilium_host: Gained carrier Mar 17 18:24:34.081717 systemd-networkd[1046]: cilium_vxlan: Link UP Mar 17 18:24:34.081723 systemd-networkd[1046]: cilium_vxlan: Gained carrier Mar 17 18:24:34.386998 kernel: NET: Registered PF_ALG protocol family Mar 17 18:24:34.480165 systemd-networkd[1046]: cilium_net: Gained IPv6LL Mar 17 18:24:34.689175 systemd-networkd[1046]: cilium_host: Gained IPv6LL Mar 17 18:24:34.990401 systemd-networkd[1046]: lxc_health: Link UP Mar 17 18:24:34.992310 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Mar 17 18:24:34.992005 systemd-networkd[1046]: lxc_health: Gained carrier Mar 17 18:24:35.344988 kernel: eth0: renamed from tmp7a9f1 Mar 17 18:24:35.352437 systemd-networkd[1046]: lxc1ca8102cd833: Link UP Mar 17 18:24:35.356060 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1ca8102cd833: link becomes ready Mar 17 18:24:35.356162 systemd-networkd[1046]: lxc1ca8102cd833: Gained carrier Mar 17 18:24:35.361434 systemd-networkd[1046]: lxc09722e3aad07: Link UP Mar 17 18:24:35.368138 kernel: eth0: renamed from tmp33174 Mar 17 18:24:35.373818 systemd-networkd[1046]: lxc09722e3aad07: Gained carrier Mar 17 18:24:35.374128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc09722e3aad07: link becomes ready Mar 17 18:24:35.712120 systemd-networkd[1046]: cilium_vxlan: Gained IPv6LL Mar 17 18:24:36.064423 kubelet[1914]: E0317 
18:24:36.064320 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:36.288149 systemd-networkd[1046]: lxc_health: Gained IPv6LL Mar 17 18:24:36.993111 systemd-networkd[1046]: lxc09722e3aad07: Gained IPv6LL Mar 17 18:24:37.249085 systemd-networkd[1046]: lxc1ca8102cd833: Gained IPv6LL Mar 17 18:24:37.600009 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:33402.service. Mar 17 18:24:37.650917 sshd[3116]: Accepted publickey for core from 10.0.0.1 port 33402 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:37.652522 sshd[3116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:37.656886 systemd-logind[1207]: New session 6 of user core. Mar 17 18:24:37.657344 systemd[1]: Started session-6.scope. Mar 17 18:24:37.804254 sshd[3116]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:37.806576 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:33402.service: Deactivated successfully. Mar 17 18:24:37.807314 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 18:24:37.808270 systemd-logind[1207]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:24:37.809128 systemd-logind[1207]: Removed session 6. Mar 17 18:24:39.015009 env[1216]: time="2025-03-17T18:24:39.014924264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:39.015336 env[1216]: time="2025-03-17T18:24:39.015012864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:39.015336 env[1216]: time="2025-03-17T18:24:39.015040264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:39.015336 env[1216]: time="2025-03-17T18:24:39.015189784Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/331744d768bed1a1bbc4cdc845c95cf4778d2c48df27f2ed054fdfbd902c26a6 pid=3151 runtime=io.containerd.runc.v2 Mar 17 18:24:39.016430 env[1216]: time="2025-03-17T18:24:39.016375301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:24:39.016430 env[1216]: time="2025-03-17T18:24:39.016407221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:24:39.016603 env[1216]: time="2025-03-17T18:24:39.016417300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:24:39.016806 env[1216]: time="2025-03-17T18:24:39.016775140Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7a9f1917de51e3b8047061509cb119eeb6e2a679a0041f0d9434498eada148eb pid=3160 runtime=io.containerd.runc.v2 Mar 17 18:24:39.037228 systemd[1]: Started cri-containerd-331744d768bed1a1bbc4cdc845c95cf4778d2c48df27f2ed054fdfbd902c26a6.scope. Mar 17 18:24:39.044703 systemd[1]: Started cri-containerd-7a9f1917de51e3b8047061509cb119eeb6e2a679a0041f0d9434498eada148eb.scope. 
Mar 17 18:24:39.095200 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:24:39.098377 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:24:39.116011 env[1216]: time="2025-03-17T18:24:39.115941641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5qr2x,Uid:3624c613-85b4-4ea3-9241-0ab993e3b71d,Namespace:kube-system,Attempt:0,} returns sandbox id \"331744d768bed1a1bbc4cdc845c95cf4778d2c48df27f2ed054fdfbd902c26a6\"" Mar 17 18:24:39.116959 kubelet[1914]: E0317 18:24:39.116708 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.118682 env[1216]: time="2025-03-17T18:24:39.118642074Z" level=info msg="CreateContainer within sandbox \"331744d768bed1a1bbc4cdc845c95cf4778d2c48df27f2ed054fdfbd902c26a6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:24:39.125312 env[1216]: time="2025-03-17T18:24:39.124800658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-jj9fl,Uid:a21f414f-7d5a-4baf-8ce9-2bed63b5adbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a9f1917de51e3b8047061509cb119eeb6e2a679a0041f0d9434498eada148eb\"" Mar 17 18:24:39.125707 kubelet[1914]: E0317 18:24:39.125685 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.127842 env[1216]: time="2025-03-17T18:24:39.127802850Z" level=info msg="CreateContainer within sandbox \"7a9f1917de51e3b8047061509cb119eeb6e2a679a0041f0d9434498eada148eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:24:39.134015 env[1216]: time="2025-03-17T18:24:39.133575755Z" level=info msg="CreateContainer 
within sandbox \"331744d768bed1a1bbc4cdc845c95cf4778d2c48df27f2ed054fdfbd902c26a6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42d3aa0cc0dc3e507f774fccf0c4b995bfd4dd2ed0aa8fb9cb75ab27e5a74a7e\"" Mar 17 18:24:39.135089 env[1216]: time="2025-03-17T18:24:39.134940231Z" level=info msg="StartContainer for \"42d3aa0cc0dc3e507f774fccf0c4b995bfd4dd2ed0aa8fb9cb75ab27e5a74a7e\"" Mar 17 18:24:39.147697 env[1216]: time="2025-03-17T18:24:39.147569158Z" level=info msg="CreateContainer within sandbox \"7a9f1917de51e3b8047061509cb119eeb6e2a679a0041f0d9434498eada148eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dabda6a8cefa7c0bb06c7fc9198429be3d77a2ccdf9a27934107a8b4ff835c2c\"" Mar 17 18:24:39.148594 env[1216]: time="2025-03-17T18:24:39.148560475Z" level=info msg="StartContainer for \"dabda6a8cefa7c0bb06c7fc9198429be3d77a2ccdf9a27934107a8b4ff835c2c\"" Mar 17 18:24:39.158109 systemd[1]: Started cri-containerd-42d3aa0cc0dc3e507f774fccf0c4b995bfd4dd2ed0aa8fb9cb75ab27e5a74a7e.scope. Mar 17 18:24:39.176165 systemd[1]: Started cri-containerd-dabda6a8cefa7c0bb06c7fc9198429be3d77a2ccdf9a27934107a8b4ff835c2c.scope. 
Mar 17 18:24:39.224423 env[1216]: time="2025-03-17T18:24:39.224041878Z" level=info msg="StartContainer for \"42d3aa0cc0dc3e507f774fccf0c4b995bfd4dd2ed0aa8fb9cb75ab27e5a74a7e\" returns successfully" Mar 17 18:24:39.226917 env[1216]: time="2025-03-17T18:24:39.226153673Z" level=info msg="StartContainer for \"dabda6a8cefa7c0bb06c7fc9198429be3d77a2ccdf9a27934107a8b4ff835c2c\" returns successfully" Mar 17 18:24:39.318760 kubelet[1914]: I0317 18:24:39.318646 1914 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:24:39.319155 kubelet[1914]: E0317 18:24:39.319133 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.656191 kubelet[1914]: E0317 18:24:39.656160 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.661274 kubelet[1914]: E0317 18:24:39.661204 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.661274 kubelet[1914]: E0317 18:24:39.661220 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:39.682180 kubelet[1914]: I0317 18:24:39.682120 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5qr2x" podStartSLOduration=22.682103522 podStartE2EDuration="22.682103522s" podCreationTimestamp="2025-03-17 18:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:24:39.670137474 +0000 UTC m=+28.200544761" watchObservedRunningTime="2025-03-17 
18:24:39.682103522 +0000 UTC m=+28.212510809" Mar 17 18:24:40.662343 kubelet[1914]: E0317 18:24:40.662309 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:41.663318 kubelet[1914]: E0317 18:24:41.663285 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:24:42.808996 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:38206.service. Mar 17 18:24:42.854838 sshd[3304]: Accepted publickey for core from 10.0.0.1 port 38206 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:42.856749 sshd[3304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:42.860526 systemd-logind[1207]: New session 7 of user core. Mar 17 18:24:42.861536 systemd[1]: Started session-7.scope. Mar 17 18:24:42.983080 sshd[3304]: pam_unix(sshd:session): session closed for user core Mar 17 18:24:42.985533 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:38206.service: Deactivated successfully. Mar 17 18:24:42.986331 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:24:42.986821 systemd-logind[1207]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:24:42.987529 systemd-logind[1207]: Removed session 7. Mar 17 18:24:47.987623 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:38214.service. Mar 17 18:24:48.033412 sshd[3318]: Accepted publickey for core from 10.0.0.1 port 38214 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:24:48.034825 sshd[3318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:24:48.038376 systemd-logind[1207]: New session 8 of user core. Mar 17 18:24:48.038724 systemd[1]: Started session-8.scope. 
Mar 17 18:24:48.154122 sshd[3318]: pam_unix(sshd:session): session closed for user core
Mar 17 18:24:48.157377 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:38214.service: Deactivated successfully.
Mar 17 18:24:48.158174 systemd[1]: session-8.scope: Deactivated successfully.
Mar 17 18:24:48.158812 systemd-logind[1207]: Session 8 logged out. Waiting for processes to exit.
Mar 17 18:24:48.159866 systemd-logind[1207]: Removed session 8.
Mar 17 18:24:48.236116 kubelet[1914]: E0317 18:24:48.236079 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:24:48.253605 kubelet[1914]: I0317 18:24:48.253473 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-jj9fl" podStartSLOduration=31.253458 podStartE2EDuration="31.253458s" podCreationTimestamp="2025-03-17 18:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:24:39.691722377 +0000 UTC m=+28.222129624" watchObservedRunningTime="2025-03-17 18:24:48.253458 +0000 UTC m=+36.783865287"
Mar 17 18:24:48.676020 kubelet[1914]: E0317 18:24:48.675989 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:24:53.158756 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:43774.service.
Mar 17 18:24:53.206555 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 43774 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:24:53.208266 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:24:53.211702 systemd-logind[1207]: New session 9 of user core.
Mar 17 18:24:53.212554 systemd[1]: Started session-9.scope.
Mar 17 18:24:53.320564 sshd[3342]: pam_unix(sshd:session): session closed for user core
Mar 17 18:24:53.323591 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:43774.service: Deactivated successfully.
Mar 17 18:24:53.324282 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 18:24:53.324803 systemd-logind[1207]: Session 9 logged out. Waiting for processes to exit.
Mar 17 18:24:53.325898 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:43776.service.
Mar 17 18:24:53.326537 systemd-logind[1207]: Removed session 9.
Mar 17 18:24:53.368687 sshd[3357]: Accepted publickey for core from 10.0.0.1 port 43776 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:24:53.369829 sshd[3357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:24:53.372917 systemd-logind[1207]: New session 10 of user core.
Mar 17 18:24:53.373713 systemd[1]: Started session-10.scope.
Mar 17 18:24:53.528413 sshd[3357]: pam_unix(sshd:session): session closed for user core
Mar 17 18:24:53.532427 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:43792.service.
Mar 17 18:24:53.537392 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:24:53.539160 systemd-logind[1207]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:24:53.539332 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:43776.service: Deactivated successfully.
Mar 17 18:24:53.540309 systemd-logind[1207]: Removed session 10.
Mar 17 18:24:53.591732 sshd[3369]: Accepted publickey for core from 10.0.0.1 port 43792 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:24:53.593429 sshd[3369]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:24:53.596954 systemd-logind[1207]: New session 11 of user core.
Mar 17 18:24:53.597817 systemd[1]: Started session-11.scope.
Mar 17 18:24:53.712154 sshd[3369]: pam_unix(sshd:session): session closed for user core
Mar 17 18:24:53.714570 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:43792.service: Deactivated successfully.
Mar 17 18:24:53.715360 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:24:53.715867 systemd-logind[1207]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:24:53.716499 systemd-logind[1207]: Removed session 11.
Mar 17 18:24:58.715416 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:43796.service.
Mar 17 18:24:58.758763 sshd[3383]: Accepted publickey for core from 10.0.0.1 port 43796 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:24:58.760079 sshd[3383]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:24:58.763520 systemd-logind[1207]: New session 12 of user core.
Mar 17 18:24:58.764441 systemd[1]: Started session-12.scope.
Mar 17 18:24:58.876006 sshd[3383]: pam_unix(sshd:session): session closed for user core
Mar 17 18:24:58.878864 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:43796.service: Deactivated successfully.
Mar 17 18:24:58.879746 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:24:58.880287 systemd-logind[1207]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:24:58.881051 systemd-logind[1207]: Removed session 12.
Mar 17 18:25:03.879991 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:39476.service.
Mar 17 18:25:03.922851 sshd[3398]: Accepted publickey for core from 10.0.0.1 port 39476 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:03.923945 sshd[3398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:03.927612 systemd-logind[1207]: New session 13 of user core.
Mar 17 18:25:03.927805 systemd[1]: Started session-13.scope.
Mar 17 18:25:04.040138 sshd[3398]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:04.043014 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:39476.service: Deactivated successfully.
Mar 17 18:25:04.043677 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 18:25:04.044187 systemd-logind[1207]: Session 13 logged out. Waiting for processes to exit.
Mar 17 18:25:04.045242 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:39482.service.
Mar 17 18:25:04.045784 systemd-logind[1207]: Removed session 13.
Mar 17 18:25:04.089091 sshd[3412]: Accepted publickey for core from 10.0.0.1 port 39482 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:04.090238 sshd[3412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:04.093356 systemd-logind[1207]: New session 14 of user core.
Mar 17 18:25:04.094144 systemd[1]: Started session-14.scope.
Mar 17 18:25:04.304314 sshd[3412]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:04.307131 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:39482.service: Deactivated successfully.
Mar 17 18:25:04.307792 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 18:25:04.308293 systemd-logind[1207]: Session 14 logged out. Waiting for processes to exit.
Mar 17 18:25:04.309322 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:39484.service.
Mar 17 18:25:04.309949 systemd-logind[1207]: Removed session 14.
Mar 17 18:25:04.354797 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 39484 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:04.356121 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:04.359569 systemd-logind[1207]: New session 15 of user core.
Mar 17 18:25:04.360417 systemd[1]: Started session-15.scope.
Mar 17 18:25:05.570054 sshd[3423]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:05.573587 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:39498.service.
Mar 17 18:25:05.574217 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:39484.service: Deactivated successfully.
Mar 17 18:25:05.575332 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 18:25:05.575907 systemd-logind[1207]: Session 15 logged out. Waiting for processes to exit.
Mar 17 18:25:05.577741 systemd-logind[1207]: Removed session 15.
Mar 17 18:25:05.620652 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 39498 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:05.622142 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:05.625600 systemd-logind[1207]: New session 16 of user core.
Mar 17 18:25:05.626499 systemd[1]: Started session-16.scope.
Mar 17 18:25:05.877516 sshd[3441]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:05.879690 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:39502.service.
Mar 17 18:25:05.881269 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:39498.service: Deactivated successfully.
Mar 17 18:25:05.881920 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 18:25:05.882799 systemd-logind[1207]: Session 16 logged out. Waiting for processes to exit.
Mar 17 18:25:05.883898 systemd-logind[1207]: Removed session 16.
Mar 17 18:25:05.927225 sshd[3455]: Accepted publickey for core from 10.0.0.1 port 39502 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:05.928598 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:05.932805 systemd-logind[1207]: New session 17 of user core.
Mar 17 18:25:05.933027 systemd[1]: Started session-17.scope.
Mar 17 18:25:06.041281 sshd[3455]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:06.043612 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:39502.service: Deactivated successfully.
Mar 17 18:25:06.044325 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 18:25:06.044886 systemd-logind[1207]: Session 17 logged out. Waiting for processes to exit.
Mar 17 18:25:06.045688 systemd-logind[1207]: Removed session 17.
Mar 17 18:25:11.046549 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:39510.service.
Mar 17 18:25:11.090311 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 39510 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:11.091787 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:11.095270 systemd-logind[1207]: New session 18 of user core.
Mar 17 18:25:11.096075 systemd[1]: Started session-18.scope.
Mar 17 18:25:11.206568 sshd[3474]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:11.210403 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:39510.service: Deactivated successfully.
Mar 17 18:25:11.211175 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 18:25:11.211904 systemd-logind[1207]: Session 18 logged out. Waiting for processes to exit.
Mar 17 18:25:11.213149 systemd-logind[1207]: Removed session 18.
Mar 17 18:25:16.210351 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:48048.service.
Mar 17 18:25:16.254650 sshd[3489]: Accepted publickey for core from 10.0.0.1 port 48048 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:16.256194 sshd[3489]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:16.259377 systemd-logind[1207]: New session 19 of user core.
Mar 17 18:25:16.260160 systemd[1]: Started session-19.scope.
Mar 17 18:25:16.363921 sshd[3489]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:16.366193 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:48048.service: Deactivated successfully.
Mar 17 18:25:16.366890 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 18:25:16.367387 systemd-logind[1207]: Session 19 logged out. Waiting for processes to exit.
Mar 17 18:25:16.368144 systemd-logind[1207]: Removed session 19.
Mar 17 18:25:21.369000 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:48060.service.
Mar 17 18:25:21.412209 sshd[3504]: Accepted publickey for core from 10.0.0.1 port 48060 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:21.413479 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:21.418029 systemd-logind[1207]: New session 20 of user core.
Mar 17 18:25:21.418703 systemd[1]: Started session-20.scope.
Mar 17 18:25:21.521073 sshd[3504]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:21.523886 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:48060.service: Deactivated successfully.
Mar 17 18:25:21.524445 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 18:25:21.524949 systemd-logind[1207]: Session 20 logged out. Waiting for processes to exit.
Mar 17 18:25:21.526004 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:48066.service.
Mar 17 18:25:21.526621 systemd-logind[1207]: Removed session 20.
Mar 17 18:25:21.569746 sshd[3517]: Accepted publickey for core from 10.0.0.1 port 48066 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:25:21.571085 sshd[3517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:25:21.574119 systemd-logind[1207]: New session 21 of user core.
Mar 17 18:25:21.575008 systemd[1]: Started session-21.scope.
Mar 17 18:25:24.090355 env[1216]: time="2025-03-17T18:25:24.090296017Z" level=info msg="StopContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" with timeout 30 (s)"
Mar 17 18:25:24.091926 env[1216]: time="2025-03-17T18:25:24.091892942Z" level=info msg="Stop container \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" with signal terminated"
Mar 17 18:25:24.100864 systemd[1]: run-containerd-runc-k8s.io-a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9-runc.YR4ajA.mount: Deactivated successfully.
Mar 17 18:25:24.116450 systemd[1]: cri-containerd-dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63.scope: Deactivated successfully.
Mar 17 18:25:24.138285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63-rootfs.mount: Deactivated successfully.
Mar 17 18:25:24.144057 env[1216]: time="2025-03-17T18:25:24.143995584Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 17 18:25:24.145314 env[1216]: time="2025-03-17T18:25:24.145269339Z" level=info msg="shim disconnected" id=dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63
Mar 17 18:25:24.145377 env[1216]: time="2025-03-17T18:25:24.145316380Z" level=warning msg="cleaning up after shim disconnected" id=dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63 namespace=k8s.io
Mar 17 18:25:24.145377 env[1216]: time="2025-03-17T18:25:24.145326700Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:24.149238 env[1216]: time="2025-03-17T18:25:24.149200568Z" level=info msg="StopContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" with timeout 2 (s)"
Mar 17 18:25:24.149460 env[1216]: time="2025-03-17T18:25:24.149431094Z" level=info msg="Stop container \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" with signal terminated"
Mar 17 18:25:24.153233 env[1216]: time="2025-03-17T18:25:24.153176598Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3564 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:24.154780 systemd-networkd[1046]: lxc_health: Link DOWN
Mar 17 18:25:24.154788 systemd-networkd[1046]: lxc_health: Lost carrier
Mar 17 18:25:24.157410 env[1216]: time="2025-03-17T18:25:24.157350353Z" level=info msg="StopContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" returns successfully"
Mar 17 18:25:24.158049 env[1216]: time="2025-03-17T18:25:24.158019292Z" level=info msg="StopPodSandbox for \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\""
Mar 17 18:25:24.158104 env[1216]: time="2025-03-17T18:25:24.158083013Z" level=info msg="Container to stop \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.159914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d-shm.mount: Deactivated successfully.
Mar 17 18:25:24.167222 systemd[1]: cri-containerd-96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d.scope: Deactivated successfully.
Mar 17 18:25:24.184729 systemd[1]: cri-containerd-a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9.scope: Deactivated successfully.
Mar 17 18:25:24.185088 systemd[1]: cri-containerd-a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9.scope: Consumed 6.584s CPU time.
Mar 17 18:25:24.186781 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d-rootfs.mount: Deactivated successfully.
Mar 17 18:25:24.198811 env[1216]: time="2025-03-17T18:25:24.198757299Z" level=info msg="shim disconnected" id=96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d
Mar 17 18:25:24.198811 env[1216]: time="2025-03-17T18:25:24.198804220Z" level=warning msg="cleaning up after shim disconnected" id=96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d namespace=k8s.io
Mar 17 18:25:24.198811 env[1216]: time="2025-03-17T18:25:24.198813701Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:24.205538 env[1216]: time="2025-03-17T18:25:24.205495046Z" level=info msg="shim disconnected" id=a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9
Mar 17 18:25:24.205698 env[1216]: time="2025-03-17T18:25:24.205539527Z" level=warning msg="cleaning up after shim disconnected" id=a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9 namespace=k8s.io
Mar 17 18:25:24.205698 env[1216]: time="2025-03-17T18:25:24.205564927Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:24.208769 env[1216]: time="2025-03-17T18:25:24.208733575Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3616 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:24.209203 env[1216]: time="2025-03-17T18:25:24.209123786Z" level=info msg="TearDown network for sandbox \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\" successfully"
Mar 17 18:25:24.209203 env[1216]: time="2025-03-17T18:25:24.209177627Z" level=info msg="StopPodSandbox for \"96f623f95c5ca9c5b144c695a07a1bbc0d00d97470826c5d916b462b0061cd8d\" returns successfully"
Mar 17 18:25:24.220516 env[1216]: time="2025-03-17T18:25:24.220479860Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3628 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:24.223407 env[1216]: time="2025-03-17T18:25:24.223357300Z" level=info msg="StopContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" returns successfully"
Mar 17 18:25:24.223782 env[1216]: time="2025-03-17T18:25:24.223751071Z" level=info msg="StopPodSandbox for \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\""
Mar 17 18:25:24.223913 env[1216]: time="2025-03-17T18:25:24.223890395Z" level=info msg="Container to stop \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.224013 env[1216]: time="2025-03-17T18:25:24.223994118Z" level=info msg="Container to stop \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.224104 env[1216]: time="2025-03-17T18:25:24.224086280Z" level=info msg="Container to stop \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.224169 env[1216]: time="2025-03-17T18:25:24.224152482Z" level=info msg="Container to stop \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.224231 env[1216]: time="2025-03-17T18:25:24.224214364Z" level=info msg="Container to stop \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 17 18:25:24.229254 systemd[1]: cri-containerd-fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224.scope: Deactivated successfully.
Mar 17 18:25:24.272242 env[1216]: time="2025-03-17T18:25:24.271694238Z" level=info msg="shim disconnected" id=fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224
Mar 17 18:25:24.272242 env[1216]: time="2025-03-17T18:25:24.272235093Z" level=warning msg="cleaning up after shim disconnected" id=fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224 namespace=k8s.io
Mar 17 18:25:24.272242 env[1216]: time="2025-03-17T18:25:24.272247173Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:24.279833 env[1216]: time="2025-03-17T18:25:24.279775941Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3658 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:24.280153 env[1216]: time="2025-03-17T18:25:24.280118671Z" level=info msg="TearDown network for sandbox \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" successfully"
Mar 17 18:25:24.280153 env[1216]: time="2025-03-17T18:25:24.280147112Z" level=info msg="StopPodSandbox for \"fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224\" returns successfully"
Mar 17 18:25:24.324348 kubelet[1914]: I0317 18:25:24.324302 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdsf4\" (UniqueName: \"kubernetes.io/projected/25ec6b0f-1b72-4713-bf49-030804f057f0-kube-api-access-pdsf4\") pod \"25ec6b0f-1b72-4713-bf49-030804f057f0\" (UID: \"25ec6b0f-1b72-4713-bf49-030804f057f0\") "
Mar 17 18:25:24.324748 kubelet[1914]: I0317 18:25:24.324366 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ec6b0f-1b72-4713-bf49-030804f057f0-cilium-config-path\") pod \"25ec6b0f-1b72-4713-bf49-030804f057f0\" (UID: \"25ec6b0f-1b72-4713-bf49-030804f057f0\") "
Mar 17 18:25:24.327745 kubelet[1914]: I0317 18:25:24.327688 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25ec6b0f-1b72-4713-bf49-030804f057f0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "25ec6b0f-1b72-4713-bf49-030804f057f0" (UID: "25ec6b0f-1b72-4713-bf49-030804f057f0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:25:24.328785 kubelet[1914]: I0317 18:25:24.328739 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25ec6b0f-1b72-4713-bf49-030804f057f0-kube-api-access-pdsf4" (OuterVolumeSpecName: "kube-api-access-pdsf4") pod "25ec6b0f-1b72-4713-bf49-030804f057f0" (UID: "25ec6b0f-1b72-4713-bf49-030804f057f0"). InnerVolumeSpecName "kube-api-access-pdsf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425120 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cni-path\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425160 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-etc-cni-netd\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425183 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-hubble-tls\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425198 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-hostproc\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425216 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts95p\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-kube-api-access-ts95p\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.425892 kubelet[1914]: I0317 18:25:24.425231 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-cgroup\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425248 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bff794a9-06d2-4001-8e11-4beec7a745fc-clustermesh-secrets\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425263 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-run\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425280 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-config-path\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425295 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-xtables-lock\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425311 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-lib-modules\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426171 kubelet[1914]: I0317 18:25:24.425347 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-kernel\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425362 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-bpf-maps\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425375 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-net\") pod \"bff794a9-06d2-4001-8e11-4beec7a745fc\" (UID: \"bff794a9-06d2-4001-8e11-4beec7a745fc\") "
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425412 1914 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pdsf4\" (UniqueName: \"kubernetes.io/projected/25ec6b0f-1b72-4713-bf49-030804f057f0-kube-api-access-pdsf4\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425424 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/25ec6b0f-1b72-4713-bf49-030804f057f0-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425474 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.426303 kubelet[1914]: I0317 18:25:24.425507 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cni-path" (OuterVolumeSpecName: "cni-path") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.426437 kubelet[1914]: I0317 18:25:24.425520 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.426437 kubelet[1914]: I0317 18:25:24.426054 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.426437 kubelet[1914]: I0317 18:25:24.426091 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-hostproc" (OuterVolumeSpecName: "hostproc") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.427973 kubelet[1914]: I0317 18:25:24.427933 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.428095 kubelet[1914]: I0317 18:25:24.428014 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.428154 kubelet[1914]: I0317 18:25:24.428103 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.428154 kubelet[1914]: I0317 18:25:24.428130 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.428522 kubelet[1914]: I0317 18:25:24.428486 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-kube-api-access-ts95p" (OuterVolumeSpecName: "kube-api-access-ts95p") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "kube-api-access-ts95p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:25:24.428583 kubelet[1914]: I0317 18:25:24.428532 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:24.429827 kubelet[1914]: I0317 18:25:24.429798 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:25:24.429948 kubelet[1914]: I0317 18:25:24.429802 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:25:24.430917 kubelet[1914]: I0317 18:25:24.430870 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bff794a9-06d2-4001-8e11-4beec7a745fc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bff794a9-06d2-4001-8e11-4beec7a745fc" (UID: "bff794a9-06d2-4001-8e11-4beec7a745fc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526539 1914 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526579 1914 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526588 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526598 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName:
\"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526607 1914 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526615 1914 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.526601 kubelet[1914]: I0317 18:25:24.526624 1914 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526631 1914 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526655 1914 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526664 1914 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ts95p\" (UniqueName: \"kubernetes.io/projected/bff794a9-06d2-4001-8e11-4beec7a745fc-kube-api-access-ts95p\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526672 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 17 
18:25:24.527005 kubelet[1914]: I0317 18:25:24.526680 1914 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bff794a9-06d2-4001-8e11-4beec7a745fc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526687 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.527005 kubelet[1914]: I0317 18:25:24.526694 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bff794a9-06d2-4001-8e11-4beec7a745fc-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 17 18:25:24.742694 kubelet[1914]: I0317 18:25:24.742567 1914 scope.go:117] "RemoveContainer" containerID="dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63" Mar 17 18:25:24.745831 systemd[1]: Removed slice kubepods-besteffort-pod25ec6b0f_1b72_4713_bf49_030804f057f0.slice. Mar 17 18:25:24.749407 env[1216]: time="2025-03-17T18:25:24.749362817Z" level=info msg="RemoveContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\"" Mar 17 18:25:24.753938 env[1216]: time="2025-03-17T18:25:24.753894142Z" level=info msg="RemoveContainer for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" returns successfully" Mar 17 18:25:24.755202 kubelet[1914]: I0317 18:25:24.754636 1914 scope.go:117] "RemoveContainer" containerID="dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63" Mar 17 18:25:24.755173 systemd[1]: Removed slice kubepods-burstable-podbff794a9_06d2_4001_8e11_4beec7a745fc.slice. Mar 17 18:25:24.755258 systemd[1]: kubepods-burstable-podbff794a9_06d2_4001_8e11_4beec7a745fc.slice: Consumed 6.815s CPU time. 
Mar 17 18:25:24.756695 env[1216]: time="2025-03-17T18:25:24.756127604Z" level=error msg="ContainerStatus for \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\": not found" Mar 17 18:25:24.756770 kubelet[1914]: E0317 18:25:24.756317 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\": not found" containerID="dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63" Mar 17 18:25:24.756770 kubelet[1914]: I0317 18:25:24.756342 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63"} err="failed to get container status \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\": rpc error: code = NotFound desc = an error occurred when try to find container \"dbc4b24085a987a9ed894e172600f367fd865d27f29f1f7c49ba18f72ac25d63\": not found" Mar 17 18:25:24.756770 kubelet[1914]: I0317 18:25:24.756525 1914 scope.go:117] "RemoveContainer" containerID="a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9" Mar 17 18:25:24.757595 env[1216]: time="2025-03-17T18:25:24.757561764Z" level=info msg="RemoveContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\"" Mar 17 18:25:24.761137 env[1216]: time="2025-03-17T18:25:24.760437164Z" level=info msg="RemoveContainer for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" returns successfully" Mar 17 18:25:24.761954 kubelet[1914]: I0317 18:25:24.761925 1914 scope.go:117] "RemoveContainer" containerID="186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824" Mar 17 18:25:24.767001 env[1216]: 
time="2025-03-17T18:25:24.766944464Z" level=info msg="RemoveContainer for \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\"" Mar 17 18:25:24.772626 env[1216]: time="2025-03-17T18:25:24.772581540Z" level=info msg="RemoveContainer for \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\" returns successfully" Mar 17 18:25:24.772842 kubelet[1914]: I0317 18:25:24.772816 1914 scope.go:117] "RemoveContainer" containerID="2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad" Mar 17 18:25:24.776614 env[1216]: time="2025-03-17T18:25:24.776088517Z" level=info msg="RemoveContainer for \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\"" Mar 17 18:25:24.778822 env[1216]: time="2025-03-17T18:25:24.778779871Z" level=info msg="RemoveContainer for \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\" returns successfully" Mar 17 18:25:24.778983 kubelet[1914]: I0317 18:25:24.778949 1914 scope.go:117] "RemoveContainer" containerID="584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f" Mar 17 18:25:24.779900 env[1216]: time="2025-03-17T18:25:24.779863981Z" level=info msg="RemoveContainer for \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\"" Mar 17 18:25:24.781943 env[1216]: time="2025-03-17T18:25:24.781908118Z" level=info msg="RemoveContainer for \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\" returns successfully" Mar 17 18:25:24.782123 kubelet[1914]: I0317 18:25:24.782090 1914 scope.go:117] "RemoveContainer" containerID="72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237" Mar 17 18:25:24.782971 env[1216]: time="2025-03-17T18:25:24.782934106Z" level=info msg="RemoveContainer for \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\"" Mar 17 18:25:24.784976 env[1216]: time="2025-03-17T18:25:24.784942562Z" level=info msg="RemoveContainer for \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\" returns 
successfully" Mar 17 18:25:24.785131 kubelet[1914]: I0317 18:25:24.785103 1914 scope.go:117] "RemoveContainer" containerID="a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9" Mar 17 18:25:24.785334 env[1216]: time="2025-03-17T18:25:24.785272771Z" level=error msg="ContainerStatus for \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\": not found" Mar 17 18:25:24.785472 kubelet[1914]: E0317 18:25:24.785443 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\": not found" containerID="a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9" Mar 17 18:25:24.785532 kubelet[1914]: I0317 18:25:24.785478 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9"} err="failed to get container status \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9\": not found" Mar 17 18:25:24.785532 kubelet[1914]: I0317 18:25:24.785508 1914 scope.go:117] "RemoveContainer" containerID="186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824" Mar 17 18:25:24.785740 env[1216]: time="2025-03-17T18:25:24.785685902Z" level=error msg="ContainerStatus for \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\": not found" Mar 17 18:25:24.785848 kubelet[1914]: E0317 18:25:24.785828 
1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\": not found" containerID="186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824" Mar 17 18:25:24.785888 kubelet[1914]: I0317 18:25:24.785856 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824"} err="failed to get container status \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\": rpc error: code = NotFound desc = an error occurred when try to find container \"186e7aa9604718e693a599d9919c4b56bca337071d3e026b07f30faec6e17824\": not found" Mar 17 18:25:24.785917 kubelet[1914]: I0317 18:25:24.785891 1914 scope.go:117] "RemoveContainer" containerID="2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad" Mar 17 18:25:24.786097 env[1216]: time="2025-03-17T18:25:24.786048792Z" level=error msg="ContainerStatus for \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\": not found" Mar 17 18:25:24.786206 kubelet[1914]: E0317 18:25:24.786184 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\": not found" containerID="2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad" Mar 17 18:25:24.786235 kubelet[1914]: I0317 18:25:24.786213 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad"} err="failed to get container status 
\"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d1b3639b0e7e8b75c5bbf19dc456777d8523178fcfc7f7ec7bc5ade2c5feaad\": not found" Mar 17 18:25:24.786235 kubelet[1914]: I0317 18:25:24.786231 1914 scope.go:117] "RemoveContainer" containerID="584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f" Mar 17 18:25:24.786420 env[1216]: time="2025-03-17T18:25:24.786378961Z" level=error msg="ContainerStatus for \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\": not found" Mar 17 18:25:24.786510 kubelet[1914]: E0317 18:25:24.786495 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\": not found" containerID="584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f" Mar 17 18:25:24.786535 kubelet[1914]: I0317 18:25:24.786517 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f"} err="failed to get container status \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\": rpc error: code = NotFound desc = an error occurred when try to find container \"584a381c89c31a7f7bd4b825dbf24da24fce0f1669fcf8ccccdd4bd67028c38f\": not found" Mar 17 18:25:24.786579 kubelet[1914]: I0317 18:25:24.786543 1914 scope.go:117] "RemoveContainer" containerID="72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237" Mar 17 18:25:24.786751 env[1216]: time="2025-03-17T18:25:24.786711251Z" level=error msg="ContainerStatus for \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\" failed" error="rpc 
error: code = NotFound desc = an error occurred when try to find container \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\": not found" Mar 17 18:25:24.786829 kubelet[1914]: E0317 18:25:24.786813 1914 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\": not found" containerID="72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237" Mar 17 18:25:24.786855 kubelet[1914]: I0317 18:25:24.786834 1914 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237"} err="failed to get container status \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\": rpc error: code = NotFound desc = an error occurred when try to find container \"72a37e5f0e97b11723c16ae3952afc5d8f628354b49f9b742fcbdd90e5267237\": not found" Mar 17 18:25:25.095235 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a90a66760a86fb0fcd8fa9b851c931d54931fdc3293d5f6e31e505dba829d7b9-rootfs.mount: Deactivated successfully. Mar 17 18:25:25.095333 systemd[1]: var-lib-kubelet-pods-25ec6b0f\x2d1b72\x2d4713\x2dbf49\x2d030804f057f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpdsf4.mount: Deactivated successfully. Mar 17 18:25:25.095396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224-rootfs.mount: Deactivated successfully. Mar 17 18:25:25.095443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fa551083b408bf0071f30f700b7355384d024ee928420b80212d692c82d06224-shm.mount: Deactivated successfully. 
Mar 17 18:25:25.095491 systemd[1]: var-lib-kubelet-pods-bff794a9\x2d06d2\x2d4001\x2d8e11\x2d4beec7a745fc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dts95p.mount: Deactivated successfully. Mar 17 18:25:25.095543 systemd[1]: var-lib-kubelet-pods-bff794a9\x2d06d2\x2d4001\x2d8e11\x2d4beec7a745fc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 17 18:25:25.095590 systemd[1]: var-lib-kubelet-pods-bff794a9\x2d06d2\x2d4001\x2d8e11\x2d4beec7a745fc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 17 18:25:25.584399 kubelet[1914]: I0317 18:25:25.584291 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25ec6b0f-1b72-4713-bf49-030804f057f0" path="/var/lib/kubelet/pods/25ec6b0f-1b72-4713-bf49-030804f057f0/volumes" Mar 17 18:25:25.584726 kubelet[1914]: I0317 18:25:25.584706 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" path="/var/lib/kubelet/pods/bff794a9-06d2-4001-8e11-4beec7a745fc/volumes" Mar 17 18:25:26.051168 sshd[3517]: pam_unix(sshd:session): session closed for user core Mar 17 18:25:26.054894 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:45272.service. Mar 17 18:25:26.055406 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:48066.service: Deactivated successfully. Mar 17 18:25:26.056170 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:25:26.056370 systemd[1]: session-21.scope: Consumed 1.818s CPU time. Mar 17 18:25:26.057193 systemd-logind[1207]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:25:26.058024 systemd-logind[1207]: Removed session 21. 
Mar 17 18:25:26.102565 sshd[3676]: Accepted publickey for core from 10.0.0.1 port 45272 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:25:26.103756 sshd[3676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:25:26.107635 systemd-logind[1207]: New session 22 of user core. Mar 17 18:25:26.107873 systemd[1]: Started session-22.scope. Mar 17 18:25:26.625402 kubelet[1914]: E0317 18:25:26.625362 1914 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 17 18:25:27.270430 sshd[3676]: pam_unix(sshd:session): session closed for user core Mar 17 18:25:27.273627 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:45272.service: Deactivated successfully. Mar 17 18:25:27.274324 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:25:27.274520 systemd[1]: session-22.scope: Consumed 1.070s CPU time. Mar 17 18:25:27.275157 systemd-logind[1207]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:25:27.276654 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:45286.service. Mar 17 18:25:27.281039 systemd-logind[1207]: Removed session 22. 
Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302131 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="apply-sysctl-overwrites" Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302159 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="mount-bpf-fs" Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302166 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="cilium-agent" Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302173 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="mount-cgroup" Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302179 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="clean-cilium-state" Mar 17 18:25:27.302175 kubelet[1914]: E0317 18:25:27.302186 1914 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25ec6b0f-1b72-4713-bf49-030804f057f0" containerName="cilium-operator" Mar 17 18:25:27.302436 kubelet[1914]: I0317 18:25:27.302210 1914 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff794a9-06d2-4001-8e11-4beec7a745fc" containerName="cilium-agent" Mar 17 18:25:27.302436 kubelet[1914]: I0317 18:25:27.302216 1914 memory_manager.go:354] "RemoveStaleState removing state" podUID="25ec6b0f-1b72-4713-bf49-030804f057f0" containerName="cilium-operator" Mar 17 18:25:27.308890 systemd[1]: Created slice kubepods-burstable-podf8a3cb68_af00_4205_a6d5_4e5acd723a39.slice. 
Mar 17 18:25:27.325670 sshd[3689]: Accepted publickey for core from 10.0.0.1 port 45286 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:25:27.327326 sshd[3689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:25:27.331195 systemd-logind[1207]: New session 23 of user core. Mar 17 18:25:27.332080 systemd[1]: Started session-23.scope. Mar 17 18:25:27.443329 kubelet[1914]: I0317 18:25:27.443295 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-net\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443523 kubelet[1914]: I0317 18:25:27.443504 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crcd\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-kube-api-access-9crcd\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443617 kubelet[1914]: I0317 18:25:27.443603 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-lib-modules\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443710 kubelet[1914]: I0317 18:25:27.443695 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-clustermesh-secrets\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443809 kubelet[1914]: I0317 18:25:27.443794 1914 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-run\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443892 kubelet[1914]: I0317 18:25:27.443878 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cni-path\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.443987 kubelet[1914]: I0317 18:25:27.443954 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-xtables-lock\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444066 kubelet[1914]: I0317 18:25:27.444054 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-config-path\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444147 kubelet[1914]: I0317 18:25:27.444134 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hubble-tls\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444227 kubelet[1914]: I0317 18:25:27.444213 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-cgroup\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444304 kubelet[1914]: I0317 18:25:27.444291 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-etc-cni-netd\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444372 kubelet[1914]: I0317 18:25:27.444359 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-ipsec-secrets\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444446 kubelet[1914]: I0317 18:25:27.444434 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-bpf-maps\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444517 kubelet[1914]: I0317 18:25:27.444503 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hostproc\") pod \"cilium-sh7vn\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.444593 kubelet[1914]: I0317 18:25:27.444580 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-kernel\") pod \"cilium-sh7vn\" (UID: 
\"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") " pod="kube-system/cilium-sh7vn" Mar 17 18:25:27.456869 sshd[3689]: pam_unix(sshd:session): session closed for user core Mar 17 18:25:27.460012 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:45296.service. Mar 17 18:25:27.461550 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:45286.service: Deactivated successfully. Mar 17 18:25:27.463101 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:25:27.463410 kubelet[1914]: E0317 18:25:27.463371 1914 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-9crcd lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-sh7vn" podUID="f8a3cb68-af00-4205-a6d5-4e5acd723a39" Mar 17 18:25:27.465575 systemd-logind[1207]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:25:27.467478 systemd-logind[1207]: Removed session 23. Mar 17 18:25:27.505105 sshd[3701]: Accepted publickey for core from 10.0.0.1 port 45296 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:25:27.506335 sshd[3701]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:25:27.509383 systemd-logind[1207]: New session 24 of user core. Mar 17 18:25:27.510277 systemd[1]: Started session-24.scope. 
Mar 17 18:25:27.948820 kubelet[1914]: I0317 18:25:27.948778 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-bpf-maps\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.948820 kubelet[1914]: I0317 18:25:27.948811 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-xtables-lock\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949225 kubelet[1914]: I0317 18:25:27.948837 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-ipsec-secrets\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949225 kubelet[1914]: I0317 18:25:27.948867 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crcd\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-kube-api-access-9crcd\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949225 kubelet[1914]: I0317 18:25:27.948882 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-lib-modules\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949225 kubelet[1914]: I0317 18:25:27.948899 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-config-path\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949225 kubelet[1914]: I0317 18:25:27.948899 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.948899 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.948915 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-etc-cni-netd\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.948995 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-cgroup\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.949014 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hostproc\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.949029 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-net\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949340 kubelet[1914]: I0317 18:25:27.949048 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-kernel\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949074 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-run\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949092 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hubble-tls\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949108 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cni-path\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949127 1914 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-clustermesh-secrets\") pod \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\" (UID: \"f8a3cb68-af00-4205-a6d5-4e5acd723a39\") "
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949174 1914 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:27.949471 kubelet[1914]: I0317 18:25:27.949183 1914 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:27.949753 kubelet[1914]: I0317 18:25:27.948937 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949753 kubelet[1914]: I0317 18:25:27.948959 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949823 kubelet[1914]: I0317 18:25:27.949777 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949823 kubelet[1914]: I0317 18:25:27.949797 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hostproc" (OuterVolumeSpecName: "hostproc") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949823 kubelet[1914]: I0317 18:25:27.949810 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949899 kubelet[1914]: I0317 18:25:27.949823 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.949899 kubelet[1914]: I0317 18:25:27.949837 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.950654 kubelet[1914]: I0317 18:25:27.950601 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 17 18:25:27.950757 kubelet[1914]: I0317 18:25:27.950663 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cni-path" (OuterVolumeSpecName: "cni-path") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 17 18:25:27.952081 kubelet[1914]: I0317 18:25:27.952035 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-kube-api-access-9crcd" (OuterVolumeSpecName: "kube-api-access-9crcd") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "kube-api-access-9crcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:25:27.953028 systemd[1]: var-lib-kubelet-pods-f8a3cb68\x2daf00\x2d4205\x2da6d5\x2d4e5acd723a39-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9crcd.mount: Deactivated successfully.
Mar 17 18:25:27.954797 systemd[1]: var-lib-kubelet-pods-f8a3cb68\x2daf00\x2d4205\x2da6d5\x2d4e5acd723a39-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:25:27.954882 systemd[1]: var-lib-kubelet-pods-f8a3cb68\x2daf00\x2d4205\x2da6d5\x2d4e5acd723a39-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Mar 17 18:25:27.954938 systemd[1]: var-lib-kubelet-pods-f8a3cb68\x2daf00\x2d4205\x2da6d5\x2d4e5acd723a39-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 17 18:25:27.955265 kubelet[1914]: I0317 18:25:27.955237 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 17 18:25:27.955910 kubelet[1914]: I0317 18:25:27.955886 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:25:27.956030 kubelet[1914]: I0317 18:25:27.955918 1914 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f8a3cb68-af00-4205-a6d5-4e5acd723a39" (UID: "f8a3cb68-af00-4205-a6d5-4e5acd723a39"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 17 18:25:28.049669 kubelet[1914]: I0317 18:25:28.049634 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049669 kubelet[1914]: I0317 18:25:28.049666 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049669 kubelet[1914]: I0317 18:25:28.049675 1914 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049669 kubelet[1914]: I0317 18:25:28.049683 1914 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049701 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049711 1914 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049719 1914 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049727 1914 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049734 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049741 1914 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049753 1914 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-9crcd\" (UniqueName: \"kubernetes.io/projected/f8a3cb68-af00-4205-a6d5-4e5acd723a39-kube-api-access-9crcd\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.049886 kubelet[1914]: I0317 18:25:28.049760 1914 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8a3cb68-af00-4205-a6d5-4e5acd723a39-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.050089 kubelet[1914]: I0317 18:25:28.049767 1914 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8a3cb68-af00-4205-a6d5-4e5acd723a39-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 17 18:25:28.760795 systemd[1]: Removed slice kubepods-burstable-podf8a3cb68_af00_4205_a6d5_4e5acd723a39.slice.
Mar 17 18:25:28.799914 systemd[1]: Created slice kubepods-burstable-podf1404b6b_87e6_4810_8eda_c5e8de02297f.slice.
Mar 17 18:25:28.955577 kubelet[1914]: I0317 18:25:28.955528 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1404b6b-87e6-4810-8eda-c5e8de02297f-hubble-tls\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955577 kubelet[1914]: I0317 18:25:28.955567 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-cilium-run\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955588 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-cilium-cgroup\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955624 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-bpf-maps\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955641 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-xtables-lock\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955659 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-etc-cni-netd\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955684 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-lib-modules\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.955947 kubelet[1914]: I0317 18:25:28.955710 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-host-proc-sys-net\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955733 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f1404b6b-87e6-4810-8eda-c5e8de02297f-cilium-ipsec-secrets\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955758 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrjbl\" (UniqueName: \"kubernetes.io/projected/f1404b6b-87e6-4810-8eda-c5e8de02297f-kube-api-access-xrjbl\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955776 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-hostproc\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955793 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-cni-path\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955818 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1404b6b-87e6-4810-8eda-c5e8de02297f-clustermesh-secrets\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956155 kubelet[1914]: I0317 18:25:28.955856 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1404b6b-87e6-4810-8eda-c5e8de02297f-cilium-config-path\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:28.956308 kubelet[1914]: I0317 18:25:28.955881 1914 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1404b6b-87e6-4810-8eda-c5e8de02297f-host-proc-sys-kernel\") pod \"cilium-xwj9s\" (UID: \"f1404b6b-87e6-4810-8eda-c5e8de02297f\") " pod="kube-system/cilium-xwj9s"
Mar 17 18:25:29.103672 kubelet[1914]: E0317 18:25:29.103641 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:29.104369 env[1216]: time="2025-03-17T18:25:29.104320246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwj9s,Uid:f1404b6b-87e6-4810-8eda-c5e8de02297f,Namespace:kube-system,Attempt:0,}"
Mar 17 18:25:29.117383 env[1216]: time="2025-03-17T18:25:29.117315799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 18:25:29.117383 env[1216]: time="2025-03-17T18:25:29.117352960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 18:25:29.117383 env[1216]: time="2025-03-17T18:25:29.117363320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 18:25:29.117534 env[1216]: time="2025-03-17T18:25:29.117471122Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50 pid=3733 runtime=io.containerd.runc.v2
Mar 17 18:25:29.126939 systemd[1]: Started cri-containerd-8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50.scope.
Mar 17 18:25:29.156397 env[1216]: time="2025-03-17T18:25:29.156347977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwj9s,Uid:f1404b6b-87e6-4810-8eda-c5e8de02297f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\""
Mar 17 18:25:29.157495 kubelet[1914]: E0317 18:25:29.157007 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:29.159123 env[1216]: time="2025-03-17T18:25:29.159071203Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 17 18:25:29.168914 env[1216]: time="2025-03-17T18:25:29.168869559Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c\""
Mar 17 18:25:29.169397 env[1216]: time="2025-03-17T18:25:29.169275688Z" level=info msg="StartContainer for \"825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c\""
Mar 17 18:25:29.181989 systemd[1]: Started cri-containerd-825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c.scope.
Mar 17 18:25:29.214249 env[1216]: time="2025-03-17T18:25:29.214085366Z" level=info msg="StartContainer for \"825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c\" returns successfully"
Mar 17 18:25:29.222326 systemd[1]: cri-containerd-825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c.scope: Deactivated successfully.
Mar 17 18:25:29.247584 env[1216]: time="2025-03-17T18:25:29.247528490Z" level=info msg="shim disconnected" id=825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c
Mar 17 18:25:29.247584 env[1216]: time="2025-03-17T18:25:29.247576211Z" level=warning msg="cleaning up after shim disconnected" id=825a0006a71ec716f1e40a068db12163bf4a916bd4e6015c88eca9b4d95a643c namespace=k8s.io
Mar 17 18:25:29.247584 env[1216]: time="2025-03-17T18:25:29.247584732Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:29.253828 env[1216]: time="2025-03-17T18:25:29.253780601Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3818 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:29.584812 kubelet[1914]: I0317 18:25:29.584698 1914 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8a3cb68-af00-4205-a6d5-4e5acd723a39" path="/var/lib/kubelet/pods/f8a3cb68-af00-4205-a6d5-4e5acd723a39/volumes"
Mar 17 18:25:29.761506 kubelet[1914]: E0317 18:25:29.761475 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:29.763321 env[1216]: time="2025-03-17T18:25:29.763260334Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 17 18:25:29.772482 env[1216]: time="2025-03-17T18:25:29.772434394Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413\""
Mar 17 18:25:29.775182 env[1216]: time="2025-03-17T18:25:29.775135899Z" level=info msg="StartContainer for \"be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413\""
Mar 17 18:25:29.789595 systemd[1]: Started cri-containerd-be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413.scope.
Mar 17 18:25:29.820686 env[1216]: time="2025-03-17T18:25:29.820635314Z" level=info msg="StartContainer for \"be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413\" returns successfully"
Mar 17 18:25:29.833383 systemd[1]: cri-containerd-be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413.scope: Deactivated successfully.
Mar 17 18:25:29.858216 env[1216]: time="2025-03-17T18:25:29.858156656Z" level=info msg="shim disconnected" id=be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413
Mar 17 18:25:29.858454 env[1216]: time="2025-03-17T18:25:29.858432223Z" level=warning msg="cleaning up after shim disconnected" id=be1c91318f118f5cf6c8b365777ffec102d5fff0123dc0a1ab378d6373913413 namespace=k8s.io
Mar 17 18:25:29.858521 env[1216]: time="2025-03-17T18:25:29.858502144Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:29.869602 env[1216]: time="2025-03-17T18:25:29.869556650Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3881 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:30.765675 kubelet[1914]: E0317 18:25:30.765640 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:30.768082 env[1216]: time="2025-03-17T18:25:30.768039474Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 17 18:25:30.780251 env[1216]: time="2025-03-17T18:25:30.780210358Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16\""
Mar 17 18:25:30.780851 env[1216]: time="2025-03-17T18:25:30.780824173Z" level=info msg="StartContainer for \"3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16\""
Mar 17 18:25:30.802188 systemd[1]: Started cri-containerd-3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16.scope.
Mar 17 18:25:30.833164 env[1216]: time="2025-03-17T18:25:30.833106956Z" level=info msg="StartContainer for \"3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16\" returns successfully"
Mar 17 18:25:30.835605 systemd[1]: cri-containerd-3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16.scope: Deactivated successfully.
Mar 17 18:25:30.855346 env[1216]: time="2025-03-17T18:25:30.855285074Z" level=info msg="shim disconnected" id=3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16
Mar 17 18:25:30.855346 env[1216]: time="2025-03-17T18:25:30.855335076Z" level=warning msg="cleaning up after shim disconnected" id=3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16 namespace=k8s.io
Mar 17 18:25:30.855346 env[1216]: time="2025-03-17T18:25:30.855344836Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:30.861180 env[1216]: time="2025-03-17T18:25:30.861145091Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3939 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:31.060930 systemd[1]: run-containerd-runc-k8s.io-3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16-runc.6xpoAR.mount: Deactivated successfully.
Mar 17 18:25:31.061049 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d937d3abe4ef907bd6844d92fdff6526100cad1df8483aecc54b41c03a43f16-rootfs.mount: Deactivated successfully.
Mar 17 18:25:31.627045 kubelet[1914]: E0317 18:25:31.626996 1914 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 17 18:25:31.769253 kubelet[1914]: E0317 18:25:31.769167 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:31.776559 env[1216]: time="2025-03-17T18:25:31.776509249Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 17 18:25:31.789765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354035261.mount: Deactivated successfully.
Mar 17 18:25:31.792045 env[1216]: time="2025-03-17T18:25:31.792002761Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06\""
Mar 17 18:25:31.792652 env[1216]: time="2025-03-17T18:25:31.792624535Z" level=info msg="StartContainer for \"066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06\""
Mar 17 18:25:31.810133 systemd[1]: Started cri-containerd-066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06.scope.
Mar 17 18:25:31.836356 systemd[1]: cri-containerd-066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06.scope: Deactivated successfully.
Mar 17 18:25:31.837920 env[1216]: time="2025-03-17T18:25:31.837830484Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1404b6b_87e6_4810_8eda_c5e8de02297f.slice/cri-containerd-066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06.scope/memory.events\": no such file or directory"
Mar 17 18:25:31.839709 env[1216]: time="2025-03-17T18:25:31.839658766Z" level=info msg="StartContainer for \"066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06\" returns successfully"
Mar 17 18:25:31.857932 env[1216]: time="2025-03-17T18:25:31.857889620Z" level=info msg="shim disconnected" id=066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06
Mar 17 18:25:31.857932 env[1216]: time="2025-03-17T18:25:31.857928621Z" level=warning msg="cleaning up after shim disconnected" id=066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06 namespace=k8s.io
Mar 17 18:25:31.857932 env[1216]: time="2025-03-17T18:25:31.857937061Z" level=info msg="cleaning up dead shim"
Mar 17 18:25:31.868690 env[1216]: time="2025-03-17T18:25:31.868501822Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:25:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3995 runtime=io.containerd.runc.v2\n"
Mar 17 18:25:32.060958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-066c3af8ab0892348e567e2827f3104f4d141299434d123b35f3febf7393bd06-rootfs.mount: Deactivated successfully.
Mar 17 18:25:32.772659 kubelet[1914]: E0317 18:25:32.772611 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:32.774498 env[1216]: time="2025-03-17T18:25:32.774443837Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 17 18:25:32.785399 env[1216]: time="2025-03-17T18:25:32.785356319Z" level=info msg="CreateContainer within sandbox \"8d28013f25c12c90281aa2e670a4e9b87e39d5879c4cc1dc27e054f71d6f0e50\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f86c112d4021bb2a1a5bf324ce32e6a7208880b37ee6193f56a1ff78c443ea23\""
Mar 17 18:25:32.785888 env[1216]: time="2025-03-17T18:25:32.785846690Z" level=info msg="StartContainer for \"f86c112d4021bb2a1a5bf324ce32e6a7208880b37ee6193f56a1ff78c443ea23\""
Mar 17 18:25:32.805071 systemd[1]: Started cri-containerd-f86c112d4021bb2a1a5bf324ce32e6a7208880b37ee6193f56a1ff78c443ea23.scope.
Mar 17 18:25:32.850362 kubelet[1914]: I0317 18:25:32.850305 1914 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-03-17T18:25:32Z","lastTransitionTime":"2025-03-17T18:25:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 17 18:25:32.850512 env[1216]: time="2025-03-17T18:25:32.850364398Z" level=info msg="StartContainer for \"f86c112d4021bb2a1a5bf324ce32e6a7208880b37ee6193f56a1ff78c443ea23\" returns successfully"
Mar 17 18:25:33.103024 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Mar 17 18:25:33.777538 kubelet[1914]: E0317 18:25:33.777506 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:34.582364 kubelet[1914]: E0317 18:25:34.582326 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:35.104796 kubelet[1914]: E0317 18:25:35.104756 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:35.925424 systemd-networkd[1046]: lxc_health: Link UP
Mar 17 18:25:35.936669 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Mar 17 18:25:35.937173 systemd-networkd[1046]: lxc_health: Gained carrier
Mar 17 18:25:36.582544 kubelet[1914]: E0317 18:25:36.582511 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:37.106335 kubelet[1914]: E0317 18:25:37.105761 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:37.125355 kubelet[1914]: I0317 18:25:37.125303 1914 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xwj9s" podStartSLOduration=9.125276475 podStartE2EDuration="9.125276475s" podCreationTimestamp="2025-03-17 18:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:25:33.796196659 +0000 UTC m=+82.326604026" watchObservedRunningTime="2025-03-17 18:25:37.125276475 +0000 UTC m=+85.655683762"
Mar 17 18:25:37.152260 systemd-networkd[1046]: lxc_health: Gained IPv6LL
Mar 17 18:25:37.788020 kubelet[1914]: E0317 18:25:37.787985 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:38.790371 kubelet[1914]: E0317 18:25:38.790331 1914 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:25:42.230101 systemd[1]: run-containerd-runc-k8s.io-f86c112d4021bb2a1a5bf324ce32e6a7208880b37ee6193f56a1ff78c443ea23-runc.Kcw89H.mount: Deactivated successfully.
Mar 17 18:25:42.299677 sshd[3701]: pam_unix(sshd:session): session closed for user core
Mar 17 18:25:42.302154 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:45296.service: Deactivated successfully.
Mar 17 18:25:42.303010 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 18:25:42.303562 systemd-logind[1207]: Session 24 logged out. Waiting for processes to exit.
Mar 17 18:25:42.304448 systemd-logind[1207]: Removed session 24.