Oct 29 00:41:53.678373 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 29 00:41:53.678401 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue Oct 28 23:18:12 -00 2025
Oct 29 00:41:53.678409 kernel: efi: EFI v2.70 by EDK II
Oct 29 00:41:53.678415 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 29 00:41:53.678420 kernel: random: crng init done
Oct 29 00:41:53.678426 kernel: ACPI: Early table checksum verification disabled
Oct 29 00:41:53.678432 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 29 00:41:53.678439 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 29 00:41:53.678445 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678450 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678456 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678461 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678466 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678472 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678479 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678485 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678491 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 29 00:41:53.678496 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 29 00:41:53.678502 kernel: NUMA: Failed to initialise from firmware
Oct 29 00:41:53.678508 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 00:41:53.678513 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Oct 29 00:41:53.678519 kernel: Zone ranges:
Oct 29 00:41:53.678524 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 00:41:53.678531 kernel: DMA32 empty
Oct 29 00:41:53.678537 kernel: Normal empty
Oct 29 00:41:53.678542 kernel: Movable zone start for each node
Oct 29 00:41:53.678548 kernel: Early memory node ranges
Oct 29 00:41:53.678553 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 29 00:41:53.678559 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 29 00:41:53.678564 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 29 00:41:53.678570 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 29 00:41:53.678576 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 29 00:41:53.678581 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 29 00:41:53.678587 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 29 00:41:53.678592 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 29 00:41:53.678599 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 29 00:41:53.678605 kernel: psci: probing for conduit method from ACPI.
Oct 29 00:41:53.678610 kernel: psci: PSCIv1.1 detected in firmware.
Oct 29 00:41:53.678616 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 29 00:41:53.678622 kernel: psci: Trusted OS migration not required
Oct 29 00:41:53.678630 kernel: psci: SMC Calling Convention v1.1
Oct 29 00:41:53.678636 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 29 00:41:53.678643 kernel: ACPI: SRAT not present
Oct 29 00:41:53.678649 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Oct 29 00:41:53.678655 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Oct 29 00:41:53.678661 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 29 00:41:53.678667 kernel: Detected PIPT I-cache on CPU0
Oct 29 00:41:53.678673 kernel: CPU features: detected: GIC system register CPU interface
Oct 29 00:41:53.678679 kernel: CPU features: detected: Hardware dirty bit management
Oct 29 00:41:53.678685 kernel: CPU features: detected: Spectre-v4
Oct 29 00:41:53.678691 kernel: CPU features: detected: Spectre-BHB
Oct 29 00:41:53.678698 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 29 00:41:53.678704 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 29 00:41:53.678710 kernel: CPU features: detected: ARM erratum 1418040
Oct 29 00:41:53.678716 kernel: CPU features: detected: SSBS not fully self-synchronizing
Oct 29 00:41:53.678722 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 29 00:41:53.678728 kernel: Policy zone: DMA
Oct 29 00:41:53.678735 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ddcdcc5923a51dfb24bee27c235aa754769d72fd417f60397f96d58c38c7a3e3
Oct 29 00:41:53.678741 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 29 00:41:53.678747 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 29 00:41:53.678754 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 29 00:41:53.678760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 29 00:41:53.678767 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved)
Oct 29 00:41:53.678774 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 29 00:41:53.678780 kernel: trace event string verifier disabled
Oct 29 00:41:53.678786 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 29 00:41:53.678792 kernel: rcu: RCU event tracing is enabled.
Oct 29 00:41:53.678798 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 29 00:41:53.678804 kernel: Trampoline variant of Tasks RCU enabled.
Oct 29 00:41:53.678810 kernel: Tracing variant of Tasks RCU enabled.
Oct 29 00:41:53.678816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 29 00:41:53.678822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 29 00:41:53.678828 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 29 00:41:53.678835 kernel: GICv3: 256 SPIs implemented
Oct 29 00:41:53.678841 kernel: GICv3: 0 Extended SPIs implemented
Oct 29 00:41:53.678847 kernel: GICv3: Distributor has no Range Selector support
Oct 29 00:41:53.678853 kernel: Root IRQ handler: gic_handle_irq
Oct 29 00:41:53.678859 kernel: GICv3: 16 PPIs implemented
Oct 29 00:41:53.678865 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 29 00:41:53.678870 kernel: ACPI: SRAT not present
Oct 29 00:41:53.678876 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 29 00:41:53.678882 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 29 00:41:53.678889 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 29 00:41:53.678895 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 29 00:41:53.678901 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 29 00:41:53.678908 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 00:41:53.678914 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 29 00:41:53.678920 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 29 00:41:53.678926 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 29 00:41:53.678932 kernel: arm-pv: using stolen time PV
Oct 29 00:41:53.678938 kernel: Console: colour dummy device 80x25
Oct 29 00:41:53.678944 kernel: ACPI: Core revision 20210730
Oct 29 00:41:53.678951 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 29 00:41:53.678957 kernel: pid_max: default: 32768 minimum: 301
Oct 29 00:41:53.678963 kernel: LSM: Security Framework initializing
Oct 29 00:41:53.678970 kernel: SELinux: Initializing.
Oct 29 00:41:53.678977 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:41:53.678983 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 29 00:41:53.678989 kernel: rcu: Hierarchical SRCU implementation.
Oct 29 00:41:53.678995 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 29 00:41:53.679002 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 29 00:41:53.679008 kernel: Remapping and enabling EFI services.
Oct 29 00:41:53.679014 kernel: smp: Bringing up secondary CPUs ...
Oct 29 00:41:53.679020 kernel: Detected PIPT I-cache on CPU1
Oct 29 00:41:53.679027 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 29 00:41:53.679033 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 29 00:41:53.679040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 00:41:53.679046 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 29 00:41:53.679052 kernel: Detected PIPT I-cache on CPU2
Oct 29 00:41:53.679059 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 29 00:41:53.679065 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 29 00:41:53.679071 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 00:41:53.679077 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 29 00:41:53.679083 kernel: Detected PIPT I-cache on CPU3
Oct 29 00:41:53.679090 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 29 00:41:53.679097 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 29 00:41:53.679103 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 29 00:41:53.679109 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 29 00:41:53.679119 kernel: smp: Brought up 1 node, 4 CPUs
Oct 29 00:41:53.679126 kernel: SMP: Total of 4 processors activated.
Oct 29 00:41:53.679133 kernel: CPU features: detected: 32-bit EL0 Support
Oct 29 00:41:53.679139 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 29 00:41:53.679146 kernel: CPU features: detected: Common not Private translations
Oct 29 00:41:53.679152 kernel: CPU features: detected: CRC32 instructions
Oct 29 00:41:53.679159 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 29 00:41:53.679165 kernel: CPU features: detected: LSE atomic instructions
Oct 29 00:41:53.679173 kernel: CPU features: detected: Privileged Access Never
Oct 29 00:41:53.679179 kernel: CPU features: detected: RAS Extension Support
Oct 29 00:41:53.679186 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 29 00:41:53.679201 kernel: CPU: All CPU(s) started at EL1
Oct 29 00:41:53.679208 kernel: alternatives: patching kernel code
Oct 29 00:41:53.679217 kernel: devtmpfs: initialized
Oct 29 00:41:53.679223 kernel: KASLR enabled
Oct 29 00:41:53.679230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 29 00:41:53.679237 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 29 00:41:53.679243 kernel: pinctrl core: initialized pinctrl subsystem
Oct 29 00:41:53.679250 kernel: SMBIOS 3.0.0 present.
Oct 29 00:41:53.679256 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 29 00:41:53.679263 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 29 00:41:53.679269 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 29 00:41:53.679277 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 29 00:41:53.679284 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 29 00:41:53.679290 kernel: audit: initializing netlink subsys (disabled)
Oct 29 00:41:53.679297 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
Oct 29 00:41:53.679303 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 29 00:41:53.679310 kernel: cpuidle: using governor menu
Oct 29 00:41:53.679316 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 29 00:41:53.679323 kernel: ASID allocator initialised with 32768 entries
Oct 29 00:41:53.679329 kernel: ACPI: bus type PCI registered
Oct 29 00:41:53.679337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 29 00:41:53.679343 kernel: Serial: AMBA PL011 UART driver
Oct 29 00:41:53.679350 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 29 00:41:53.679356 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 29 00:41:53.679363 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 29 00:41:53.679369 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 29 00:41:53.679376 kernel: cryptd: max_cpu_qlen set to 1000
Oct 29 00:41:53.679383 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 29 00:41:53.679395 kernel: ACPI: Added _OSI(Module Device)
Oct 29 00:41:53.679404 kernel: ACPI: Added _OSI(Processor Device)
Oct 29 00:41:53.679410 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 29 00:41:53.679417 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 29 00:41:53.679423 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 29 00:41:53.679429 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 29 00:41:53.679436 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 29 00:41:53.679442 kernel: ACPI: Interpreter enabled
Oct 29 00:41:53.679449 kernel: ACPI: Using GIC for interrupt routing
Oct 29 00:41:53.679455 kernel: ACPI: MCFG table detected, 1 entries
Oct 29 00:41:53.679463 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 29 00:41:53.679469 kernel: printk: console [ttyAMA0] enabled
Oct 29 00:41:53.679476 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 29 00:41:53.679593 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 29 00:41:53.679655 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 29 00:41:53.679762 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 29 00:41:53.679826 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 29 00:41:53.680062 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 29 00:41:53.680077 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 29 00:41:53.680084 kernel: PCI host bridge to bus 0000:00
Oct 29 00:41:53.683715 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 29 00:41:53.683792 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 29 00:41:53.683853 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 29 00:41:53.683905 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 29 00:41:53.683980 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 29 00:41:53.684046 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 29 00:41:53.684105 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 29 00:41:53.684163 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 29 00:41:53.684259 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 29 00:41:53.684318 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 29 00:41:53.684375 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 29 00:41:53.684450 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 29 00:41:53.684510 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 29 00:41:53.684569 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 29 00:41:53.684620 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 29 00:41:53.684629 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 29 00:41:53.684635 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 29 00:41:53.684642 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 29 00:41:53.684652 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 29 00:41:53.684658 kernel: iommu: Default domain type: Translated
Oct 29 00:41:53.684665 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 29 00:41:53.684673 kernel: vgaarb: loaded
Oct 29 00:41:53.684680 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 29 00:41:53.684686 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 29 00:41:53.684693 kernel: PTP clock support registered
Oct 29 00:41:53.684699 kernel: Registered efivars operations
Oct 29 00:41:53.684706 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 29 00:41:53.684712 kernel: VFS: Disk quotas dquot_6.6.0
Oct 29 00:41:53.684721 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 29 00:41:53.684727 kernel: pnp: PnP ACPI init
Oct 29 00:41:53.684795 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 29 00:41:53.684805 kernel: pnp: PnP ACPI: found 1 devices
Oct 29 00:41:53.684811 kernel: NET: Registered PF_INET protocol family
Oct 29 00:41:53.684818 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 29 00:41:53.684825 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 29 00:41:53.684831 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 29 00:41:53.684840 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 29 00:41:53.684846 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 29 00:41:53.684853 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 29 00:41:53.684859 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:41:53.684866 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 29 00:41:53.684873 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 29 00:41:53.684879 kernel: PCI: CLS 0 bytes, default 64
Oct 29 00:41:53.684886 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 29 00:41:53.684892 kernel: kvm [1]: HYP mode not available
Oct 29 00:41:53.684900 kernel: Initialise system trusted keyrings
Oct 29 00:41:53.684906 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 29 00:41:53.684913 kernel: Key type asymmetric registered
Oct 29 00:41:53.684919 kernel: Asymmetric key parser 'x509' registered
Oct 29 00:41:53.684926 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 29 00:41:53.684932 kernel: io scheduler mq-deadline registered
Oct 29 00:41:53.684939 kernel: io scheduler kyber registered
Oct 29 00:41:53.684945 kernel: io scheduler bfq registered
Oct 29 00:41:53.684952 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 29 00:41:53.684960 kernel: ACPI: button: Power Button [PWRB]
Oct 29 00:41:53.684967 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 29 00:41:53.685026 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 29 00:41:53.685035 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 29 00:41:53.685041 kernel: thunder_xcv, ver 1.0
Oct 29 00:41:53.685048 kernel: thunder_bgx, ver 1.0
Oct 29 00:41:53.685054 kernel: nicpf, ver 1.0
Oct 29 00:41:53.685061 kernel: nicvf, ver 1.0
Oct 29 00:41:53.685124 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 29 00:41:53.685181 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-29T00:41:53 UTC (1761698513)
Oct 29 00:41:53.685199 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 29 00:41:53.685206 kernel: NET: Registered PF_INET6 protocol family
Oct 29 00:41:53.685212 kernel: Segment Routing with IPv6
Oct 29 00:41:53.685221 kernel: In-situ OAM (IOAM) with IPv6
Oct 29 00:41:53.685228 kernel: NET: Registered PF_PACKET protocol family
Oct 29 00:41:53.685234 kernel: Key type dns_resolver registered
Oct 29 00:41:53.685241 kernel: registered taskstats version 1
Oct 29 00:41:53.685249 kernel: Loading compiled-in X.509 certificates
Oct 29 00:41:53.685256 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 365034a3270fb89208cc05b5e556df135e9c6322'
Oct 29 00:41:53.685263 kernel: Key type .fscrypt registered
Oct 29 00:41:53.685269 kernel: Key type fscrypt-provisioning registered
Oct 29 00:41:53.685276 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 29 00:41:53.685283 kernel: ima: Allocated hash algorithm: sha1
Oct 29 00:41:53.685291 kernel: ima: No architecture policies found
Oct 29 00:41:53.685298 kernel: clk: Disabling unused clocks
Oct 29 00:41:53.685304 kernel: Freeing unused kernel memory: 36416K
Oct 29 00:41:53.685312 kernel: Run /init as init process
Oct 29 00:41:53.685318 kernel: with arguments:
Oct 29 00:41:53.685325 kernel: /init
Oct 29 00:41:53.685331 kernel: with environment:
Oct 29 00:41:53.685337 kernel: HOME=/
Oct 29 00:41:53.685343 kernel: TERM=linux
Oct 29 00:41:53.685350 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 29 00:41:53.685358 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 29 00:41:53.685368 systemd[1]: Detected virtualization kvm.
Oct 29 00:41:53.685376 systemd[1]: Detected architecture arm64.
Oct 29 00:41:53.685383 systemd[1]: Running in initrd.
Oct 29 00:41:53.685397 systemd[1]: No hostname configured, using default hostname.
Oct 29 00:41:53.685405 systemd[1]: Hostname set to .
Oct 29 00:41:53.685412 systemd[1]: Initializing machine ID from VM UUID.
Oct 29 00:41:53.685420 systemd[1]: Queued start job for default target initrd.target.
Oct 29 00:41:53.685427 systemd[1]: Started systemd-ask-password-console.path.
Oct 29 00:41:53.685436 systemd[1]: Reached target cryptsetup.target.
Oct 29 00:41:53.685443 systemd[1]: Reached target paths.target.
Oct 29 00:41:53.685451 systemd[1]: Reached target slices.target.
Oct 29 00:41:53.685458 systemd[1]: Reached target swap.target.
Oct 29 00:41:53.685465 systemd[1]: Reached target timers.target.
Oct 29 00:41:53.685473 systemd[1]: Listening on iscsid.socket.
Oct 29 00:41:53.685480 systemd[1]: Listening on iscsiuio.socket.
Oct 29 00:41:53.685489 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 29 00:41:53.685496 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 29 00:41:53.685504 systemd[1]: Listening on systemd-journald.socket.
Oct 29 00:41:53.685511 systemd[1]: Listening on systemd-networkd.socket.
Oct 29 00:41:53.685518 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 29 00:41:53.685526 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 29 00:41:53.685533 systemd[1]: Reached target sockets.target.
Oct 29 00:41:53.685540 systemd[1]: Starting kmod-static-nodes.service...
Oct 29 00:41:53.685548 systemd[1]: Finished network-cleanup.service.
Oct 29 00:41:53.685556 systemd[1]: Starting systemd-fsck-usr.service...
Oct 29 00:41:53.685564 systemd[1]: Starting systemd-journald.service...
Oct 29 00:41:53.685571 systemd[1]: Starting systemd-modules-load.service...
Oct 29 00:41:53.685578 systemd[1]: Starting systemd-resolved.service...
Oct 29 00:41:53.685586 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 29 00:41:53.685593 systemd[1]: Finished kmod-static-nodes.service.
Oct 29 00:41:53.685601 systemd[1]: Finished systemd-fsck-usr.service.
Oct 29 00:41:53.685609 kernel: audit: type=1130 audit(1761698513.677:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.685616 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 29 00:41:53.685624 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 29 00:41:53.685632 kernel: audit: type=1130 audit(1761698513.685:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.685642 systemd-journald[290]: Journal started
Oct 29 00:41:53.685682 systemd-journald[290]: Runtime Journal (/run/log/journal/eb71c354584046679b779154a7105ef6) is 6.0M, max 48.7M, 42.6M free.
Oct 29 00:41:53.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.680306 systemd-modules-load[291]: Inserted module 'overlay'
Oct 29 00:41:53.688782 systemd[1]: Started systemd-journald.service.
Oct 29 00:41:53.693088 kernel: audit: type=1130 audit(1761698513.689:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.692719 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 29 00:41:53.693541 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 29 00:41:53.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.698224 kernel: audit: type=1130 audit(1761698513.694:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.701206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 29 00:41:53.702024 systemd-resolved[292]: Positive Trust Anchors:
Oct 29 00:41:53.702037 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:41:53.702064 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 29 00:41:53.711814 kernel: Bridge firewalling registered
Oct 29 00:41:53.707111 systemd-resolved[292]: Defaulting to hostname 'linux'.
Oct 29 00:41:53.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.708640 systemd[1]: Started systemd-resolved.service.
Oct 29 00:41:53.717447 kernel: audit: type=1130 audit(1761698513.712:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.710716 systemd-modules-load[291]: Inserted module 'br_netfilter'
Oct 29 00:41:53.715305 systemd[1]: Reached target nss-lookup.target.
Oct 29 00:41:53.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.717548 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 29 00:41:53.722374 kernel: audit: type=1130 audit(1761698513.718:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.722296 systemd[1]: Starting dracut-cmdline.service...
Oct 29 00:41:53.724212 kernel: SCSI subsystem initialized
Oct 29 00:41:53.730327 dracut-cmdline[308]: dracut-dracut-053
Oct 29 00:41:53.732531 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 29 00:41:53.732547 kernel: device-mapper: uevent: version 1.0.3
Oct 29 00:41:53.732556 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 29 00:41:53.732564 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ddcdcc5923a51dfb24bee27c235aa754769d72fd417f60397f96d58c38c7a3e3
Oct 29 00:41:53.737920 systemd-modules-load[291]: Inserted module 'dm_multipath'
Oct 29 00:41:53.738680 systemd[1]: Finished systemd-modules-load.service.
Oct 29 00:41:53.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.742205 kernel: audit: type=1130 audit(1761698513.739:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.742954 systemd[1]: Starting systemd-sysctl.service...
Oct 29 00:41:53.749127 systemd[1]: Finished systemd-sysctl.service.
Oct 29 00:41:53.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.753220 kernel: audit: type=1130 audit(1761698513.749:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.798210 kernel: Loading iSCSI transport class v2.0-870.
Oct 29 00:41:53.810219 kernel: iscsi: registered transport (tcp)
Oct 29 00:41:53.825209 kernel: iscsi: registered transport (qla4xxx)
Oct 29 00:41:53.825243 kernel: QLogic iSCSI HBA Driver
Oct 29 00:41:53.858041 systemd[1]: Finished dracut-cmdline.service.
Oct 29 00:41:53.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.859661 systemd[1]: Starting dracut-pre-udev.service...
Oct 29 00:41:53.862849 kernel: audit: type=1130 audit(1761698513.858:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:53.901210 kernel: raid6: neonx8 gen() 13749 MB/s
Oct 29 00:41:53.918203 kernel: raid6: neonx8 xor() 10828 MB/s
Oct 29 00:41:53.935206 kernel: raid6: neonx4 gen() 13548 MB/s
Oct 29 00:41:53.952204 kernel: raid6: neonx4 xor() 11160 MB/s
Oct 29 00:41:53.969202 kernel: raid6: neonx2 gen() 12953 MB/s
Oct 29 00:41:53.986205 kernel: raid6: neonx2 xor() 10240 MB/s
Oct 29 00:41:54.003202 kernel: raid6: neonx1 gen() 10593 MB/s
Oct 29 00:41:54.020205 kernel: raid6: neonx1 xor() 8784 MB/s
Oct 29 00:41:54.037204 kernel: raid6: int64x8 gen() 6263 MB/s
Oct 29 00:41:54.054219 kernel: raid6: int64x8 xor() 3539 MB/s
Oct 29 00:41:54.071210 kernel: raid6: int64x4 gen() 7158 MB/s
Oct 29 00:41:54.088210 kernel: raid6: int64x4 xor() 3856 MB/s
Oct 29 00:41:54.105218 kernel: raid6: int64x2 gen() 6152 MB/s
Oct 29 00:41:54.122220 kernel: raid6: int64x2 xor() 3321 MB/s
Oct 29 00:41:54.139217 kernel: raid6: int64x1 gen() 5041 MB/s
Oct 29 00:41:54.156286 kernel: raid6: int64x1 xor() 2643 MB/s
Oct 29 00:41:54.156311 kernel: raid6: using algorithm neonx8 gen() 13749 MB/s
Oct 29 00:41:54.156328 kernel: raid6: .... xor() 10828 MB/s, rmw enabled
Oct 29 00:41:54.157351 kernel: raid6: using neon recovery algorithm
Oct 29 00:41:54.168628 kernel: xor: measuring software checksum speed
Oct 29 00:41:54.168669 kernel: 8regs : 16688 MB/sec
Oct 29 00:41:54.168678 kernel: 32regs : 20717 MB/sec
Oct 29 00:41:54.169213 kernel: arm64_neon : 27813 MB/sec
Oct 29 00:41:54.169226 kernel: xor: using function: arm64_neon (27813 MB/sec)
Oct 29 00:41:54.222212 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 29 00:41:54.231923 systemd[1]: Finished dracut-pre-udev.service.
Oct 29 00:41:54.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:54.233000 audit: BPF prog-id=7 op=LOAD
Oct 29 00:41:54.233000 audit: BPF prog-id=8 op=LOAD
Oct 29 00:41:54.233721 systemd[1]: Starting systemd-udevd.service...
Oct 29 00:41:54.245635 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Oct 29 00:41:54.249086 systemd[1]: Started systemd-udevd.service.
Oct 29 00:41:54.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:54.251057 systemd[1]: Starting dracut-pre-trigger.service...
Oct 29 00:41:54.260987 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Oct 29 00:41:54.287108 systemd[1]: Finished dracut-pre-trigger.service.
Oct 29 00:41:54.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:54.288729 systemd[1]: Starting systemd-udev-trigger.service...
Oct 29 00:41:54.321869 systemd[1]: Finished systemd-udev-trigger.service.
Oct 29 00:41:54.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 29 00:41:54.350230 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 29 00:41:54.356148 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 29 00:41:54.356163 kernel: GPT:9289727 != 19775487
Oct 29 00:41:54.356178 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 29 00:41:54.356187 kernel: GPT:9289727 != 19775487
Oct 29 00:41:54.356222 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 29 00:41:54.356231 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:41:54.369222 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (547) Oct 29 00:41:54.371239 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 29 00:41:54.372177 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 29 00:41:54.376750 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 29 00:41:54.380285 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 29 00:41:54.386169 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 00:41:54.387807 systemd[1]: Starting disk-uuid.service... Oct 29 00:41:54.394005 disk-uuid[564]: Primary Header is updated. Oct 29 00:41:54.394005 disk-uuid[564]: Secondary Entries is updated. Oct 29 00:41:54.394005 disk-uuid[564]: Secondary Header is updated. Oct 29 00:41:54.397225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:41:55.406441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 29 00:41:55.406518 disk-uuid[565]: The operation has completed successfully. Oct 29 00:41:55.445910 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 29 00:41:55.446696 systemd[1]: Finished disk-uuid.service. Oct 29 00:41:55.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.449798 systemd[1]: Starting verity-setup.service... Oct 29 00:41:55.468270 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 29 00:41:55.490948 systemd[1]: Found device dev-mapper-usr.device. 
Oct 29 00:41:55.493992 systemd[1]: Mounting sysusr-usr.mount... Oct 29 00:41:55.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.498100 systemd[1]: Finished verity-setup.service. Oct 29 00:41:55.545144 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 29 00:41:55.543754 systemd[1]: Mounted sysusr-usr.mount. Oct 29 00:41:55.544520 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 29 00:41:55.546778 systemd[1]: Starting ignition-setup.service... Oct 29 00:41:55.548234 systemd[1]: Starting parse-ip-for-networkd.service... Oct 29 00:41:55.556692 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 00:41:55.556732 kernel: BTRFS info (device vda6): using free space tree Oct 29 00:41:55.556742 kernel: BTRFS info (device vda6): has skinny extents Oct 29 00:41:55.564501 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 29 00:41:55.569871 systemd[1]: Finished ignition-setup.service. Oct 29 00:41:55.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.571384 systemd[1]: Starting ignition-fetch-offline.service... 
Oct 29 00:41:55.619203 ignition[649]: Ignition 2.14.0 Oct 29 00:41:55.619214 ignition[649]: Stage: fetch-offline Oct 29 00:41:55.619253 ignition[649]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:55.619262 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:55.619397 ignition[649]: parsed url from cmdline: "" Oct 29 00:41:55.619400 ignition[649]: no config URL provided Oct 29 00:41:55.619405 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Oct 29 00:41:55.619412 ignition[649]: no config at "/usr/lib/ignition/user.ign" Oct 29 00:41:55.619429 ignition[649]: op(1): [started] loading QEMU firmware config module Oct 29 00:41:55.619433 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 29 00:41:55.631016 ignition[649]: op(1): [finished] loading QEMU firmware config module Oct 29 00:41:55.633517 systemd[1]: Finished parse-ip-for-networkd.service. Oct 29 00:41:55.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.634000 audit: BPF prog-id=9 op=LOAD Oct 29 00:41:55.635481 systemd[1]: Starting systemd-networkd.service... Oct 29 00:41:55.655061 systemd-networkd[743]: lo: Link UP Oct 29 00:41:55.655888 systemd-networkd[743]: lo: Gained carrier Oct 29 00:41:55.657077 systemd-networkd[743]: Enumeration completed Oct 29 00:41:55.658092 systemd[1]: Started systemd-networkd.service. Oct 29 00:41:55.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.659044 systemd[1]: Reached target network.target. Oct 29 00:41:55.659388 systemd-networkd[743]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 29 00:41:55.661131 systemd-networkd[743]: eth0: Link UP Oct 29 00:41:55.661135 systemd-networkd[743]: eth0: Gained carrier Oct 29 00:41:55.661802 systemd[1]: Starting iscsiuio.service... Oct 29 00:41:55.669829 systemd[1]: Started iscsiuio.service. Oct 29 00:41:55.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.671408 systemd[1]: Starting iscsid.service... Oct 29 00:41:55.674739 iscsid[748]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 29 00:41:55.674739 iscsid[748]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 29 00:41:55.674739 iscsid[748]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 29 00:41:55.674739 iscsid[748]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 29 00:41:55.674739 iscsid[748]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 29 00:41:55.674739 iscsid[748]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 29 00:41:55.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.674765 systemd-networkd[743]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:41:55.677688 systemd[1]: Started iscsid.service. Oct 29 00:41:55.683062 systemd[1]: Starting dracut-initqueue.service... 
Oct 29 00:41:55.693715 systemd[1]: Finished dracut-initqueue.service. Oct 29 00:41:55.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.694800 systemd[1]: Reached target remote-fs-pre.target. Oct 29 00:41:55.696167 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 00:41:55.697726 systemd[1]: Reached target remote-fs.target. Oct 29 00:41:55.699867 systemd[1]: Starting dracut-pre-mount.service... Oct 29 00:41:55.701659 ignition[649]: parsing config with SHA512: a4560111fac4e7a48e1d5fda54eeecdc9e5e4819625c0fb99d252fd402cfd2b61d3c84de1084e293fa38e382b173331c7ed720ba91cd0e907794c64349e85a31 Oct 29 00:41:55.707737 systemd[1]: Finished dracut-pre-mount.service. Oct 29 00:41:55.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.711404 unknown[649]: fetched base config from "system" Oct 29 00:41:55.711419 unknown[649]: fetched user config from "qemu" Oct 29 00:41:55.711990 ignition[649]: fetch-offline: fetch-offline passed Oct 29 00:41:55.713015 systemd[1]: Finished ignition-fetch-offline.service. Oct 29 00:41:55.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.712051 ignition[649]: Ignition finished successfully Oct 29 00:41:55.714365 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 29 00:41:55.715045 systemd[1]: Starting ignition-kargs.service... 
Oct 29 00:41:55.724706 ignition[762]: Ignition 2.14.0 Oct 29 00:41:55.724715 ignition[762]: Stage: kargs Oct 29 00:41:55.724805 ignition[762]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:55.726945 systemd[1]: Finished ignition-kargs.service. Oct 29 00:41:55.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.724815 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:55.725773 ignition[762]: kargs: kargs passed Oct 29 00:41:55.729099 systemd[1]: Starting ignition-disks.service... Oct 29 00:41:55.725817 ignition[762]: Ignition finished successfully Oct 29 00:41:55.735498 ignition[768]: Ignition 2.14.0 Oct 29 00:41:55.735507 ignition[768]: Stage: disks Oct 29 00:41:55.735595 ignition[768]: no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:55.735605 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:55.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.737384 systemd[1]: Finished ignition-disks.service. Oct 29 00:41:55.736747 ignition[768]: disks: disks passed Oct 29 00:41:55.738171 systemd[1]: Reached target initrd-root-device.target. Oct 29 00:41:55.736794 ignition[768]: Ignition finished successfully Oct 29 00:41:55.739632 systemd[1]: Reached target local-fs-pre.target. Oct 29 00:41:55.740866 systemd[1]: Reached target local-fs.target. Oct 29 00:41:55.741937 systemd[1]: Reached target sysinit.target. Oct 29 00:41:55.743149 systemd[1]: Reached target basic.target. Oct 29 00:41:55.745129 systemd[1]: Starting systemd-fsck-root.service... 
Oct 29 00:41:55.757130 systemd-fsck[776]: ROOT: clean, 637/553520 files, 56031/553472 blocks Oct 29 00:41:55.761324 systemd[1]: Finished systemd-fsck-root.service. Oct 29 00:41:55.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.762793 systemd[1]: Mounting sysroot.mount... Oct 29 00:41:55.769210 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 29 00:41:55.769668 systemd[1]: Mounted sysroot.mount. Oct 29 00:41:55.770345 systemd[1]: Reached target initrd-root-fs.target. Oct 29 00:41:55.772450 systemd[1]: Mounting sysroot-usr.mount... Oct 29 00:41:55.773252 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 29 00:41:55.773290 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 29 00:41:55.773312 systemd[1]: Reached target ignition-diskful.target. Oct 29 00:41:55.775049 systemd[1]: Mounted sysroot-usr.mount. Oct 29 00:41:55.776600 systemd[1]: Starting initrd-setup-root.service... Oct 29 00:41:55.780977 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory Oct 29 00:41:55.786050 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory Oct 29 00:41:55.790237 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory Oct 29 00:41:55.794177 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory Oct 29 00:41:55.821990 systemd[1]: Finished initrd-setup-root.service. Oct 29 00:41:55.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:55.823559 systemd[1]: Starting ignition-mount.service... Oct 29 00:41:55.824797 systemd[1]: Starting sysroot-boot.service... Oct 29 00:41:55.828823 bash[827]: umount: /sysroot/usr/share/oem: not mounted. Oct 29 00:41:55.836320 ignition[828]: INFO : Ignition 2.14.0 Oct 29 00:41:55.836320 ignition[828]: INFO : Stage: mount Oct 29 00:41:55.838641 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:55.838641 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:55.838641 ignition[828]: INFO : mount: mount passed Oct 29 00:41:55.838641 ignition[828]: INFO : Ignition finished successfully Oct 29 00:41:55.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.838680 systemd[1]: Finished ignition-mount.service. Oct 29 00:41:55.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:55.843049 systemd[1]: Finished sysroot-boot.service. Oct 29 00:41:56.505979 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 29 00:41:56.512897 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (837) Oct 29 00:41:56.512932 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 29 00:41:56.512942 kernel: BTRFS info (device vda6): using free space tree Oct 29 00:41:56.514229 kernel: BTRFS info (device vda6): has skinny extents Oct 29 00:41:56.517228 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 29 00:41:56.518724 systemd[1]: Starting ignition-files.service... 
Oct 29 00:41:56.531790 ignition[857]: INFO : Ignition 2.14.0 Oct 29 00:41:56.531790 ignition[857]: INFO : Stage: files Oct 29 00:41:56.533261 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:56.533261 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:56.533261 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Oct 29 00:41:56.536476 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 29 00:41:56.536476 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 29 00:41:56.536476 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 29 00:41:56.540166 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 29 00:41:56.540166 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 29 00:41:56.540166 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 00:41:56.540166 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Oct 29 00:41:56.539593 unknown[857]: wrote ssh authorized keys file for user: core Oct 29 00:41:56.670148 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 29 00:41:56.821781 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Oct 29 00:41:56.823674 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 00:41:56.823674 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 29 00:41:57.039922 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 29 00:41:57.074521 systemd-networkd[743]: eth0: Gained IPv6LL Oct 29 00:41:57.131228 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 29 00:41:57.133078 ignition[857]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 00:41:57.133078 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Oct 29 00:41:57.411611 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 29 00:41:57.813098 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Oct 29 00:41:57.815174 ignition[857]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 29 00:41:57.816526 ignition[857]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(e): op(f): [finished] writing unit 
"coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" Oct 29 00:41:57.818261 ignition[857]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:41:57.845597 ignition[857]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 29 00:41:57.847123 ignition[857]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" Oct 29 00:41:57.847123 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:41:57.847123 ignition[857]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 29 00:41:57.847123 ignition[857]: INFO : files: files passed Oct 29 00:41:57.847123 ignition[857]: INFO : Ignition finished successfully Oct 29 00:41:57.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.847086 systemd[1]: Finished ignition-files.service. Oct 29 00:41:57.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.848841 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 29 00:41:57.849949 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 29 00:41:57.859186 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 29 00:41:57.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.850625 systemd[1]: Starting ignition-quench.service... Oct 29 00:41:57.862712 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 29 00:41:57.854216 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 29 00:41:57.854301 systemd[1]: Finished ignition-quench.service. Oct 29 00:41:57.857537 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 29 00:41:57.860152 systemd[1]: Reached target ignition-complete.target. Oct 29 00:41:57.862620 systemd[1]: Starting initrd-parse-etc.service... Oct 29 00:41:57.874079 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 29 00:41:57.874166 systemd[1]: Finished initrd-parse-etc.service. Oct 29 00:41:57.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.876023 systemd[1]: Reached target initrd-fs.target. Oct 29 00:41:57.877117 systemd[1]: Reached target initrd.target. Oct 29 00:41:57.878364 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 29 00:41:57.879016 systemd[1]: Starting dracut-pre-pivot.service... Oct 29 00:41:57.888787 systemd[1]: Finished dracut-pre-pivot.service. Oct 29 00:41:57.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.890222 systemd[1]: Starting initrd-cleanup.service... Oct 29 00:41:57.897395 systemd[1]: Stopped target nss-lookup.target. Oct 29 00:41:57.898202 systemd[1]: Stopped target remote-cryptsetup.target. Oct 29 00:41:57.899482 systemd[1]: Stopped target timers.target. Oct 29 00:41:57.900758 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 29 00:41:57.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.900856 systemd[1]: Stopped dracut-pre-pivot.service. Oct 29 00:41:57.902065 systemd[1]: Stopped target initrd.target. Oct 29 00:41:57.903289 systemd[1]: Stopped target basic.target. Oct 29 00:41:57.904504 systemd[1]: Stopped target ignition-complete.target. Oct 29 00:41:57.905745 systemd[1]: Stopped target ignition-diskful.target. Oct 29 00:41:57.906984 systemd[1]: Stopped target initrd-root-device.target. Oct 29 00:41:57.908309 systemd[1]: Stopped target remote-fs.target. Oct 29 00:41:57.909605 systemd[1]: Stopped target remote-fs-pre.target. 
Oct 29 00:41:57.910960 systemd[1]: Stopped target sysinit.target. Oct 29 00:41:57.912112 systemd[1]: Stopped target local-fs.target. Oct 29 00:41:57.913341 systemd[1]: Stopped target local-fs-pre.target. Oct 29 00:41:57.914527 systemd[1]: Stopped target swap.target. Oct 29 00:41:57.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.915634 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 29 00:41:57.915735 systemd[1]: Stopped dracut-pre-mount.service. Oct 29 00:41:57.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.917012 systemd[1]: Stopped target cryptsetup.target. Oct 29 00:41:57.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.918029 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 29 00:41:57.918123 systemd[1]: Stopped dracut-initqueue.service. Oct 29 00:41:57.919506 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 29 00:41:57.919602 systemd[1]: Stopped ignition-fetch-offline.service. Oct 29 00:41:57.920792 systemd[1]: Stopped target paths.target. Oct 29 00:41:57.921872 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 29 00:41:57.923246 systemd[1]: Stopped systemd-ask-password-console.path. Oct 29 00:41:57.924830 systemd[1]: Stopped target slices.target. Oct 29 00:41:57.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.925983 systemd[1]: Stopped target sockets.target. Oct 29 00:41:57.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.927089 systemd[1]: iscsid.socket: Deactivated successfully. Oct 29 00:41:57.927156 systemd[1]: Closed iscsid.socket. Oct 29 00:41:57.928504 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 29 00:41:57.928599 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 29 00:41:57.929945 systemd[1]: ignition-files.service: Deactivated successfully. Oct 29 00:41:57.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.930036 systemd[1]: Stopped ignition-files.service. Oct 29 00:41:57.931817 systemd[1]: Stopping ignition-mount.service... Oct 29 00:41:57.932947 systemd[1]: Stopping iscsiuio.service... Oct 29 00:41:57.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.934957 systemd[1]: Stopping sysroot-boot.service... Oct 29 00:41:57.941467 ignition[898]: INFO : Ignition 2.14.0 Oct 29 00:41:57.941467 ignition[898]: INFO : Stage: umount Oct 29 00:41:57.941467 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 29 00:41:57.941467 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 29 00:41:57.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.935898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 29 00:41:57.947340 ignition[898]: INFO : umount: umount passed Oct 29 00:41:57.947340 ignition[898]: INFO : Ignition finished successfully Oct 29 00:41:57.936031 systemd[1]: Stopped systemd-udev-trigger.service. Oct 29 00:41:57.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.937306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 29 00:41:57.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.937414 systemd[1]: Stopped dracut-pre-trigger.service. Oct 29 00:41:57.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.942096 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 29 00:41:57.942187 systemd[1]: Stopped iscsiuio.service. Oct 29 00:41:57.943918 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 29 00:41:57.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.943988 systemd[1]: Stopped ignition-mount.service. Oct 29 00:41:57.945991 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 29 00:41:57.946521 systemd[1]: Stopped target network.target. Oct 29 00:41:57.948053 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 29 00:41:57.948086 systemd[1]: Closed iscsiuio.socket. Oct 29 00:41:57.949233 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 29 00:41:57.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.949271 systemd[1]: Stopped ignition-disks.service. Oct 29 00:41:57.950779 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 29 00:41:57.950819 systemd[1]: Stopped ignition-kargs.service. Oct 29 00:41:57.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.952003 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 29 00:41:57.952042 systemd[1]: Stopped ignition-setup.service. Oct 29 00:41:57.953781 systemd[1]: Stopping systemd-networkd.service... Oct 29 00:41:57.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.970000 audit: BPF prog-id=6 op=UNLOAD Oct 29 00:41:57.954994 systemd[1]: Stopping systemd-resolved.service... Oct 29 00:41:57.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.956641 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 29 00:41:57.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.956723 systemd[1]: Finished initrd-cleanup.service. Oct 29 00:41:57.961640 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 29 00:41:57.961734 systemd[1]: Stopped systemd-resolved.service. Oct 29 00:41:57.963502 systemd-networkd[743]: eth0: DHCPv6 lease lost Oct 29 00:41:57.978000 audit: BPF prog-id=9 op=UNLOAD Oct 29 00:41:57.964446 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 29 00:41:57.964533 systemd[1]: Stopped systemd-networkd.service. Oct 29 00:41:57.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.966063 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 29 00:41:57.966096 systemd[1]: Closed systemd-networkd.socket. Oct 29 00:41:57.967725 systemd[1]: Stopping network-cleanup.service... Oct 29 00:41:57.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.968458 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 29 00:41:57.968514 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 29 00:41:57.969870 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:41:57.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.969909 systemd[1]: Stopped systemd-sysctl.service. 
Oct 29 00:41:57.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.972062 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 29 00:41:57.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.972105 systemd[1]: Stopped systemd-modules-load.service. Oct 29 00:41:57.973019 systemd[1]: Stopping systemd-udevd.service... Oct 29 00:41:57.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.977603 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 29 00:41:57.980173 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 29 00:41:57.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.980289 systemd[1]: Stopped network-cleanup.service. Oct 29 00:41:57.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.983447 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 29 00:41:57.983579 systemd[1]: Stopped systemd-udevd.service. Oct 29 00:41:57.999000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:57.984478 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 29 00:41:58.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:58.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.984515 systemd[1]: Closed systemd-udevd-control.socket. Oct 29 00:41:57.985731 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 29 00:41:58.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:57.985763 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 29 00:41:57.987138 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 29 00:41:57.987179 systemd[1]: Stopped dracut-pre-udev.service. Oct 29 00:41:57.988535 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 29 00:41:57.988572 systemd[1]: Stopped dracut-cmdline.service. Oct 29 00:41:57.989974 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 29 00:41:57.990013 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 29 00:41:57.991838 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 29 00:41:57.992616 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 29 00:41:57.992668 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 29 00:41:57.994666 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 29 00:41:57.994708 systemd[1]: Stopped kmod-static-nodes.service. Oct 29 00:41:57.996458 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 29 00:41:57.996495 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 29 00:41:57.998631 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 29 00:41:57.999028 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 29 00:41:57.999111 systemd[1]: Stopped sysroot-boot.service. Oct 29 00:41:58.000055 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 29 00:41:58.000127 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 29 00:41:58.022433 systemd-journald[290]: Received SIGTERM from PID 1 (n/a). Oct 29 00:41:58.022462 iscsid[748]: iscsid shutting down. Oct 29 00:41:58.001262 systemd[1]: Reached target initrd-switch-root.target. Oct 29 00:41:58.002762 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 29 00:41:58.002807 systemd[1]: Stopped initrd-setup-root.service. Oct 29 00:41:58.004652 systemd[1]: Starting initrd-switch-root.service... Oct 29 00:41:58.010421 systemd[1]: Switching root. Oct 29 00:41:58.026434 systemd-journald[290]: Journal stopped Oct 29 00:42:00.032281 kernel: SELinux: Class mctp_socket not defined in policy. Oct 29 00:42:00.032342 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 29 00:42:00.032360 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 29 00:42:00.032371 kernel: SELinux: policy capability network_peer_controls=1 Oct 29 00:42:00.032381 kernel: SELinux: policy capability open_perms=1 Oct 29 00:42:00.032390 kernel: SELinux: policy capability extended_socket_class=1 Oct 29 00:42:00.032404 kernel: SELinux: policy capability always_check_network=0 Oct 29 00:42:00.032417 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 29 00:42:00.032426 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 29 00:42:00.032435 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 29 00:42:00.032446 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 29 00:42:00.032456 kernel: kauditd_printk_skb: 65 callbacks suppressed Oct 29 00:42:00.032470 kernel: audit: type=1403 audit(1761698518.077:76): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 00:42:00.032481 systemd[1]: Successfully loaded SELinux policy in 34.099ms. Oct 29 00:42:00.032498 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.905ms. Oct 29 00:42:00.032509 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 29 00:42:00.032520 systemd[1]: Detected virtualization kvm. Oct 29 00:42:00.032530 systemd[1]: Detected architecture arm64. Oct 29 00:42:00.032542 systemd[1]: Detected first boot. Oct 29 00:42:00.032552 systemd[1]: Initializing machine ID from VM UUID. 
Oct 29 00:42:00.032564 kernel: audit: type=1400 audit(1761698518.182:77): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 00:42:00.032577 kernel: audit: type=1400 audit(1761698518.182:78): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 00:42:00.032587 kernel: audit: type=1334 audit(1761698518.184:79): prog-id=10 op=LOAD Oct 29 00:42:00.032598 kernel: audit: type=1334 audit(1761698518.184:80): prog-id=10 op=UNLOAD Oct 29 00:42:00.032607 kernel: audit: type=1334 audit(1761698518.186:81): prog-id=11 op=LOAD Oct 29 00:42:00.032620 kernel: audit: type=1334 audit(1761698518.186:82): prog-id=11 op=UNLOAD Oct 29 00:42:00.032630 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Oct 29 00:42:00.032641 kernel: audit: type=1400 audit(1761698518.223:83): avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Oct 29 00:42:00.032652 kernel: audit: type=1300 audit(1761698518.223:83): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:42:00.032663 kernel: audit: type=1327 audit(1761698518.223:83): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Oct 29 00:42:00.032675 systemd[1]: Populated /etc with preset unit settings. Oct 29 00:42:00.032686 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:42:00.032696 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:42:00.032707 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:42:00.032718 systemd[1]: iscsid.service: Deactivated successfully. Oct 29 00:42:00.032728 systemd[1]: Stopped iscsid.service. Oct 29 00:42:00.032752 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 29 00:42:00.032763 systemd[1]: Stopped initrd-switch-root.service. Oct 29 00:42:00.032774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 29 00:42:00.032784 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 29 00:42:00.032794 systemd[1]: Created slice system-addon\x2drun.slice. Oct 29 00:42:00.032804 systemd[1]: Created slice system-getty.slice. Oct 29 00:42:00.032815 systemd[1]: Created slice system-modprobe.slice. Oct 29 00:42:00.032826 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 29 00:42:00.032837 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 29 00:42:00.032848 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 29 00:42:00.032858 systemd[1]: Created slice user.slice. 
Oct 29 00:42:00.032868 systemd[1]: Started systemd-ask-password-console.path. Oct 29 00:42:00.032878 systemd[1]: Started systemd-ask-password-wall.path. Oct 29 00:42:00.032888 systemd[1]: Set up automount boot.automount. Oct 29 00:42:00.032899 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 29 00:42:00.032909 systemd[1]: Stopped target initrd-switch-root.target. Oct 29 00:42:00.032919 systemd[1]: Stopped target initrd-fs.target. Oct 29 00:42:00.032931 systemd[1]: Stopped target initrd-root-fs.target. Oct 29 00:42:00.032941 systemd[1]: Reached target integritysetup.target. Oct 29 00:42:00.032953 systemd[1]: Reached target remote-cryptsetup.target. Oct 29 00:42:00.032963 systemd[1]: Reached target remote-fs.target. Oct 29 00:42:00.032973 systemd[1]: Reached target slices.target. Oct 29 00:42:00.032983 systemd[1]: Reached target swap.target. Oct 29 00:42:00.032993 systemd[1]: Reached target torcx.target. Oct 29 00:42:00.033003 systemd[1]: Reached target veritysetup.target. Oct 29 00:42:00.033013 systemd[1]: Listening on systemd-coredump.socket. Oct 29 00:42:00.033024 systemd[1]: Listening on systemd-initctl.socket. Oct 29 00:42:00.033036 systemd[1]: Listening on systemd-networkd.socket. Oct 29 00:42:00.033046 systemd[1]: Listening on systemd-udevd-control.socket. Oct 29 00:42:00.033056 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 29 00:42:00.033066 systemd[1]: Listening on systemd-userdbd.socket. Oct 29 00:42:00.033077 systemd[1]: Mounting dev-hugepages.mount... Oct 29 00:42:00.033087 systemd[1]: Mounting dev-mqueue.mount... Oct 29 00:42:00.033097 systemd[1]: Mounting media.mount... Oct 29 00:42:00.033107 systemd[1]: Mounting sys-kernel-debug.mount... Oct 29 00:42:00.033118 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 29 00:42:00.033128 systemd[1]: Mounting tmp.mount... Oct 29 00:42:00.033139 systemd[1]: Starting flatcar-tmpfiles.service... 
Oct 29 00:42:00.033149 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:42:00.033159 systemd[1]: Starting kmod-static-nodes.service... Oct 29 00:42:00.033170 systemd[1]: Starting modprobe@configfs.service... Oct 29 00:42:00.033183 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:42:00.033202 systemd[1]: Starting modprobe@drm.service... Oct 29 00:42:00.033214 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:42:00.033223 systemd[1]: Starting modprobe@fuse.service... Oct 29 00:42:00.033235 systemd[1]: Starting modprobe@loop.service... Oct 29 00:42:00.033333 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 29 00:42:00.033357 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 29 00:42:00.033372 systemd[1]: Stopped systemd-fsck-root.service. Oct 29 00:42:00.033382 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 29 00:42:00.033392 kernel: fuse: init (API version 7.34) Oct 29 00:42:00.033405 systemd[1]: Stopped systemd-fsck-usr.service. Oct 29 00:42:00.033417 systemd[1]: Stopped systemd-journald.service. Oct 29 00:42:00.033427 kernel: loop: module loaded Oct 29 00:42:00.033437 systemd[1]: Starting systemd-journald.service... Oct 29 00:42:00.033447 systemd[1]: Starting systemd-modules-load.service... Oct 29 00:42:00.033458 systemd[1]: Starting systemd-network-generator.service... Oct 29 00:42:00.033469 systemd[1]: Starting systemd-remount-fs.service... Oct 29 00:42:00.033479 systemd[1]: Starting systemd-udev-trigger.service... Oct 29 00:42:00.033489 systemd[1]: verity-setup.service: Deactivated successfully. Oct 29 00:42:00.033500 systemd[1]: Stopped verity-setup.service. Oct 29 00:42:00.033511 systemd[1]: Mounted dev-hugepages.mount. 
Oct 29 00:42:00.033524 systemd-journald[1006]: Journal started Oct 29 00:42:00.033569 systemd-journald[1006]: Runtime Journal (/run/log/journal/eb71c354584046679b779154a7105ef6) is 6.0M, max 48.7M, 42.6M free. Oct 29 00:41:58.077000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 29 00:41:58.182000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 00:41:58.182000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 29 00:41:58.184000 audit: BPF prog-id=10 op=LOAD Oct 29 00:41:58.184000 audit: BPF prog-id=10 op=UNLOAD Oct 29 00:41:58.186000 audit: BPF prog-id=11 op=LOAD Oct 29 00:41:58.186000 audit: BPF prog-id=11 op=UNLOAD Oct 29 00:41:58.223000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Oct 29 00:41:58.223000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:41:58.223000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Oct 29 00:41:58.225000 audit[932]: AVC avc: denied { associate } for 
pid=932 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Oct 29 00:41:58.225000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:41:58.225000 audit: CWD cwd="/" Oct 29 00:41:58.225000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 00:41:58.225000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 29 00:41:58.225000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Oct 29 00:41:59.915000 audit: BPF prog-id=12 op=LOAD Oct 29 00:41:59.915000 audit: BPF prog-id=3 op=UNLOAD Oct 29 00:41:59.915000 audit: BPF prog-id=13 op=LOAD Oct 29 00:41:59.915000 audit: BPF prog-id=14 op=LOAD Oct 29 00:41:59.915000 audit: BPF prog-id=4 op=UNLOAD Oct 29 00:41:59.915000 audit: BPF prog-id=5 op=UNLOAD Oct 29 00:41:59.916000 audit: BPF prog-id=15 op=LOAD Oct 29 00:41:59.916000 audit: BPF prog-id=12 op=UNLOAD Oct 29 00:41:59.916000 audit: BPF prog-id=16 op=LOAD Oct 29 00:41:59.916000 audit: BPF prog-id=17 op=LOAD Oct 29 00:41:59.916000 audit: BPF prog-id=13 op=UNLOAD Oct 29 00:41:59.916000 audit: BPF prog-id=14 op=UNLOAD Oct 29 00:41:59.917000 audit: BPF 
prog-id=18 op=LOAD Oct 29 00:41:59.917000 audit: BPF prog-id=15 op=UNLOAD Oct 29 00:41:59.917000 audit: BPF prog-id=19 op=LOAD Oct 29 00:41:59.917000 audit: BPF prog-id=20 op=LOAD Oct 29 00:41:59.917000 audit: BPF prog-id=16 op=UNLOAD Oct 29 00:41:59.917000 audit: BPF prog-id=17 op=UNLOAD Oct 29 00:41:59.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:59.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:59.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:59.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:41:59.925000 audit: BPF prog-id=18 op=UNLOAD Oct 29 00:42:00.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Oct 29 00:42:00.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.012000 audit: BPF prog-id=21 op=LOAD Oct 29 00:42:00.013000 audit: BPF prog-id=22 op=LOAD Oct 29 00:42:00.013000 audit: BPF prog-id=23 op=LOAD Oct 29 00:42:00.013000 audit: BPF prog-id=19 op=UNLOAD Oct 29 00:42:00.013000 audit: BPF prog-id=20 op=UNLOAD Oct 29 00:42:00.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.031000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 29 00:42:00.031000 audit[1006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd766c780 a2=4000 a3=1 items=0 ppid=1 pid=1006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:42:00.031000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 29 00:41:58.222219 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:41:59.914121 systemd[1]: Queued start job for default target multi-user.target. 
Oct 29 00:41:58.222487 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 29 00:41:59.914133 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 29 00:41:58.222506 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 29 00:41:59.917941 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 29 00:41:58.222536 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 29 00:41:58.222545 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 29 00:41:58.222572 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 29 00:41:58.222584 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 29 00:41:58.222775 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 29 00:41:58.222809 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 29 00:41:58.222821 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 29 00:41:58.223434 
/usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 29 00:42:00.035784 systemd[1]: Started systemd-journald.service. Oct 29 00:41:58.223469 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 29 00:41:58.223488 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.8: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.8 Oct 29 00:42:00.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:41:58.223502 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 29 00:41:58.223519 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.8: no such file or directory" path=/var/lib/torcx/store/3510.3.8 Oct 29 00:41:58.223532 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:58Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 29 00:41:59.639529 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 00:41:59.639779 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 00:41:59.639883 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 00:41:59.640036 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants 
/lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 29 00:42:00.036386 systemd[1]: Mounted dev-mqueue.mount. Oct 29 00:41:59.640085 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 29 00:41:59.640138 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-10-29T00:41:59Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 29 00:42:00.037154 systemd[1]: Mounted media.mount. Oct 29 00:42:00.037924 systemd[1]: Mounted sys-kernel-debug.mount. Oct 29 00:42:00.038758 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 29 00:42:00.039614 systemd[1]: Mounted tmp.mount. Oct 29 00:42:00.040474 systemd[1]: Finished flatcar-tmpfiles.service. Oct 29 00:42:00.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.041497 systemd[1]: Finished kmod-static-nodes.service. Oct 29 00:42:00.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.042455 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 29 00:42:00.042611 systemd[1]: Finished modprobe@configfs.service. Oct 29 00:42:00.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:42:00.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.043586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:42:00.043755 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:42:00.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.044754 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 29 00:42:00.044938 systemd[1]: Finished modprobe@drm.service. Oct 29 00:42:00.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.045968 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:42:00.046138 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:42:00.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:42:00.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.047240 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 29 00:42:00.047397 systemd[1]: Finished modprobe@fuse.service. Oct 29 00:42:00.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.048311 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:42:00.048478 systemd[1]: Finished modprobe@loop.service. Oct 29 00:42:00.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.049590 systemd[1]: Finished systemd-modules-load.service. Oct 29 00:42:00.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.050663 systemd[1]: Finished systemd-network-generator.service. 
Oct 29 00:42:00.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.051750 systemd[1]: Finished systemd-remount-fs.service. Oct 29 00:42:00.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.053080 systemd[1]: Reached target network-pre.target. Oct 29 00:42:00.054983 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 29 00:42:00.056960 systemd[1]: Mounting sys-kernel-config.mount... Oct 29 00:42:00.057669 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 29 00:42:00.059034 systemd[1]: Starting systemd-hwdb-update.service... Oct 29 00:42:00.061044 systemd[1]: Starting systemd-journal-flush.service... Oct 29 00:42:00.062018 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:42:00.063088 systemd[1]: Starting systemd-random-seed.service... Oct 29 00:42:00.064048 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 00:42:00.065157 systemd[1]: Starting systemd-sysctl.service... Oct 29 00:42:00.067383 systemd[1]: Starting systemd-sysusers.service... Oct 29 00:42:00.070645 systemd-journald[1006]: Time spent on flushing to /var/log/journal/eb71c354584046679b779154a7105ef6 is 15.441ms for 1006 entries. Oct 29 00:42:00.070645 systemd-journald[1006]: System Journal (/var/log/journal/eb71c354584046679b779154a7105ef6) is 8.0M, max 195.6M, 187.6M free. Oct 29 00:42:00.099980 systemd-journald[1006]: Received client request to flush runtime journal. 
Oct 29 00:42:00.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.071393 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 29 00:42:00.100455 udevadm[1034]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 29 00:42:00.072910 systemd[1]: Mounted sys-kernel-config.mount. Oct 29 00:42:00.074299 systemd[1]: Finished systemd-random-seed.service. Oct 29 00:42:00.075398 systemd[1]: Reached target first-boot-complete.target. Oct 29 00:42:00.077370 systemd[1]: Finished systemd-udev-trigger.service. Oct 29 00:42:00.080477 systemd[1]: Starting systemd-udev-settle.service... Oct 29 00:42:00.086595 systemd[1]: Finished systemd-sysctl.service. Oct 29 00:42:00.100945 systemd[1]: Finished systemd-journal-flush.service. Oct 29 00:42:00.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.106866 systemd[1]: Finished systemd-sysusers.service. Oct 29 00:42:00.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 29 00:42:00.108778 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 29 00:42:00.124404 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 29 00:42:00.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.435000 audit: BPF prog-id=24 op=LOAD Oct 29 00:42:00.435000 audit: BPF prog-id=25 op=LOAD Oct 29 00:42:00.435000 audit: BPF prog-id=7 op=UNLOAD Oct 29 00:42:00.435000 audit: BPF prog-id=8 op=UNLOAD Oct 29 00:42:00.434014 systemd[1]: Finished systemd-hwdb-update.service. Oct 29 00:42:00.436241 systemd[1]: Starting systemd-udevd.service... Oct 29 00:42:00.451788 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Oct 29 00:42:00.466409 systemd[1]: Started systemd-udevd.service. Oct 29 00:42:00.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.468000 audit: BPF prog-id=26 op=LOAD Oct 29 00:42:00.470775 systemd[1]: Starting systemd-networkd.service... Oct 29 00:42:00.480000 audit: BPF prog-id=27 op=LOAD Oct 29 00:42:00.480000 audit: BPF prog-id=28 op=LOAD Oct 29 00:42:00.480000 audit: BPF prog-id=29 op=LOAD Oct 29 00:42:00.481499 systemd[1]: Starting systemd-userdbd.service... Oct 29 00:42:00.490944 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 29 00:42:00.505793 systemd[1]: Started systemd-userdbd.service. 
Oct 29 00:42:00.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.523853 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 29 00:42:00.559741 systemd-networkd[1047]: lo: Link UP Oct 29 00:42:00.559750 systemd-networkd[1047]: lo: Gained carrier Oct 29 00:42:00.560111 systemd-networkd[1047]: Enumeration completed Oct 29 00:42:00.560228 systemd-networkd[1047]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 29 00:42:00.560248 systemd[1]: Started systemd-networkd.service. Oct 29 00:42:00.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.562006 systemd-networkd[1047]: eth0: Link UP Oct 29 00:42:00.562018 systemd-networkd[1047]: eth0: Gained carrier Oct 29 00:42:00.583350 systemd-networkd[1047]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 29 00:42:00.585621 systemd[1]: Finished systemd-udev-settle.service. Oct 29 00:42:00.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.587685 systemd[1]: Starting lvm2-activation-early.service... Oct 29 00:42:00.596574 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 00:42:00.626083 systemd[1]: Finished lvm2-activation-early.service. Oct 29 00:42:00.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:42:00.627118 systemd[1]: Reached target cryptsetup.target. Oct 29 00:42:00.629128 systemd[1]: Starting lvm2-activation.service... Oct 29 00:42:00.632794 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 29 00:42:00.666150 systemd[1]: Finished lvm2-activation.service. Oct 29 00:42:00.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.667078 systemd[1]: Reached target local-fs-pre.target. Oct 29 00:42:00.667884 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 29 00:42:00.667915 systemd[1]: Reached target local-fs.target. Oct 29 00:42:00.668629 systemd[1]: Reached target machines.target. Oct 29 00:42:00.670479 systemd[1]: Starting ldconfig.service... Oct 29 00:42:00.671497 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:42:00.671549 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:42:00.672726 systemd[1]: Starting systemd-boot-update.service... Oct 29 00:42:00.674798 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 29 00:42:00.676961 systemd[1]: Starting systemd-machine-id-commit.service... Oct 29 00:42:00.679046 systemd[1]: Starting systemd-sysext.service... Oct 29 00:42:00.680146 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Oct 29 00:42:00.681448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 29 00:42:00.683910 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
Oct 29 00:42:00.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.692797 systemd[1]: Unmounting usr-share-oem.mount... Oct 29 00:42:00.698839 systemd[1]: usr-share-oem.mount: Deactivated successfully. Oct 29 00:42:00.699019 systemd[1]: Unmounted usr-share-oem.mount. Oct 29 00:42:00.744226 kernel: loop0: detected capacity change from 0 to 200800 Oct 29 00:42:00.749729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 29 00:42:00.750864 systemd[1]: Finished systemd-machine-id-commit.service. Oct 29 00:42:00.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.757205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 29 00:42:00.774217 kernel: loop1: detected capacity change from 0 to 200800 Oct 29 00:42:00.778309 (sd-sysext)[1087]: Using extensions 'kubernetes'. Oct 29 00:42:00.778644 (sd-sysext)[1087]: Merged extensions into '/usr'. Oct 29 00:42:00.780837 systemd-fsck[1084]: fsck.fat 4.2 (2021-01-31) Oct 29 00:42:00.780837 systemd-fsck[1084]: /dev/vda1: 236 files, 117310/258078 clusters Oct 29 00:42:00.790856 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 29 00:42:00.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.803472 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Oct 29 00:42:00.805177 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:42:00.807184 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:42:00.809444 systemd[1]: Starting modprobe@loop.service... Oct 29 00:42:00.810288 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:42:00.810477 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:42:00.811526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:42:00.811716 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:42:00.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.813062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:42:00.813226 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:42:00.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.814890 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:42:00.815039 systemd[1]: Finished modprobe@loop.service. 
Oct 29 00:42:00.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:00.816488 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 29 00:42:00.816639 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 29 00:42:00.855573 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 29 00:42:00.859235 systemd[1]: Finished ldconfig.service. Oct 29 00:42:00.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.034322 systemd[1]: Mounting boot.mount... Oct 29 00:42:01.036168 systemd[1]: Mounting usr-share-oem.mount... Oct 29 00:42:01.040765 systemd[1]: Mounted usr-share-oem.mount. Oct 29 00:42:01.043263 systemd[1]: Finished systemd-sysext.service. Oct 29 00:42:01.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.044133 systemd[1]: Mounted boot.mount. Oct 29 00:42:01.046625 systemd[1]: Starting ensure-sysext.service... Oct 29 00:42:01.048488 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 29 00:42:01.053400 systemd[1]: Finished systemd-boot-update.service. 
Oct 29 00:42:01.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.054384 systemd[1]: Reloading. Oct 29 00:42:01.058086 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 29 00:42:01.059318 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 29 00:42:01.060740 systemd-tmpfiles[1095]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 29 00:42:01.094319 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-10-29T00:42:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:42:01.094355 /usr/lib/systemd/system-generators/torcx-generator[1115]: time="2025-10-29T00:42:01Z" level=info msg="torcx already run" Oct 29 00:42:01.159593 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:42:01.159613 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:42:01.177752 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 29 00:42:01.228000 audit: BPF prog-id=30 op=LOAD Oct 29 00:42:01.228000 audit: BPF prog-id=31 op=LOAD Oct 29 00:42:01.228000 audit: BPF prog-id=24 op=UNLOAD Oct 29 00:42:01.228000 audit: BPF prog-id=25 op=UNLOAD Oct 29 00:42:01.228000 audit: BPF prog-id=32 op=LOAD Oct 29 00:42:01.228000 audit: BPF prog-id=26 op=UNLOAD Oct 29 00:42:01.229000 audit: BPF prog-id=33 op=LOAD Oct 29 00:42:01.229000 audit: BPF prog-id=27 op=UNLOAD Oct 29 00:42:01.230000 audit: BPF prog-id=34 op=LOAD Oct 29 00:42:01.230000 audit: BPF prog-id=35 op=LOAD Oct 29 00:42:01.230000 audit: BPF prog-id=28 op=UNLOAD Oct 29 00:42:01.230000 audit: BPF prog-id=29 op=UNLOAD Oct 29 00:42:01.230000 audit: BPF prog-id=36 op=LOAD Oct 29 00:42:01.230000 audit: BPF prog-id=21 op=UNLOAD Oct 29 00:42:01.230000 audit: BPF prog-id=37 op=LOAD Oct 29 00:42:01.230000 audit: BPF prog-id=38 op=LOAD Oct 29 00:42:01.230000 audit: BPF prog-id=22 op=UNLOAD Oct 29 00:42:01.230000 audit: BPF prog-id=23 op=UNLOAD Oct 29 00:42:01.240239 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 29 00:42:01.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.243613 systemd[1]: Starting audit-rules.service... Oct 29 00:42:01.245382 systemd[1]: Starting clean-ca-certificates.service... Oct 29 00:42:01.247799 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 29 00:42:01.249000 audit: BPF prog-id=39 op=LOAD Oct 29 00:42:01.251000 audit: BPF prog-id=40 op=LOAD Oct 29 00:42:01.250095 systemd[1]: Starting systemd-resolved.service... Oct 29 00:42:01.252296 systemd[1]: Starting systemd-timesyncd.service... Oct 29 00:42:01.254469 systemd[1]: Starting systemd-update-utmp.service... Oct 29 00:42:01.255759 systemd[1]: Finished clean-ca-certificates.service. 
Oct 29 00:42:01.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.259002 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:42:01.261000 audit[1165]: SYSTEM_BOOT pid=1165 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.263634 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 29 00:42:01.265582 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:42:01.267969 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:42:01.269928 systemd[1]: Starting modprobe@loop.service... Oct 29 00:42:01.270724 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:42:01.270914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:42:01.271074 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:42:01.272395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:42:01.272523 systemd[1]: Finished modprobe@dm_mod.service. Oct 29 00:42:01.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 29 00:42:01.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.273966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 29 00:42:01.274155 systemd[1]: Finished modprobe@efi_pstore.service. Oct 29 00:42:01.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.275600 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 29 00:42:01.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.277084 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 29 00:42:01.277226 systemd[1]: Finished modprobe@loop.service. Oct 29 00:42:01.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.282389 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
Oct 29 00:42:01.283638 systemd[1]: Starting modprobe@dm_mod.service... Oct 29 00:42:01.285623 systemd[1]: Starting modprobe@efi_pstore.service... Oct 29 00:42:01.287463 systemd[1]: Starting modprobe@loop.service... Oct 29 00:42:01.288135 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 29 00:42:01.288315 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 29 00:42:01.289606 systemd[1]: Starting systemd-update-done.service... Oct 29 00:42:01.290429 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 29 00:42:01.291540 systemd[1]: Finished systemd-update-utmp.service. Oct 29 00:42:01.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 29 00:42:01.295000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 29 00:42:01.295000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd1fb8de0 a2=420 a3=0 items=0 ppid=1154 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 29 00:42:01.295000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 29 00:42:01.296144 augenrules[1177]: No rules Oct 29 00:42:01.296380 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 29 00:42:01.296540 systemd[1]: Finished modprobe@dm_mod.service. 
Oct 29 00:42:01.297811 systemd[1]: Finished audit-rules.service.
Oct 29 00:42:01.298971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:42:01.299083 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 29 00:42:01.300325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:42:01.300448 systemd[1]: Finished modprobe@loop.service.
Oct 29 00:42:01.301611 systemd[1]: Finished systemd-update-done.service.
Oct 29 00:42:01.306124 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Oct 29 00:42:01.307598 systemd[1]: Starting modprobe@dm_mod.service...
Oct 29 00:42:01.309696 systemd-resolved[1160]: Positive Trust Anchors:
Oct 29 00:42:01.309705 systemd-resolved[1160]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 29 00:42:01.309732 systemd-resolved[1160]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 29 00:42:01.309769 systemd[1]: Starting modprobe@drm.service...
Oct 29 00:42:01.311530 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 29 00:42:01.311829 systemd[1]: Starting modprobe@efi_pstore.service...
Oct 29 00:42:01.311991 systemd-timesyncd[1161]: Initial clock synchronization to Wed 2025-10-29 00:42:01.036615 UTC.
Oct 29 00:42:01.313728 systemd[1]: Starting modprobe@loop.service...
Oct 29 00:42:01.314636 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Oct 29 00:42:01.314770 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 29 00:42:01.316004 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 29 00:42:01.317229 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 29 00:42:01.318430 systemd[1]: Started systemd-timesyncd.service.
Oct 29 00:42:01.320051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 29 00:42:01.320183 systemd[1]: Finished modprobe@dm_mod.service.
Oct 29 00:42:01.321515 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 29 00:42:01.321636 systemd-resolved[1160]: Defaulting to hostname 'linux'.
Oct 29 00:42:01.321647 systemd[1]: Finished modprobe@drm.service.
Oct 29 00:42:01.322878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 29 00:42:01.322996 systemd[1]: Finished modprobe@efi_pstore.service.
Oct 29 00:42:01.324065 systemd[1]: Started systemd-resolved.service.
Oct 29 00:42:01.325229 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 29 00:42:01.325370 systemd[1]: Finished modprobe@loop.service.
Oct 29 00:42:01.326781 systemd[1]: Reached target network.target.
Oct 29 00:42:01.327529 systemd[1]: Reached target nss-lookup.target.
Oct 29 00:42:01.328256 systemd[1]: Reached target time-set.target.
Oct 29 00:42:01.328946 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 29 00:42:01.328983 systemd[1]: Reached target sysinit.target.
Oct 29 00:42:01.329782 systemd[1]: Started motdgen.path.
Oct 29 00:42:01.330439 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Oct 29 00:42:01.331579 systemd[1]: Started logrotate.timer.
Oct 29 00:42:01.332305 systemd[1]: Started mdadm.timer.
Oct 29 00:42:01.332923 systemd[1]: Started systemd-tmpfiles-clean.timer.
Oct 29 00:42:01.333848 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 29 00:42:01.333885 systemd[1]: Reached target paths.target.
Oct 29 00:42:01.334569 systemd[1]: Reached target timers.target.
Oct 29 00:42:01.335591 systemd[1]: Listening on dbus.socket.
Oct 29 00:42:01.337426 systemd[1]: Starting docker.socket...
Oct 29 00:42:01.340596 systemd[1]: Listening on sshd.socket.
Oct 29 00:42:01.341396 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 29 00:42:01.341453 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Oct 29 00:42:01.342042 systemd[1]: Finished ensure-sysext.service.
Oct 29 00:42:01.342946 systemd[1]: Listening on docker.socket.
Oct 29 00:42:01.344448 systemd[1]: Reached target sockets.target.
Oct 29 00:42:01.345122 systemd[1]: Reached target basic.target.
Oct 29 00:42:01.345919 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 29 00:42:01.345950 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Oct 29 00:42:01.346951 systemd[1]: Starting containerd.service...
Oct 29 00:42:01.348748 systemd[1]: Starting dbus.service...
Oct 29 00:42:01.350501 systemd[1]: Starting enable-oem-cloudinit.service...
Oct 29 00:42:01.352745 systemd[1]: Starting extend-filesystems.service...
Oct 29 00:42:01.353726 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Oct 29 00:42:01.354830 systemd[1]: Starting motdgen.service...
Oct 29 00:42:01.356054 jq[1197]: false
Oct 29 00:42:01.356662 systemd[1]: Starting prepare-helm.service...
Oct 29 00:42:01.358522 systemd[1]: Starting ssh-key-proc-cmdline.service...
Oct 29 00:42:01.360285 systemd[1]: Starting sshd-keygen.service...
Oct 29 00:42:01.363607 systemd[1]: Starting systemd-logind.service...
Oct 29 00:42:01.364434 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 29 00:42:01.364511 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 29 00:42:01.365791 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 29 00:42:01.366613 systemd[1]: Starting update-engine.service...
Oct 29 00:42:01.368530 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Oct 29 00:42:01.371486 jq[1215]: true
Oct 29 00:42:01.371350 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 29 00:42:01.371559 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Oct 29 00:42:01.372792 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 29 00:42:01.372966 systemd[1]: Finished ssh-key-proc-cmdline.service.
Oct 29 00:42:01.374439 systemd[1]: motdgen.service: Deactivated successfully.
Oct 29 00:42:01.374593 systemd[1]: Finished motdgen.service.
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found loop1
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda1
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda2
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda3
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found usr
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda4
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda6
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda7
Oct 29 00:42:01.382081 extend-filesystems[1198]: Found vda9
Oct 29 00:42:01.382081 extend-filesystems[1198]: Checking size of /dev/vda9
Oct 29 00:42:01.408828 extend-filesystems[1198]: Resized partition /dev/vda9
Oct 29 00:42:01.412181 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Oct 29 00:42:01.412248 jq[1219]: true
Oct 29 00:42:01.388151 systemd[1]: Started dbus.service.
Oct 29 00:42:01.387957 dbus-daemon[1196]: [system] SELinux support is enabled
Oct 29 00:42:01.412659 tar[1218]: linux-arm64/LICENSE
Oct 29 00:42:01.412659 tar[1218]: linux-arm64/helm
Oct 29 00:42:01.412814 extend-filesystems[1239]: resize2fs 1.46.5 (30-Dec-2021)
Oct 29 00:42:01.391127 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 29 00:42:01.391151 systemd[1]: Reached target system-config.target.
Oct 29 00:42:01.392059 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 29 00:42:01.392073 systemd[1]: Reached target user-config.target.
Oct 29 00:42:01.435283 update_engine[1213]: I1029 00:42:01.434286 1213 main.cc:92] Flatcar Update Engine starting
Oct 29 00:42:01.437669 systemd[1]: Started update-engine.service.
Oct 29 00:42:01.437806 update_engine[1213]: I1029 00:42:01.437670 1213 update_check_scheduler.cc:74] Next update check in 10m15s
Oct 29 00:42:01.439223 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Oct 29 00:42:01.440893 systemd[1]: Started locksmithd.service.
Oct 29 00:42:01.455145 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 29 00:42:01.455940 extend-filesystems[1239]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 29 00:42:01.455940 extend-filesystems[1239]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 29 00:42:01.455940 extend-filesystems[1239]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Oct 29 00:42:01.461324 extend-filesystems[1198]: Resized filesystem in /dev/vda9
Oct 29 00:42:01.456250 systemd-logind[1208]: New seat seat0.
Oct 29 00:42:01.456277 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 29 00:42:01.456480 systemd[1]: Finished extend-filesystems.service.
Oct 29 00:42:01.465411 bash[1246]: Updated "/home/core/.ssh/authorized_keys"
Oct 29 00:42:01.466586 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Oct 29 00:42:01.467868 systemd[1]: Started systemd-logind.service.
Oct 29 00:42:01.470843 env[1220]: time="2025-10-29T00:42:01.470770400Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Oct 29 00:42:01.505616 env[1220]: time="2025-10-29T00:42:01.505562640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 29 00:42:01.505748 env[1220]: time="2025-10-29T00:42:01.505727960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507004 env[1220]: time="2025-10-29T00:42:01.506961440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507004 env[1220]: time="2025-10-29T00:42:01.506999840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507246 env[1220]: time="2025-10-29T00:42:01.507222760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507246 env[1220]: time="2025-10-29T00:42:01.507244240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507327 env[1220]: time="2025-10-29T00:42:01.507258920Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 29 00:42:01.507327 env[1220]: time="2025-10-29T00:42:01.507269440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507382 env[1220]: time="2025-10-29T00:42:01.507349640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507713 env[1220]: time="2025-10-29T00:42:01.507686000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507833 env[1220]: time="2025-10-29T00:42:01.507811160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 29 00:42:01.507833 env[1220]: time="2025-10-29T00:42:01.507830640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 29 00:42:01.507894 env[1220]: time="2025-10-29T00:42:01.507882800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 29 00:42:01.507921 env[1220]: time="2025-10-29T00:42:01.507894960Z" level=info msg="metadata content store policy set" policy=shared
Oct 29 00:42:01.511455 env[1220]: time="2025-10-29T00:42:01.511422800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 29 00:42:01.511522 env[1220]: time="2025-10-29T00:42:01.511469360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 29 00:42:01.511522 env[1220]: time="2025-10-29T00:42:01.511484000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 29 00:42:01.511522 env[1220]: time="2025-10-29T00:42:01.511511280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511597 env[1220]: time="2025-10-29T00:42:01.511533000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511597 env[1220]: time="2025-10-29T00:42:01.511554720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511597 env[1220]: time="2025-10-29T00:42:01.511568160Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511942 env[1220]: time="2025-10-29T00:42:01.511920480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511977 env[1220]: time="2025-10-29T00:42:01.511943320Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.511977 env[1220]: time="2025-10-29T00:42:01.511957240Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.512015 env[1220]: time="2025-10-29T00:42:01.511978760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.512015 env[1220]: time="2025-10-29T00:42:01.511993080Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 29 00:42:01.512136 env[1220]: time="2025-10-29T00:42:01.512114760Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 29 00:42:01.512260 env[1220]: time="2025-10-29T00:42:01.512241120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 29 00:42:01.512521 env[1220]: time="2025-10-29T00:42:01.512500720Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 29 00:42:01.512557 env[1220]: time="2025-10-29T00:42:01.512538520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512557 env[1220]: time="2025-10-29T00:42:01.512553640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 29 00:42:01.512741 env[1220]: time="2025-10-29T00:42:01.512723760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512776 env[1220]: time="2025-10-29T00:42:01.512741440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512776 env[1220]: time="2025-10-29T00:42:01.512754000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512776 env[1220]: time="2025-10-29T00:42:01.512765960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512832 env[1220]: time="2025-10-29T00:42:01.512785280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512832 env[1220]: time="2025-10-29T00:42:01.512799520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512832 env[1220]: time="2025-10-29T00:42:01.512810680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512832 env[1220]: time="2025-10-29T00:42:01.512823160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.512913 env[1220]: time="2025-10-29T00:42:01.512836240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 29 00:42:01.513004 env[1220]: time="2025-10-29T00:42:01.512984040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.513033 env[1220]: time="2025-10-29T00:42:01.513006960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.513060 env[1220]: time="2025-10-29T00:42:01.513031640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.513060 env[1220]: time="2025-10-29T00:42:01.513045000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 29 00:42:01.513101 env[1220]: time="2025-10-29T00:42:01.513059120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Oct 29 00:42:01.513101 env[1220]: time="2025-10-29T00:42:01.513070520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 29 00:42:01.513101 env[1220]: time="2025-10-29T00:42:01.513095760Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Oct 29 00:42:01.513157 env[1220]: time="2025-10-29T00:42:01.513130560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 29 00:42:01.513422 env[1220]: time="2025-10-29T00:42:01.513367240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 29 00:42:01.514225 env[1220]: time="2025-10-29T00:42:01.513434280Z" level=info msg="Connect containerd service"
Oct 29 00:42:01.514225 env[1220]: time="2025-10-29T00:42:01.513467560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 29 00:42:01.514299 env[1220]: time="2025-10-29T00:42:01.514274800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 29 00:42:01.514710 env[1220]: time="2025-10-29T00:42:01.514683400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 29 00:42:01.514765 env[1220]: time="2025-10-29T00:42:01.514749640Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 29 00:42:01.514887 systemd[1]: Started containerd.service.
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515007360Z" level=info msg="Start subscribing containerd event"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515062240Z" level=info msg="Start recovering state"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515123560Z" level=info msg="Start event monitor"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515144080Z" level=info msg="Start snapshots syncer"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515153520Z" level=info msg="Start cni network conf syncer for default"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515160800Z" level=info msg="Start streaming server"
Oct 29 00:42:01.516379 env[1220]: time="2025-10-29T00:42:01.515870920Z" level=info msg="containerd successfully booted in 0.062635s"
Oct 29 00:42:01.516942 locksmithd[1248]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 29 00:42:01.618345 systemd-networkd[1047]: eth0: Gained IPv6LL
Oct 29 00:42:01.620107 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 29 00:42:01.621437 systemd[1]: Reached target network-online.target.
Oct 29 00:42:01.623726 systemd[1]: Starting kubelet.service...
Oct 29 00:42:01.796950 tar[1218]: linux-arm64/README.md
Oct 29 00:42:01.801642 systemd[1]: Finished prepare-helm.service.
Oct 29 00:42:02.178309 systemd[1]: Started kubelet.service.
Oct 29 00:42:02.473725 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 29 00:42:02.488756 kubelet[1263]: E1029 00:42:02.488709 1263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 29 00:42:02.490403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 29 00:42:02.490527 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 29 00:42:02.491518 systemd[1]: Finished sshd-keygen.service.
Oct 29 00:42:02.493558 systemd[1]: Starting issuegen.service...
Oct 29 00:42:02.497858 systemd[1]: issuegen.service: Deactivated successfully.
Oct 29 00:42:02.497996 systemd[1]: Finished issuegen.service.
Oct 29 00:42:02.500046 systemd[1]: Starting systemd-user-sessions.service...
Oct 29 00:42:02.505656 systemd[1]: Finished systemd-user-sessions.service.
Oct 29 00:42:02.507607 systemd[1]: Started getty@tty1.service.
Oct 29 00:42:02.509406 systemd[1]: Started serial-getty@ttyAMA0.service.
Oct 29 00:42:02.510319 systemd[1]: Reached target getty.target.
Oct 29 00:42:02.511073 systemd[1]: Reached target multi-user.target.
Oct 29 00:42:02.512915 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Oct 29 00:42:02.519095 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 29 00:42:02.519259 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Oct 29 00:42:02.520136 systemd[1]: Startup finished in 541ms (kernel) + 4.488s (initrd) + 4.478s (userspace) = 9.508s.
Oct 29 00:42:06.428319 systemd[1]: Created slice system-sshd.slice.
Oct 29 00:42:06.429385 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:41432.service.
Oct 29 00:42:06.467040 sshd[1286]: Accepted publickey for core from 10.0.0.1 port 41432 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:42:06.468926 sshd[1286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.477358 systemd[1]: Created slice user-500.slice.
Oct 29 00:42:06.478461 systemd[1]: Starting user-runtime-dir@500.service...
Oct 29 00:42:06.480181 systemd-logind[1208]: New session 1 of user core.
Oct 29 00:42:06.486627 systemd[1]: Finished user-runtime-dir@500.service.
Oct 29 00:42:06.487936 systemd[1]: Starting user@500.service...
Oct 29 00:42:06.490775 (systemd)[1289]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.552444 systemd[1289]: Queued start job for default target default.target.
Oct 29 00:42:06.552942 systemd[1289]: Reached target paths.target.
Oct 29 00:42:06.552975 systemd[1289]: Reached target sockets.target.
Oct 29 00:42:06.552986 systemd[1289]: Reached target timers.target.
Oct 29 00:42:06.552996 systemd[1289]: Reached target basic.target.
Oct 29 00:42:06.553037 systemd[1289]: Reached target default.target.
Oct 29 00:42:06.553062 systemd[1289]: Startup finished in 56ms.
Oct 29 00:42:06.553132 systemd[1]: Started user@500.service.
Oct 29 00:42:06.554604 systemd[1]: Started session-1.scope.
Oct 29 00:42:06.604830 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:41444.service.
Oct 29 00:42:06.640712 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:42:06.642288 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.645791 systemd-logind[1208]: New session 2 of user core.
Oct 29 00:42:06.647139 systemd[1]: Started session-2.scope.
Oct 29 00:42:06.699434 sshd[1298]: pam_unix(sshd:session): session closed for user core
Oct 29 00:42:06.701898 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:41444.service: Deactivated successfully.
Oct 29 00:42:06.702496 systemd[1]: session-2.scope: Deactivated successfully.
Oct 29 00:42:06.702963 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit.
Oct 29 00:42:06.703972 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:41450.service.
Oct 29 00:42:06.704574 systemd-logind[1208]: Removed session 2.
Oct 29 00:42:06.737098 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:42:06.738268 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.741613 systemd-logind[1208]: New session 3 of user core.
Oct 29 00:42:06.742460 systemd[1]: Started session-3.scope.
Oct 29 00:42:06.790565 sshd[1304]: pam_unix(sshd:session): session closed for user core
Oct 29 00:42:06.793908 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:41450.service: Deactivated successfully.
Oct 29 00:42:06.794471 systemd[1]: session-3.scope: Deactivated successfully.
Oct 29 00:42:06.794964 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit.
Oct 29 00:42:06.795995 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:41454.service.
Oct 29 00:42:06.796642 systemd-logind[1208]: Removed session 3.
Oct 29 00:42:06.831715 sshd[1310]: Accepted publickey for core from 10.0.0.1 port 41454 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:42:06.833089 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.837301 systemd-logind[1208]: New session 4 of user core.
Oct 29 00:42:06.838706 systemd[1]: Started session-4.scope.
Oct 29 00:42:06.892269 sshd[1310]: pam_unix(sshd:session): session closed for user core
Oct 29 00:42:06.895244 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:41464.service.
Oct 29 00:42:06.897429 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:41454.service: Deactivated successfully.
Oct 29 00:42:06.898016 systemd[1]: session-4.scope: Deactivated successfully.
Oct 29 00:42:06.898535 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit.
Oct 29 00:42:06.899309 systemd-logind[1208]: Removed session 4.
Oct 29 00:42:06.930493 sshd[1315]: Accepted publickey for core from 10.0.0.1 port 41464 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:42:06.932064 sshd[1315]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:42:06.935136 systemd-logind[1208]: New session 5 of user core.
Oct 29 00:42:06.935957 systemd[1]: Started session-5.scope.
Oct 29 00:42:06.990313 sudo[1319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 29 00:42:06.991148 sudo[1319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Oct 29 00:42:07.027564 systemd[1]: Starting docker.service...
Oct 29 00:42:07.081467 env[1331]: time="2025-10-29T00:42:07.081412466Z" level=info msg="Starting up"
Oct 29 00:42:07.082790 env[1331]: time="2025-10-29T00:42:07.082764850Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 00:42:07.082790 env[1331]: time="2025-10-29T00:42:07.082786501Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 00:42:07.082879 env[1331]: time="2025-10-29T00:42:07.082805913Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 29 00:42:07.082879 env[1331]: time="2025-10-29T00:42:07.082815108Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 00:42:07.084880 env[1331]: time="2025-10-29T00:42:07.084858262Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Oct 29 00:42:07.084976 env[1331]: time="2025-10-29T00:42:07.084959211Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Oct 29 00:42:07.085048 env[1331]: time="2025-10-29T00:42:07.085033440Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Oct 29 00:42:07.085102 env[1331]: time="2025-10-29T00:42:07.085091204Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Oct 29 00:42:07.217356 env[1331]: time="2025-10-29T00:42:07.217320516Z" level=info msg="Loading containers: start."
Oct 29 00:42:07.332224 kernel: Initializing XFRM netlink socket
Oct 29 00:42:07.354713 env[1331]: time="2025-10-29T00:42:07.354671995Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Oct 29 00:42:07.404394 systemd-networkd[1047]: docker0: Link UP
Oct 29 00:42:07.425622 env[1331]: time="2025-10-29T00:42:07.425580174Z" level=info msg="Loading containers: done."
Oct 29 00:42:07.442065 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2146992638-merged.mount: Deactivated successfully. Oct 29 00:42:07.443243 env[1331]: time="2025-10-29T00:42:07.443211692Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 29 00:42:07.443513 env[1331]: time="2025-10-29T00:42:07.443491316Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Oct 29 00:42:07.443680 env[1331]: time="2025-10-29T00:42:07.443663312Z" level=info msg="Daemon has completed initialization" Oct 29 00:42:07.457315 systemd[1]: Started docker.service. Oct 29 00:42:07.464315 env[1331]: time="2025-10-29T00:42:07.464269515Z" level=info msg="API listen on /run/docker.sock" Oct 29 00:42:07.916631 env[1220]: time="2025-10-29T00:42:07.916590526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 29 00:42:08.470995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361482836.mount: Deactivated successfully. 
Oct 29 00:42:09.579305 env[1220]: time="2025-10-29T00:42:09.579257960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:09.580999 env[1220]: time="2025-10-29T00:42:09.580968765Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:09.583302 env[1220]: time="2025-10-29T00:42:09.583264097Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:09.584872 env[1220]: time="2025-10-29T00:42:09.584840657Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:09.585818 env[1220]: time="2025-10-29T00:42:09.585775711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Oct 29 00:42:09.586562 env[1220]: time="2025-10-29T00:42:09.586527076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 29 00:42:10.762511 env[1220]: time="2025-10-29T00:42:10.762458949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:10.764909 env[1220]: time="2025-10-29T00:42:10.764864178Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Oct 29 00:42:10.767839 env[1220]: time="2025-10-29T00:42:10.767799120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:10.769245 env[1220]: time="2025-10-29T00:42:10.769218604Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:10.770045 env[1220]: time="2025-10-29T00:42:10.769995049Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Oct 29 00:42:10.773762 env[1220]: time="2025-10-29T00:42:10.773723710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 29 00:42:11.843574 env[1220]: time="2025-10-29T00:42:11.843525806Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:11.844842 env[1220]: time="2025-10-29T00:42:11.844810993Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:11.846494 env[1220]: time="2025-10-29T00:42:11.846466951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:11.848134 env[1220]: time="2025-10-29T00:42:11.848109370Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:11.849722 env[1220]: time="2025-10-29T00:42:11.849692765Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Oct 29 00:42:11.850265 env[1220]: time="2025-10-29T00:42:11.850243023Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 29 00:42:12.625693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 29 00:42:12.625907 systemd[1]: Stopped kubelet.service. Oct 29 00:42:12.627321 systemd[1]: Starting kubelet.service... Oct 29 00:42:12.720314 systemd[1]: Started kubelet.service. Oct 29 00:42:12.756331 kubelet[1467]: E1029 00:42:12.756293 1467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:42:12.758822 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:42:12.758939 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:42:13.075388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651575781.mount: Deactivated successfully. 
Oct 29 00:42:13.414497 env[1220]: time="2025-10-29T00:42:13.414456035Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:13.416203 env[1220]: time="2025-10-29T00:42:13.416140806Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:13.419011 env[1220]: time="2025-10-29T00:42:13.418976404Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.34.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:13.420580 env[1220]: time="2025-10-29T00:42:13.420552323Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:13.420927 env[1220]: time="2025-10-29T00:42:13.420906460Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Oct 29 00:42:13.421371 env[1220]: time="2025-10-29T00:42:13.421346750Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 29 00:42:14.022736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount34731591.mount: Deactivated successfully. 
Oct 29 00:42:15.024387 env[1220]: time="2025-10-29T00:42:15.024331488Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.025874 env[1220]: time="2025-10-29T00:42:15.025843001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.027992 env[1220]: time="2025-10-29T00:42:15.027954340Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.12.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.030420 env[1220]: time="2025-10-29T00:42:15.030393045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.031164 env[1220]: time="2025-10-29T00:42:15.031136159Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Oct 29 00:42:15.031852 env[1220]: time="2025-10-29T00:42:15.031813791Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 29 00:42:15.476527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount762959689.mount: Deactivated successfully. 
Oct 29 00:42:15.481495 env[1220]: time="2025-10-29T00:42:15.481450949Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.483449 env[1220]: time="2025-10-29T00:42:15.483410692Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.484653 env[1220]: time="2025-10-29T00:42:15.484620045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.485866 env[1220]: time="2025-10-29T00:42:15.485834249Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:15.486426 env[1220]: time="2025-10-29T00:42:15.486387638Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Oct 29 00:42:15.486896 env[1220]: time="2025-10-29T00:42:15.486870855Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 29 00:42:19.274732 env[1220]: time="2025-10-29T00:42:19.274684860Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:19.280287 env[1220]: time="2025-10-29T00:42:19.280247328Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:19.282970 env[1220]: 
time="2025-10-29T00:42:19.282942888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.6.4-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:19.285326 env[1220]: time="2025-10-29T00:42:19.285302882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:19.285689 env[1220]: time="2025-10-29T00:42:19.285661485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Oct 29 00:42:22.875673 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 29 00:42:22.875852 systemd[1]: Stopped kubelet.service. Oct 29 00:42:22.877378 systemd[1]: Starting kubelet.service... Oct 29 00:42:22.976163 systemd[1]: Started kubelet.service. Oct 29 00:42:23.010178 kubelet[1500]: E1029 00:42:23.010127 1500 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 29 00:42:23.012334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 29 00:42:23.012459 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 29 00:42:24.712775 systemd[1]: Stopped kubelet.service. Oct 29 00:42:24.715333 systemd[1]: Starting kubelet.service... Oct 29 00:42:25.024988 systemd[1]: Reloading. 
Oct 29 00:42:25.096662 /usr/lib/systemd/system-generators/torcx-generator[1534]: time="2025-10-29T00:42:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Oct 29 00:42:25.097026 /usr/lib/systemd/system-generators/torcx-generator[1534]: time="2025-10-29T00:42:25Z" level=info msg="torcx already run" Oct 29 00:42:25.187142 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 29 00:42:25.187163 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 29 00:42:25.205492 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 29 00:42:25.279000 systemd[1]: Started kubelet.service. Oct 29 00:42:25.280310 systemd[1]: Stopping kubelet.service... Oct 29 00:42:25.280566 systemd[1]: kubelet.service: Deactivated successfully. Oct 29 00:42:25.280723 systemd[1]: Stopped kubelet.service. Oct 29 00:42:25.282163 systemd[1]: Starting kubelet.service... Oct 29 00:42:25.386115 systemd[1]: Started kubelet.service. Oct 29 00:42:25.418548 kubelet[1578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 29 00:42:25.418548 kubelet[1578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 29 00:42:25.419775 kubelet[1578]: I1029 00:42:25.419729 1578 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 29 00:42:26.309668 kubelet[1578]: I1029 00:42:26.309318 1578 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 29 00:42:26.309668 kubelet[1578]: I1029 00:42:26.309657 1578 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 29 00:42:26.311774 kubelet[1578]: I1029 00:42:26.311752 1578 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 29 00:42:26.311886 kubelet[1578]: I1029 00:42:26.311874 1578 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 29 00:42:26.312538 kubelet[1578]: I1029 00:42:26.312505 1578 server.go:956] "Client rotation is on, will bootstrap in background" Oct 29 00:42:26.412159 kubelet[1578]: E1029 00:42:26.412121 1578 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 29 00:42:26.413753 kubelet[1578]: I1029 00:42:26.413723 1578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 29 00:42:26.416363 kubelet[1578]: E1029 00:42:26.416322 1578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 29 00:42:26.416435 kubelet[1578]: I1029 00:42:26.416399 1578 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Oct 29 00:42:26.418878 kubelet[1578]: I1029 00:42:26.418855 1578 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 29 00:42:26.419088 kubelet[1578]: I1029 00:42:26.419046 1578 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 29 00:42:26.419222 kubelet[1578]: I1029 00:42:26.419064 1578 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 29 
00:42:26.419222 kubelet[1578]: I1029 00:42:26.419220 1578 topology_manager.go:138] "Creating topology manager with none policy" Oct 29 00:42:26.419339 kubelet[1578]: I1029 00:42:26.419229 1578 container_manager_linux.go:306] "Creating device plugin manager" Oct 29 00:42:26.419339 kubelet[1578]: I1029 00:42:26.419313 1578 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 29 00:42:26.421292 kubelet[1578]: I1029 00:42:26.421265 1578 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:42:26.422462 kubelet[1578]: I1029 00:42:26.422437 1578 kubelet.go:475] "Attempting to sync node with API server" Oct 29 00:42:26.422514 kubelet[1578]: I1029 00:42:26.422466 1578 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 29 00:42:26.422514 kubelet[1578]: I1029 00:42:26.422488 1578 kubelet.go:387] "Adding apiserver pod source" Oct 29 00:42:26.423039 kubelet[1578]: E1029 00:42:26.422994 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 29 00:42:26.423605 kubelet[1578]: I1029 00:42:26.423587 1578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 29 00:42:26.424002 kubelet[1578]: E1029 00:42:26.423976 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 29 00:42:26.424634 kubelet[1578]: I1029 00:42:26.424619 1578 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" 
version="1.6.16" apiVersion="v1" Oct 29 00:42:26.427370 kubelet[1578]: I1029 00:42:26.427337 1578 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 29 00:42:26.427449 kubelet[1578]: I1029 00:42:26.427376 1578 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 29 00:42:26.427449 kubelet[1578]: W1029 00:42:26.427418 1578 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 29 00:42:26.432880 kubelet[1578]: I1029 00:42:26.432858 1578 server.go:1262] "Started kubelet" Oct 29 00:42:26.433306 kubelet[1578]: I1029 00:42:26.433253 1578 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 29 00:42:26.433354 kubelet[1578]: I1029 00:42:26.433310 1578 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 29 00:42:26.433354 kubelet[1578]: I1029 00:42:26.433313 1578 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 29 00:42:26.433686 kubelet[1578]: I1029 00:42:26.433662 1578 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 29 00:42:26.434156 kubelet[1578]: I1029 00:42:26.434099 1578 server.go:310] "Adding debug handlers to kubelet server" Oct 29 00:42:26.436404 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Oct 29 00:42:26.437152 kubelet[1578]: I1029 00:42:26.437116 1578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 29 00:42:26.437383 kubelet[1578]: I1029 00:42:26.437362 1578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 29 00:42:26.437723 kubelet[1578]: E1029 00:42:26.435906 1578 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1872cf7e7d735724 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-29 00:42:26.432825124 +0000 UTC m=+1.042880992,LastTimestamp:2025-10-29 00:42:26.432825124 +0000 UTC m=+1.042880992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 29 00:42:26.438795 kubelet[1578]: E1029 00:42:26.438772 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 29 00:42:26.438905 kubelet[1578]: I1029 00:42:26.438893 1578 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 29 00:42:26.439096 kubelet[1578]: I1029 00:42:26.439079 1578 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 29 00:42:26.439234 kubelet[1578]: I1029 00:42:26.439223 1578 reconciler.go:29] "Reconciler: start to sync state" Oct 29 00:42:26.439463 kubelet[1578]: E1029 00:42:26.439437 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 29 00:42:26.439743 kubelet[1578]: E1029 00:42:26.439696 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms" Oct 29 00:42:26.439840 kubelet[1578]: E1029 00:42:26.439818 1578 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 29 00:42:26.440043 kubelet[1578]: I1029 00:42:26.440022 1578 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 29 00:42:26.441081 kubelet[1578]: I1029 00:42:26.441060 1578 factory.go:223] Registration of the containerd container factory successfully Oct 29 00:42:26.441081 kubelet[1578]: I1029 00:42:26.441080 1578 factory.go:223] Registration of the systemd container factory successfully Oct 29 00:42:26.452034 kubelet[1578]: I1029 00:42:26.451983 1578 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 29 00:42:26.452034 kubelet[1578]: I1029 00:42:26.452000 1578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 29 00:42:26.452034 kubelet[1578]: I1029 00:42:26.452021 1578 state_mem.go:36] "Initialized new in-memory state store" Oct 29 00:42:26.454170 kubelet[1578]: I1029 00:42:26.454133 1578 policy_none.go:49] "None policy: Start" Oct 29 00:42:26.454170 kubelet[1578]: I1029 00:42:26.454156 1578 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 29 00:42:26.454170 kubelet[1578]: I1029 00:42:26.454168 1578 state_mem.go:36] 
"Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 29 00:42:26.455656 kubelet[1578]: I1029 00:42:26.455631 1578 policy_none.go:47] "Start" Oct 29 00:42:26.459207 systemd[1]: Created slice kubepods.slice. Oct 29 00:42:26.459624 kubelet[1578]: I1029 00:42:26.459574 1578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 29 00:42:26.460560 kubelet[1578]: I1029 00:42:26.460533 1578 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Oct 29 00:42:26.460560 kubelet[1578]: I1029 00:42:26.460556 1578 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 29 00:42:26.460660 kubelet[1578]: I1029 00:42:26.460593 1578 kubelet.go:2427] "Starting kubelet main sync loop" Oct 29 00:42:26.460660 kubelet[1578]: E1029 00:42:26.460635 1578 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 29 00:42:26.461532 kubelet[1578]: E1029 00:42:26.461499 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 29 00:42:26.463529 systemd[1]: Created slice kubepods-burstable.slice. Oct 29 00:42:26.466073 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 29 00:42:26.472880 kubelet[1578]: E1029 00:42:26.472856 1578 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:42:26.473104 kubelet[1578]: I1029 00:42:26.473088 1578 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:42:26.473239 kubelet[1578]: I1029 00:42:26.473183 1578 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:42:26.473574 kubelet[1578]: I1029 00:42:26.473556 1578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:42:26.474536 kubelet[1578]: E1029 00:42:26.474506 1578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:42:26.474622 kubelet[1578]: E1029 00:42:26.474551 1578 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 29 00:42:26.568110 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Oct 29 00:42:26.574734 kubelet[1578]: I1029 00:42:26.574706 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:42:26.575078 kubelet[1578]: E1029 00:42:26.575056 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost" Oct 29 00:42:26.576856 kubelet[1578]: E1029 00:42:26.576838 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 29 00:42:26.580057 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
Oct 29 00:42:26.581467 kubelet[1578]: E1029 00:42:26.581447 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:26.583445 systemd[1]: Created slice kubepods-burstable-podb3b7be264d011303e42592e0e86373be.slice.
Oct 29 00:42:26.584659 kubelet[1578]: E1029 00:42:26.584639 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:26.640529 kubelet[1578]: E1029 00:42:26.640495 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms"
Oct 29 00:42:26.740916 kubelet[1578]: I1029 00:42:26.740877 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:26.740916 kubelet[1578]: I1029 00:42:26.740915 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:26.741048 kubelet[1578]: I1029 00:42:26.740933 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:26.741048 kubelet[1578]: I1029 00:42:26.740953 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 29 00:42:26.741048 kubelet[1578]: I1029 00:42:26.740968 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:42:26.741048 kubelet[1578]: I1029 00:42:26.740985 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:42:26.741048 kubelet[1578]: I1029 00:42:26.741007 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:26.741172 kubelet[1578]: I1029 00:42:26.741022 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:26.741172 kubelet[1578]: I1029 00:42:26.741035 1578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost"
Oct 29 00:42:26.777093 kubelet[1578]: I1029 00:42:26.777049 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:42:26.777422 kubelet[1578]: E1029 00:42:26.777388 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Oct 29 00:42:26.879832 kubelet[1578]: E1029 00:42:26.879801 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:26.881247 env[1220]: time="2025-10-29T00:42:26.881035311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:26.883948 kubelet[1578]: E1029 00:42:26.883919 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:26.884320 env[1220]: time="2025-10-29T00:42:26.884283898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:26.887528 kubelet[1578]: E1029 00:42:26.887500 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:26.887845 env[1220]: time="2025-10-29T00:42:26.887812613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b3b7be264d011303e42592e0e86373be,Namespace:kube-system,Attempt:0,}"
Oct 29 00:42:27.041438 kubelet[1578]: E1029 00:42:27.041360 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms"
Oct 29 00:42:27.179437 kubelet[1578]: I1029 00:42:27.178954 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:42:27.179437 kubelet[1578]: E1029 00:42:27.179297 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Oct 29 00:42:27.340531 kubelet[1578]: E1029 00:42:27.340444 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Oct 29 00:42:27.367357 kubelet[1578]: E1029 00:42:27.367060 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Oct 29 00:42:27.797868 kubelet[1578]: E1029 00:42:27.797814 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Oct 29 00:42:27.809250 kubelet[1578]: E1029 00:42:27.809169 1578 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Oct 29 00:42:27.833143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount550572338.mount: Deactivated successfully.
Oct 29 00:42:27.839381 env[1220]: time="2025-10-29T00:42:27.839337099Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.841980 kubelet[1578]: E1029 00:42:27.841938 1578 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="1.6s"
Oct 29 00:42:27.845890 env[1220]: time="2025-10-29T00:42:27.845835125Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.848269 env[1220]: time="2025-10-29T00:42:27.848242147Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.850060 env[1220]: time="2025-10-29T00:42:27.850025569Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.852358 env[1220]: time="2025-10-29T00:42:27.852328078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.854995 env[1220]: time="2025-10-29T00:42:27.853507797Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.855774 env[1220]: time="2025-10-29T00:42:27.855748901Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.856531 env[1220]: time="2025-10-29T00:42:27.856506496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.857374 env[1220]: time="2025-10-29T00:42:27.857349667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.860936 env[1220]: time="2025-10-29T00:42:27.860908681Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.862261 env[1220]: time="2025-10-29T00:42:27.862232145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.863565 env[1220]: time="2025-10-29T00:42:27.863530760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Oct 29 00:42:27.894957 env[1220]: time="2025-10-29T00:42:27.894885757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 29 00:42:27.894957 env[1220]: time="2025-10-29T00:42:27.894925908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 29 00:42:27.894957 env[1220]: time="2025-10-29T00:42:27.894936615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 29 00:42:27.896000 env[1220]: time="2025-10-29T00:42:27.895598886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 29 00:42:27.896000 env[1220]: time="2025-10-29T00:42:27.895641953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 29 00:42:27.896000 env[1220]: time="2025-10-29T00:42:27.895653260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 29 00:42:27.896000 env[1220]: time="2025-10-29T00:42:27.895822693Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4e1c7f85cf4eac30c06d9c0bfb9e2ce1896d5f102a897566fee4ce3c558b5f2b pid=1640 runtime=io.containerd.runc.v2
Oct 29 00:42:27.896844 env[1220]: time="2025-10-29T00:42:27.896341379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97d1a620372d22d30b659e4ff2c9e6a57fed00c9ec84f4801952a3602fc30b4c pid=1625 runtime=io.containerd.runc.v2
Oct 29 00:42:27.904619 env[1220]: time="2025-10-29T00:42:27.904369577Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 29 00:42:27.904619 env[1220]: time="2025-10-29T00:42:27.904405414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 29 00:42:27.904619 env[1220]: time="2025-10-29T00:42:27.904416001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 29 00:42:27.904619 env[1220]: time="2025-10-29T00:42:27.904530101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9e02d26a77e316804838ac1cb16baf849af88cd79c443b9f52d2edd21cedbf7 pid=1669 runtime=io.containerd.runc.v2
Oct 29 00:42:27.908607 systemd[1]: Started cri-containerd-97d1a620372d22d30b659e4ff2c9e6a57fed00c9ec84f4801952a3602fc30b4c.scope.
Oct 29 00:42:27.919460 systemd[1]: Started cri-containerd-4e1c7f85cf4eac30c06d9c0bfb9e2ce1896d5f102a897566fee4ce3c558b5f2b.scope.
Oct 29 00:42:27.923651 systemd[1]: Started cri-containerd-b9e02d26a77e316804838ac1cb16baf849af88cd79c443b9f52d2edd21cedbf7.scope.
Oct 29 00:42:27.953628 env[1220]: time="2025-10-29T00:42:27.953589562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b3b7be264d011303e42592e0e86373be,Namespace:kube-system,Attempt:0,} returns sandbox id \"97d1a620372d22d30b659e4ff2c9e6a57fed00c9ec84f4801952a3602fc30b4c\""
Oct 29 00:42:27.955985 kubelet[1578]: E1029 00:42:27.955870 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:27.960845 env[1220]: time="2025-10-29T00:42:27.960518582Z" level=info msg="CreateContainer within sandbox \"97d1a620372d22d30b659e4ff2c9e6a57fed00c9ec84f4801952a3602fc30b4c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 29 00:42:27.968492 env[1220]: time="2025-10-29T00:42:27.968160931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"b9e02d26a77e316804838ac1cb16baf849af88cd79c443b9f52d2edd21cedbf7\""
Oct 29 00:42:27.970344 kubelet[1578]: E1029 00:42:27.969350 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:27.971409 env[1220]: time="2025-10-29T00:42:27.971366577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e1c7f85cf4eac30c06d9c0bfb9e2ce1896d5f102a897566fee4ce3c558b5f2b\""
Oct 29 00:42:27.972094 kubelet[1578]: E1029 00:42:27.971974 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:27.972412 env[1220]: time="2025-10-29T00:42:27.972380219Z" level=info msg="CreateContainer within sandbox \"b9e02d26a77e316804838ac1cb16baf849af88cd79c443b9f52d2edd21cedbf7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 29 00:42:27.975073 env[1220]: time="2025-10-29T00:42:27.975039972Z" level=info msg="CreateContainer within sandbox \"4e1c7f85cf4eac30c06d9c0bfb9e2ce1896d5f102a897566fee4ce3c558b5f2b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 29 00:42:27.981177 env[1220]: time="2025-10-29T00:42:27.981138726Z" level=info msg="CreateContainer within sandbox \"97d1a620372d22d30b659e4ff2c9e6a57fed00c9ec84f4801952a3602fc30b4c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"279ffbf5a49ad7951a0bd755a3bdd830309108bfc99ab15229845e6769669b39\""
Oct 29 00:42:27.981282 kubelet[1578]: I1029 00:42:27.981174 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:42:27.981583 kubelet[1578]: E1029 00:42:27.981559 1578 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Oct 29 00:42:27.982058 env[1220]: time="2025-10-29T00:42:27.982028719Z" level=info msg="StartContainer for \"279ffbf5a49ad7951a0bd755a3bdd830309108bfc99ab15229845e6769669b39\""
Oct 29 00:42:27.986985 env[1220]: time="2025-10-29T00:42:27.986940562Z" level=info msg="CreateContainer within sandbox \"b9e02d26a77e316804838ac1cb16baf849af88cd79c443b9f52d2edd21cedbf7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c21b678436028f57d9725959975338a1b52262922304e1371c24bbf30c83991\""
Oct 29 00:42:27.987469 env[1220]: time="2025-10-29T00:42:27.987387656Z" level=info msg="StartContainer for \"0c21b678436028f57d9725959975338a1b52262922304e1371c24bbf30c83991\""
Oct 29 00:42:27.992296 env[1220]: time="2025-10-29T00:42:27.992255872Z" level=info msg="CreateContainer within sandbox \"4e1c7f85cf4eac30c06d9c0bfb9e2ce1896d5f102a897566fee4ce3c558b5f2b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"31f8167906c8be5ee6ec70fb0662c3fb395cf9945e0716d8b562cef2c891714a\""
Oct 29 00:42:27.993009 env[1220]: time="2025-10-29T00:42:27.992968162Z" level=info msg="StartContainer for \"31f8167906c8be5ee6ec70fb0662c3fb395cf9945e0716d8b562cef2c891714a\""
Oct 29 00:42:28.000378 systemd[1]: Started cri-containerd-279ffbf5a49ad7951a0bd755a3bdd830309108bfc99ab15229845e6769669b39.scope.
Oct 29 00:42:28.006924 systemd[1]: Started cri-containerd-0c21b678436028f57d9725959975338a1b52262922304e1371c24bbf30c83991.scope.
Oct 29 00:42:28.024855 systemd[1]: Started cri-containerd-31f8167906c8be5ee6ec70fb0662c3fb395cf9945e0716d8b562cef2c891714a.scope.
Oct 29 00:42:28.062153 env[1220]: time="2025-10-29T00:42:28.062060743Z" level=info msg="StartContainer for \"0c21b678436028f57d9725959975338a1b52262922304e1371c24bbf30c83991\" returns successfully"
Oct 29 00:42:28.062523 env[1220]: time="2025-10-29T00:42:28.062499554Z" level=info msg="StartContainer for \"279ffbf5a49ad7951a0bd755a3bdd830309108bfc99ab15229845e6769669b39\" returns successfully"
Oct 29 00:42:28.070924 env[1220]: time="2025-10-29T00:42:28.070890671Z" level=info msg="StartContainer for \"31f8167906c8be5ee6ec70fb0662c3fb395cf9945e0716d8b562cef2c891714a\" returns successfully"
Oct 29 00:42:28.473503 kubelet[1578]: E1029 00:42:28.473309 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:28.473503 kubelet[1578]: E1029 00:42:28.473437 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:28.474907 kubelet[1578]: E1029 00:42:28.474879 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:28.475006 kubelet[1578]: E1029 00:42:28.474998 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:28.476366 kubelet[1578]: E1029 00:42:28.476347 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:28.476461 kubelet[1578]: E1029 00:42:28.476443 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:29.478547 kubelet[1578]: E1029 00:42:29.478020 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:29.478547 kubelet[1578]: E1029 00:42:29.478138 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:29.478547 kubelet[1578]: E1029 00:42:29.478398 1578 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Oct 29 00:42:29.478547 kubelet[1578]: E1029 00:42:29.478476 1578 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:29.583898 kubelet[1578]: I1029 00:42:29.583598 1578 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 29 00:42:29.669361 kubelet[1578]: E1029 00:42:29.669322 1578 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Oct 29 00:42:29.763898 kubelet[1578]: I1029 00:42:29.763783 1578 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 29 00:42:29.763898 kubelet[1578]: E1029 00:42:29.763819 1578 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Oct 29 00:42:29.779177 kubelet[1578]: E1029 00:42:29.779144 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:29.879759 kubelet[1578]: E1029 00:42:29.879717 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:29.980102 kubelet[1578]: E1029 00:42:29.980067 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:30.080702 kubelet[1578]: E1029 00:42:30.080584 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:30.181347 kubelet[1578]: E1029 00:42:30.181321 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:30.281890 kubelet[1578]: E1029 00:42:30.281838 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:30.382390 kubelet[1578]: E1029 00:42:30.382367 1578 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Oct 29 00:42:30.425888 kubelet[1578]: I1029 00:42:30.425837 1578 apiserver.go:52] "Watching apiserver"
Oct 29 00:42:30.440114 kubelet[1578]: I1029 00:42:30.440082 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:30.440390 kubelet[1578]: I1029 00:42:30.440111 1578 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 29 00:42:30.444952 kubelet[1578]: E1029 00:42:30.444925 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Oct 29 00:42:30.445063 kubelet[1578]: I1029 00:42:30.445050 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 29 00:42:30.446577 kubelet[1578]: E1029 00:42:30.446553 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Oct 29 00:42:30.446681 kubelet[1578]: I1029 00:42:30.446668 1578 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 29 00:42:30.448280 kubelet[1578]: E1029 00:42:30.448256 1578 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Oct 29 00:42:31.835330 systemd[1]: Reloading.
Oct 29 00:42:31.901871 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2025-10-29T00:42:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Oct 29 00:42:31.901906 /usr/lib/systemd/system-generators/torcx-generator[1887]: time="2025-10-29T00:42:31Z" level=info msg="torcx already run"
Oct 29 00:42:31.960073 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Oct 29 00:42:31.960095 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Oct 29 00:42:31.978458 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 29 00:42:32.070269 systemd[1]: Stopping kubelet.service...
Oct 29 00:42:32.095328 systemd[1]: kubelet.service: Deactivated successfully.
Oct 29 00:42:32.095533 systemd[1]: Stopped kubelet.service.
Oct 29 00:42:32.095581 systemd[1]: kubelet.service: Consumed 1.296s CPU time.
Oct 29 00:42:32.097184 systemd[1]: Starting kubelet.service...
Oct 29 00:42:32.192578 systemd[1]: Started kubelet.service.
Oct 29 00:42:32.228420 kubelet[1929]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 29 00:42:32.228420 kubelet[1929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 29 00:42:32.228790 kubelet[1929]: I1029 00:42:32.228481 1929 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 29 00:42:32.239019 kubelet[1929]: I1029 00:42:32.238240 1929 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Oct 29 00:42:32.239019 kubelet[1929]: I1029 00:42:32.238300 1929 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 29 00:42:32.239019 kubelet[1929]: I1029 00:42:32.238333 1929 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Oct 29 00:42:32.239019 kubelet[1929]: I1029 00:42:32.238339 1929 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 29 00:42:32.239019 kubelet[1929]: I1029 00:42:32.238565 1929 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 29 00:42:32.242054 kubelet[1929]: I1029 00:42:32.242023 1929 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 29 00:42:32.244913 kubelet[1929]: I1029 00:42:32.244884 1929 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 29 00:42:32.247127 kubelet[1929]: E1029 00:42:32.247099 1929 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 29 00:42:32.247312 kubelet[1929]: I1029 00:42:32.247297 1929 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Oct 29 00:42:32.249589 kubelet[1929]: I1029 00:42:32.249573 1929 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Oct 29 00:42:32.249777 kubelet[1929]: I1029 00:42:32.249746 1929 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 29 00:42:32.249916 kubelet[1929]: I1029 00:42:32.249779 1929 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 29 00:42:32.250001 kubelet[1929]: I1029 00:42:32.249917 1929 topology_manager.go:138] "Creating topology manager with none policy"
Oct 29 00:42:32.250001 kubelet[1929]: I1029 00:42:32.249925 1929 container_manager_linux.go:306] "Creating device plugin manager"
Oct 29 00:42:32.250001 kubelet[1929]: I1029 00:42:32.249946 1929 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 29 00:42:32.250875 kubelet[1929]: I1029 00:42:32.250837 1929 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:42:32.251052 kubelet[1929]: I1029 00:42:32.251023 1929 kubelet.go:475] "Attempting to sync node with API server"
Oct 29 00:42:32.251052 kubelet[1929]: I1029 00:42:32.251042 1929 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 29 00:42:32.251138 kubelet[1929]: I1029 00:42:32.251060 1929 kubelet.go:387] "Adding apiserver pod source"
Oct 29 00:42:32.251138 kubelet[1929]: I1029 00:42:32.251072 1929 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 29 00:42:32.253052 kubelet[1929]: I1029 00:42:32.253013 1929 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Oct 29 00:42:32.253650 kubelet[1929]: I1029 00:42:32.253622 1929 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 29 00:42:32.253717 kubelet[1929]: I1029 00:42:32.253655 1929 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 29 00:42:32.267340 kubelet[1929]: I1029 00:42:32.267317 1929 server.go:1262] "Started kubelet"
Oct 29 00:42:32.268039 kubelet[1929]: I1029 00:42:32.267978 1929 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 29 00:42:32.268075 kubelet[1929]: I1029 00:42:32.268056 1929 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 29 00:42:32.268362 kubelet[1929]: I1029 00:42:32.268326 1929 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 29 00:42:32.268446 kubelet[1929]: I1029 00:42:32.268423 1929 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 29 00:42:32.268653 kubelet[1929]: I1029 00:42:32.268623 1929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 29 00:42:32.269596 kubelet[1929]: I1029 00:42:32.269573 1929 server.go:310] "Adding debug handlers to kubelet server"
Oct 29 00:42:32.272461 kubelet[1929]: I1029 00:42:32.272435 1929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 29 00:42:32.272615 kubelet[1929]: I1029 00:42:32.272582 1929 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 29 00:42:32.272699 kubelet[1929]: I1029 00:42:32.272682 1929 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 29 00:42:32.272839 kubelet[1929]: I1029 00:42:32.272822 1929 reconciler.go:29] "Reconciler: start to sync state"
Oct 29 00:42:32.274636 kubelet[1929]: I1029 00:42:32.274565 1929 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 29 00:42:32.279172 kubelet[1929]: I1029 00:42:32.279144 1929 factory.go:223] Registration of the containerd container factory successfully
Oct 29 00:42:32.279271 kubelet[1929]: I1029 00:42:32.279185 1929 factory.go:223] Registration of the systemd container factory successfully
Oct 29 00:42:32.279855 kubelet[1929]: E1029 00:42:32.279831 1929 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 29 00:42:32.286582 kubelet[1929]: I1029 00:42:32.286534 1929 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 29 00:42:32.287640 kubelet[1929]: I1029 00:42:32.287602 1929 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 29 00:42:32.287640 kubelet[1929]: I1029 00:42:32.287623 1929 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 29 00:42:32.287753 kubelet[1929]: I1029 00:42:32.287649 1929 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 29 00:42:32.287753 kubelet[1929]: E1029 00:42:32.287688 1929 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 29 00:42:32.309792 kubelet[1929]: I1029 00:42:32.309747 1929 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 29 00:42:32.309792 kubelet[1929]: I1029 00:42:32.309772 1929 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 29 00:42:32.309792 kubelet[1929]: I1029 00:42:32.309794 1929 state_mem.go:36] "Initialized new in-memory state store"
Oct 29 00:42:32.309950 kubelet[1929]: I1029 00:42:32.309930 1929 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 29 00:42:32.309990 kubelet[1929]: I1029 00:42:32.309941 1929 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 29 00:42:32.309990 kubelet[1929]: I1029 00:42:32.309983 1929 policy_none.go:49] "None policy: Start"
Oct 29 00:42:32.310040 kubelet[1929]: I1029 00:42:32.309994 1929 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 29 00:42:32.310040 kubelet[1929]: I1029 00:42:32.310003 1929 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 29 00:42:32.310167 kubelet[1929]: I1029 00:42:32.310136 1929 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 29 00:42:32.310167 kubelet[1929]: I1029 00:42:32.310153 1929 policy_none.go:47] "Start"
Oct 29 00:42:32.313961 kubelet[1929]: E1029 00:42:32.313937 1929 manager.go:513] "Failed to read data from
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 29 00:42:32.314704 kubelet[1929]: I1029 00:42:32.314577 1929 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 29 00:42:32.314836 kubelet[1929]: I1029 00:42:32.314800 1929 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 29 00:42:32.315449 kubelet[1929]: I1029 00:42:32.315079 1929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 29 00:42:32.316100 kubelet[1929]: E1029 00:42:32.316022 1929 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 29 00:42:32.388722 kubelet[1929]: I1029 00:42:32.388693 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:32.388722 kubelet[1929]: I1029 00:42:32.388719 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.389457 kubelet[1929]: I1029 00:42:32.389285 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:42:32.423910 kubelet[1929]: I1029 00:42:32.423888 1929 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 29 00:42:32.431956 kubelet[1929]: I1029 00:42:32.431926 1929 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 29 00:42:32.432228 kubelet[1929]: I1029 00:42:32.432214 1929 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 29 00:42:32.574336 kubelet[1929]: I1029 00:42:32.574301 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.574520 kubelet[1929]: I1029 00:42:32.574503 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.574631 kubelet[1929]: I1029 00:42:32.574617 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.574716 kubelet[1929]: I1029 00:42:32.574700 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.574787 kubelet[1929]: I1029 00:42:32.574773 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:32.574853 kubelet[1929]: I1029 00:42:32.574841 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: 
\"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 29 00:42:32.574927 kubelet[1929]: I1029 00:42:32.574915 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:32.574996 kubelet[1929]: I1029 00:42:32.574984 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:32.575209 kubelet[1929]: I1029 00:42:32.575150 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3b7be264d011303e42592e0e86373be-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b3b7be264d011303e42592e0e86373be\") " pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:32.694605 kubelet[1929]: E1029 00:42:32.694477 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:32.697201 kubelet[1929]: E1029 00:42:32.697162 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:32.697282 kubelet[1929]: E1029 00:42:32.697223 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:32.836806 
sudo[1967]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 29 00:42:32.837034 sudo[1967]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Oct 29 00:42:33.252460 kubelet[1929]: I1029 00:42:33.252389 1929 apiserver.go:52] "Watching apiserver" Oct 29 00:42:33.272812 kubelet[1929]: I1029 00:42:33.272765 1929 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 29 00:42:33.300616 kubelet[1929]: I1029 00:42:33.300590 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:33.300898 kubelet[1929]: I1029 00:42:33.300880 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:33.301067 kubelet[1929]: I1029 00:42:33.301036 1929 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 29 00:42:33.310127 kubelet[1929]: E1029 00:42:33.310093 1929 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 29 00:42:33.310314 kubelet[1929]: E1029 00:42:33.310292 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:33.310642 kubelet[1929]: E1029 00:42:33.310607 1929 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 29 00:42:33.310754 kubelet[1929]: E1029 00:42:33.310734 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:33.311450 kubelet[1929]: E1029 00:42:33.311421 1929 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 29 00:42:33.311561 kubelet[1929]: E1029 00:42:33.311541 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:33.329931 kubelet[1929]: I1029 00:42:33.329862 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.329835481 podStartE2EDuration="1.329835481s" podCreationTimestamp="2025-10-29 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:33.32961772 +0000 UTC m=+1.133664703" watchObservedRunningTime="2025-10-29 00:42:33.329835481 +0000 UTC m=+1.133882464" Oct 29 00:42:33.343833 kubelet[1929]: I1029 00:42:33.343777 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.343762814 podStartE2EDuration="1.343762814s" podCreationTimestamp="2025-10-29 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:33.33674086 +0000 UTC m=+1.140787843" watchObservedRunningTime="2025-10-29 00:42:33.343762814 +0000 UTC m=+1.147809757" Oct 29 00:42:33.346333 sudo[1967]: pam_unix(sudo:session): session closed for user root Oct 29 00:42:33.353242 kubelet[1929]: I1029 00:42:33.353172 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.35315683 podStartE2EDuration="1.35315683s" podCreationTimestamp="2025-10-29 00:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 
00:42:33.344449518 +0000 UTC m=+1.148496501" watchObservedRunningTime="2025-10-29 00:42:33.35315683 +0000 UTC m=+1.157203813" Oct 29 00:42:34.301935 kubelet[1929]: E1029 00:42:34.301897 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:34.302259 kubelet[1929]: E1029 00:42:34.301968 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:34.302460 kubelet[1929]: E1029 00:42:34.302438 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:35.302999 kubelet[1929]: E1029 00:42:35.302965 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:35.452046 sudo[1319]: pam_unix(sudo:session): session closed for user root Oct 29 00:42:35.453736 sshd[1315]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:35.456445 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit. Oct 29 00:42:35.457235 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:41464.service: Deactivated successfully. Oct 29 00:42:35.457985 systemd[1]: session-5.scope: Deactivated successfully. Oct 29 00:42:35.458133 systemd[1]: session-5.scope: Consumed 7.699s CPU time. Oct 29 00:42:35.459110 systemd-logind[1208]: Removed session 5. 
Oct 29 00:42:38.209871 kubelet[1929]: I1029 00:42:38.209831 1929 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 29 00:42:38.210579 env[1220]: time="2025-10-29T00:42:38.210491431Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 29 00:42:38.210806 kubelet[1929]: I1029 00:42:38.210662 1929 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 29 00:42:39.174884 systemd[1]: Created slice kubepods-besteffort-pod5c12cde2_9292_471c_89d1_01e56a680779.slice. Oct 29 00:42:39.185379 systemd[1]: Created slice kubepods-burstable-pod0024f661_3b09_4def_8936_cae43a5f9a80.slice. Oct 29 00:42:39.227931 kubelet[1929]: I1029 00:42:39.227883 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-etc-cni-netd\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.227931 kubelet[1929]: I1029 00:42:39.227924 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-config-path\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.227944 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c12cde2-9292-471c-89d1-01e56a680779-kube-proxy\") pod \"kube-proxy-xztlq\" (UID: \"5c12cde2-9292-471c-89d1-01e56a680779\") " pod="kube-system/kube-proxy-xztlq" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.227962 1929 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c12cde2-9292-471c-89d1-01e56a680779-xtables-lock\") pod \"kube-proxy-xztlq\" (UID: \"5c12cde2-9292-471c-89d1-01e56a680779\") " pod="kube-system/kube-proxy-xztlq" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.227976 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-hostproc\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.227990 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-cgroup\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.228004 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cni-path\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228288 kubelet[1929]: I1029 00:42:39.228017 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-net\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228031 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-hubble-tls\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228047 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c12cde2-9292-471c-89d1-01e56a680779-lib-modules\") pod \"kube-proxy-xztlq\" (UID: \"5c12cde2-9292-471c-89d1-01e56a680779\") " pod="kube-system/kube-proxy-xztlq" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228063 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv624\" (UniqueName: \"kubernetes.io/projected/5c12cde2-9292-471c-89d1-01e56a680779-kube-api-access-cv624\") pod \"kube-proxy-xztlq\" (UID: \"5c12cde2-9292-471c-89d1-01e56a680779\") " pod="kube-system/kube-proxy-xztlq" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228079 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-bpf-maps\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228092 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-lib-modules\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228425 kubelet[1929]: I1029 00:42:39.228105 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-xtables-lock\") pod \"cilium-2zscn\" (UID: 
\"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228553 kubelet[1929]: I1029 00:42:39.228120 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0024f661-3b09-4def-8936-cae43a5f9a80-clustermesh-secrets\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228553 kubelet[1929]: I1029 00:42:39.228141 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-kernel\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228553 kubelet[1929]: I1029 00:42:39.228154 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzr7d\" (UniqueName: \"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-kube-api-access-hzr7d\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.228553 kubelet[1929]: I1029 00:42:39.228178 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-run\") pod \"cilium-2zscn\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") " pod="kube-system/cilium-2zscn" Oct 29 00:42:39.323787 systemd[1]: Created slice kubepods-besteffort-pode6cbfed2_8e15_4bb1_98c7_4e20cbaa4f42.slice. Oct 29 00:42:39.333093 kubelet[1929]: I1029 00:42:39.333039 1929 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 29 00:42:39.429793 kubelet[1929]: I1029 00:42:39.429687 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-l7nlz\" (UID: \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\") " pod="kube-system/cilium-operator-6f9c7c5859-l7nlz" Oct 29 00:42:39.429793 kubelet[1929]: I1029 00:42:39.429727 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9tk\" (UniqueName: \"kubernetes.io/projected/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-kube-api-access-zp9tk\") pod \"cilium-operator-6f9c7c5859-l7nlz\" (UID: \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\") " pod="kube-system/cilium-operator-6f9c7c5859-l7nlz" Oct 29 00:42:39.485368 kubelet[1929]: E1029 00:42:39.485325 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:39.486103 env[1220]: time="2025-10-29T00:42:39.486067082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xztlq,Uid:5c12cde2-9292-471c-89d1-01e56a680779,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:39.489238 kubelet[1929]: E1029 00:42:39.489210 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:39.489745 env[1220]: time="2025-10-29T00:42:39.489710029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zscn,Uid:0024f661-3b09-4def-8936-cae43a5f9a80,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:39.503384 env[1220]: time="2025-10-29T00:42:39.503319027Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:42:39.503384 env[1220]: time="2025-10-29T00:42:39.503359910Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:42:39.503384 env[1220]: time="2025-10-29T00:42:39.503371431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:42:39.504474 env[1220]: time="2025-10-29T00:42:39.503482039Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3fdf54492c0086681e8c31d920e1dc55760964bc6c0b1eb56ba4cdc67c011d3 pid=2030 runtime=io.containerd.runc.v2 Oct 29 00:42:39.505063 env[1220]: time="2025-10-29T00:42:39.505013392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:42:39.505128 env[1220]: time="2025-10-29T00:42:39.505072756Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:42:39.505128 env[1220]: time="2025-10-29T00:42:39.505100478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:42:39.505349 env[1220]: time="2025-10-29T00:42:39.505312174Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad pid=2037 runtime=io.containerd.runc.v2 Oct 29 00:42:39.514460 systemd[1]: Started cri-containerd-c3fdf54492c0086681e8c31d920e1dc55760964bc6c0b1eb56ba4cdc67c011d3.scope. Oct 29 00:42:39.521012 systemd[1]: Started cri-containerd-3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad.scope. 
Oct 29 00:42:39.553046 env[1220]: time="2025-10-29T00:42:39.553005393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xztlq,Uid:5c12cde2-9292-471c-89d1-01e56a680779,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3fdf54492c0086681e8c31d920e1dc55760964bc6c0b1eb56ba4cdc67c011d3\"" Oct 29 00:42:39.555309 kubelet[1929]: E1029 00:42:39.553904 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:39.556988 env[1220]: time="2025-10-29T00:42:39.556879157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2zscn,Uid:0024f661-3b09-4def-8936-cae43a5f9a80,Namespace:kube-system,Attempt:0,} returns sandbox id \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\"" Oct 29 00:42:39.557559 kubelet[1929]: E1029 00:42:39.557356 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:39.559214 env[1220]: time="2025-10-29T00:42:39.558377107Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 29 00:42:39.561058 env[1220]: time="2025-10-29T00:42:39.561026422Z" level=info msg="CreateContainer within sandbox \"c3fdf54492c0086681e8c31d920e1dc55760964bc6c0b1eb56ba4cdc67c011d3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 29 00:42:39.574260 env[1220]: time="2025-10-29T00:42:39.574228750Z" level=info msg="CreateContainer within sandbox \"c3fdf54492c0086681e8c31d920e1dc55760964bc6c0b1eb56ba4cdc67c011d3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9d15259955d51a48405c969e6ed9639ae60cb8ff38701482291f3f9677722172\"" Oct 29 00:42:39.574801 env[1220]: time="2025-10-29T00:42:39.574776511Z" level=info msg="StartContainer for 
\"9d15259955d51a48405c969e6ed9639ae60cb8ff38701482291f3f9677722172\"" Oct 29 00:42:39.588290 systemd[1]: Started cri-containerd-9d15259955d51a48405c969e6ed9639ae60cb8ff38701482291f3f9677722172.scope. Oct 29 00:42:39.624051 env[1220]: time="2025-10-29T00:42:39.624011643Z" level=info msg="StartContainer for \"9d15259955d51a48405c969e6ed9639ae60cb8ff38701482291f3f9677722172\" returns successfully" Oct 29 00:42:39.626937 kubelet[1929]: E1029 00:42:39.626911 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:39.627408 env[1220]: time="2025-10-29T00:42:39.627380490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-l7nlz,Uid:e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:39.642901 env[1220]: time="2025-10-29T00:42:39.642808863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:42:39.642901 env[1220]: time="2025-10-29T00:42:39.642864387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:42:39.642901 env[1220]: time="2025-10-29T00:42:39.642876547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:42:39.643293 env[1220]: time="2025-10-29T00:42:39.643249095Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab pid=2141 runtime=io.containerd.runc.v2 Oct 29 00:42:39.655402 systemd[1]: Started cri-containerd-d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab.scope. 
Oct 29 00:42:39.688621 env[1220]: time="2025-10-29T00:42:39.688527137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-l7nlz,Uid:e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\"" Oct 29 00:42:39.691118 kubelet[1929]: E1029 00:42:39.691014 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:40.310708 kubelet[1929]: E1029 00:42:40.310620 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:40.320862 kubelet[1929]: I1029 00:42:40.320811 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xztlq" podStartSLOduration=1.320796396 podStartE2EDuration="1.320796396s" podCreationTimestamp="2025-10-29 00:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:40.319997981 +0000 UTC m=+8.124044964" watchObservedRunningTime="2025-10-29 00:42:40.320796396 +0000 UTC m=+8.124843379" Oct 29 00:42:41.614068 kubelet[1929]: E1029 00:42:41.614010 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:41.749855 kubelet[1929]: E1029 00:42:41.749785 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:42.318097 kubelet[1929]: E1029 00:42:42.318067 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:42.318371 kubelet[1929]: E1029 00:42:42.318209 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:42.825685 kubelet[1929]: E1029 00:42:42.825640 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:43.320076 kubelet[1929]: E1029 00:42:43.320019 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:43.320610 kubelet[1929]: E1029 00:42:43.320571 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:44.605919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859423185.mount: Deactivated successfully. 
Oct 29 00:42:46.800287 env[1220]: time="2025-10-29T00:42:46.800209054Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:46.802223 env[1220]: time="2025-10-29T00:42:46.801801055Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:46.803358 env[1220]: time="2025-10-29T00:42:46.803231887Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:46.803857 env[1220]: time="2025-10-29T00:42:46.803817797Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 29 00:42:46.807483 env[1220]: time="2025-10-29T00:42:46.807453701Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 29 00:42:46.809549 env[1220]: time="2025-10-29T00:42:46.809515045Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 00:42:46.821235 env[1220]: time="2025-10-29T00:42:46.821186596Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\"" Oct 29 
00:42:46.823014 env[1220]: time="2025-10-29T00:42:46.822972527Z" level=info msg="StartContainer for \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\"" Oct 29 00:42:46.842424 systemd[1]: run-containerd-runc-k8s.io-cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8-runc.0hX6hN.mount: Deactivated successfully. Oct 29 00:42:46.843877 systemd[1]: Started cri-containerd-cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8.scope. Oct 29 00:42:46.869781 env[1220]: time="2025-10-29T00:42:46.869736894Z" level=info msg="StartContainer for \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\" returns successfully" Oct 29 00:42:46.881763 systemd[1]: cri-containerd-cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8.scope: Deactivated successfully. Oct 29 00:42:47.011573 update_engine[1213]: I1029 00:42:47.011504 1213 update_attempter.cc:509] Updating boot flags... Oct 29 00:42:47.087962 env[1220]: time="2025-10-29T00:42:47.085570693Z" level=info msg="shim disconnected" id=cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8 Oct 29 00:42:47.087962 env[1220]: time="2025-10-29T00:42:47.085627616Z" level=warning msg="cleaning up after shim disconnected" id=cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8 namespace=k8s.io Oct 29 00:42:47.087962 env[1220]: time="2025-10-29T00:42:47.085638736Z" level=info msg="cleaning up dead shim" Oct 29 00:42:47.103831 env[1220]: time="2025-10-29T00:42:47.103368710Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:42:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2370 runtime=io.containerd.runc.v2\n" Oct 29 00:42:47.336721 kubelet[1929]: E1029 00:42:47.336678 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:47.350311 env[1220]: time="2025-10-29T00:42:47.346390769Z" level=info 
msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 29 00:42:47.364235 env[1220]: time="2025-10-29T00:42:47.364168945Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\"" Oct 29 00:42:47.364914 env[1220]: time="2025-10-29T00:42:47.364842857Z" level=info msg="StartContainer for \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\"" Oct 29 00:42:47.383870 systemd[1]: Started cri-containerd-37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b.scope. Oct 29 00:42:47.428446 env[1220]: time="2025-10-29T00:42:47.426494505Z" level=info msg="StartContainer for \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\" returns successfully" Oct 29 00:42:47.439601 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 29 00:42:47.439821 systemd[1]: Stopped systemd-sysctl.service. Oct 29 00:42:47.440002 systemd[1]: Stopping systemd-sysctl.service... Oct 29 00:42:47.441488 systemd[1]: Starting systemd-sysctl.service... Oct 29 00:42:47.443490 systemd[1]: cri-containerd-37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b.scope: Deactivated successfully. Oct 29 00:42:47.450913 systemd[1]: Finished systemd-sysctl.service. 
Oct 29 00:42:47.465450 env[1220]: time="2025-10-29T00:42:47.465405259Z" level=info msg="shim disconnected" id=37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b Oct 29 00:42:47.465450 env[1220]: time="2025-10-29T00:42:47.465451101Z" level=warning msg="cleaning up after shim disconnected" id=37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b namespace=k8s.io Oct 29 00:42:47.465708 env[1220]: time="2025-10-29T00:42:47.465461181Z" level=info msg="cleaning up dead shim" Oct 29 00:42:47.471795 env[1220]: time="2025-10-29T00:42:47.471753004Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:42:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2437 runtime=io.containerd.runc.v2\n" Oct 29 00:42:47.820775 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8-rootfs.mount: Deactivated successfully. Oct 29 00:42:48.091337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55384714.mount: Deactivated successfully. 
Oct 29 00:42:48.339533 kubelet[1929]: E1029 00:42:48.339485 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:48.347963 env[1220]: time="2025-10-29T00:42:48.347808330Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 29 00:42:48.372819 env[1220]: time="2025-10-29T00:42:48.372733872Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\"" Oct 29 00:42:48.373480 env[1220]: time="2025-10-29T00:42:48.373450264Z" level=info msg="StartContainer for \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\"" Oct 29 00:42:48.388344 systemd[1]: Started cri-containerd-99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1.scope. Oct 29 00:42:48.428944 env[1220]: time="2025-10-29T00:42:48.428906645Z" level=info msg="StartContainer for \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\" returns successfully" Oct 29 00:42:48.429551 systemd[1]: cri-containerd-99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1.scope: Deactivated successfully. 
Oct 29 00:42:48.453280 env[1220]: time="2025-10-29T00:42:48.453228639Z" level=info msg="shim disconnected" id=99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1 Oct 29 00:42:48.453458 env[1220]: time="2025-10-29T00:42:48.453287242Z" level=warning msg="cleaning up after shim disconnected" id=99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1 namespace=k8s.io Oct 29 00:42:48.453458 env[1220]: time="2025-10-29T00:42:48.453298882Z" level=info msg="cleaning up dead shim" Oct 29 00:42:48.459691 env[1220]: time="2025-10-29T00:42:48.459657453Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:42:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2494 runtime=io.containerd.runc.v2\n" Oct 29 00:42:49.016514 env[1220]: time="2025-10-29T00:42:49.016464006Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:49.018447 env[1220]: time="2025-10-29T00:42:49.018418731Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:49.020024 env[1220]: time="2025-10-29T00:42:49.019988440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 29 00:42:49.020540 env[1220]: time="2025-10-29T00:42:49.020510582Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 29 00:42:49.024771 env[1220]: 
time="2025-10-29T00:42:49.024727806Z" level=info msg="CreateContainer within sandbox \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 29 00:42:49.034387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1382118072.mount: Deactivated successfully. Oct 29 00:42:49.038486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2484568924.mount: Deactivated successfully. Oct 29 00:42:49.040781 env[1220]: time="2025-10-29T00:42:49.040715744Z" level=info msg="CreateContainer within sandbox \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\"" Oct 29 00:42:49.042728 env[1220]: time="2025-10-29T00:42:49.041385253Z" level=info msg="StartContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\"" Oct 29 00:42:49.058418 systemd[1]: Started cri-containerd-115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6.scope. 
Oct 29 00:42:49.085032 env[1220]: time="2025-10-29T00:42:49.084990435Z" level=info msg="StartContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" returns successfully" Oct 29 00:42:49.345237 kubelet[1929]: E1029 00:42:49.345129 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:49.349654 kubelet[1929]: E1029 00:42:49.349617 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:49.357975 env[1220]: time="2025-10-29T00:42:49.357933781Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 29 00:42:49.370298 env[1220]: time="2025-10-29T00:42:49.370247918Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\"" Oct 29 00:42:49.370956 env[1220]: time="2025-10-29T00:42:49.370931628Z" level=info msg="StartContainer for \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\"" Oct 29 00:42:49.378042 kubelet[1929]: I1029 00:42:49.377972 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-l7nlz" podStartSLOduration=1.049373123 podStartE2EDuration="10.377956734s" podCreationTimestamp="2025-10-29 00:42:39 +0000 UTC" firstStartedPulling="2025-10-29 00:42:39.692765568 +0000 UTC m=+7.496812551" lastFinishedPulling="2025-10-29 00:42:49.021349179 +0000 UTC m=+16.825396162" observedRunningTime="2025-10-29 00:42:49.355937334 +0000 UTC m=+17.159984317" 
watchObservedRunningTime="2025-10-29 00:42:49.377956734 +0000 UTC m=+17.182003717" Oct 29 00:42:49.392821 systemd[1]: Started cri-containerd-74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801.scope. Oct 29 00:42:49.420516 systemd[1]: cri-containerd-74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801.scope: Deactivated successfully. Oct 29 00:42:49.423780 env[1220]: time="2025-10-29T00:42:49.423517522Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0024f661_3b09_4def_8936_cae43a5f9a80.slice/cri-containerd-74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801.scope/memory.events\": no such file or directory" Oct 29 00:42:49.455675 env[1220]: time="2025-10-29T00:42:49.455618282Z" level=info msg="StartContainer for \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\" returns successfully" Oct 29 00:42:49.492863 env[1220]: time="2025-10-29T00:42:49.492813344Z" level=info msg="shim disconnected" id=74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801 Oct 29 00:42:49.492863 env[1220]: time="2025-10-29T00:42:49.492861386Z" level=warning msg="cleaning up after shim disconnected" id=74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801 namespace=k8s.io Oct 29 00:42:49.492863 env[1220]: time="2025-10-29T00:42:49.492871227Z" level=info msg="cleaning up dead shim" Oct 29 00:42:49.500272 env[1220]: time="2025-10-29T00:42:49.500234588Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:42:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2589 runtime=io.containerd.runc.v2\n" Oct 29 00:42:50.356872 kubelet[1929]: E1029 00:42:50.356838 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:50.357350 kubelet[1929]: E1029 
00:42:50.357028 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:50.364091 env[1220]: time="2025-10-29T00:42:50.364051283Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 29 00:42:50.385745 env[1220]: time="2025-10-29T00:42:50.385691703Z" level=info msg="CreateContainer within sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\"" Oct 29 00:42:50.387470 env[1220]: time="2025-10-29T00:42:50.386269367Z" level=info msg="StartContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\"" Oct 29 00:42:50.405789 systemd[1]: Started cri-containerd-3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d.scope. Oct 29 00:42:50.465980 env[1220]: time="2025-10-29T00:42:50.465923438Z" level=info msg="StartContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" returns successfully" Oct 29 00:42:50.574226 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Oct 29 00:42:50.625156 kubelet[1929]: I1029 00:42:50.625036 1929 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 29 00:42:50.671873 systemd[1]: Created slice kubepods-burstable-pod8f631746_b255_4a29_939d_7f93f3155043.slice. Oct 29 00:42:50.676092 systemd[1]: Created slice kubepods-burstable-pod68e440f1_724a_481c_af90_fdcd2959a7b3.slice. 
Oct 29 00:42:50.723455 kubelet[1929]: I1029 00:42:50.723415 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68e440f1-724a-481c-af90-fdcd2959a7b3-config-volume\") pod \"coredns-66bc5c9577-gfdll\" (UID: \"68e440f1-724a-481c-af90-fdcd2959a7b3\") " pod="kube-system/coredns-66bc5c9577-gfdll" Oct 29 00:42:50.723455 kubelet[1929]: I1029 00:42:50.723461 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f631746-b255-4a29-939d-7f93f3155043-config-volume\") pod \"coredns-66bc5c9577-zmb2r\" (UID: \"8f631746-b255-4a29-939d-7f93f3155043\") " pod="kube-system/coredns-66bc5c9577-zmb2r" Oct 29 00:42:50.723663 kubelet[1929]: I1029 00:42:50.723480 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlnq9\" (UniqueName: \"kubernetes.io/projected/8f631746-b255-4a29-939d-7f93f3155043-kube-api-access-mlnq9\") pod \"coredns-66bc5c9577-zmb2r\" (UID: \"8f631746-b255-4a29-939d-7f93f3155043\") " pod="kube-system/coredns-66bc5c9577-zmb2r" Oct 29 00:42:50.723663 kubelet[1929]: I1029 00:42:50.723500 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpqqb\" (UniqueName: \"kubernetes.io/projected/68e440f1-724a-481c-af90-fdcd2959a7b3-kube-api-access-tpqqb\") pod \"coredns-66bc5c9577-gfdll\" (UID: \"68e440f1-724a-481c-af90-fdcd2959a7b3\") " pod="kube-system/coredns-66bc5c9577-gfdll" Oct 29 00:42:50.902215 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Oct 29 00:42:50.977592 kubelet[1929]: E1029 00:42:50.977474 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:50.978489 env[1220]: time="2025-10-29T00:42:50.978431582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmb2r,Uid:8f631746-b255-4a29-939d-7f93f3155043,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:50.981600 kubelet[1929]: E1029 00:42:50.980964 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:50.982170 env[1220]: time="2025-10-29T00:42:50.982130616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gfdll,Uid:68e440f1-724a-481c-af90-fdcd2959a7b3,Namespace:kube-system,Attempt:0,}" Oct 29 00:42:51.361118 kubelet[1929]: E1029 00:42:51.361030 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:51.375406 kubelet[1929]: I1029 00:42:51.374835 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2zscn" podStartSLOduration=5.125912665 podStartE2EDuration="12.374820541s" podCreationTimestamp="2025-10-29 00:42:39 +0000 UTC" firstStartedPulling="2025-10-29 00:42:39.55786815 +0000 UTC m=+7.361915133" lastFinishedPulling="2025-10-29 00:42:46.806776026 +0000 UTC m=+14.610823009" observedRunningTime="2025-10-29 00:42:51.374338682 +0000 UTC m=+19.178385665" watchObservedRunningTime="2025-10-29 00:42:51.374820541 +0000 UTC m=+19.178867524" Oct 29 00:42:52.362691 kubelet[1929]: E1029 00:42:52.362662 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Oct 29 00:42:52.517445 systemd-networkd[1047]: cilium_host: Link UP Oct 29 00:42:52.519980 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Oct 29 00:42:52.520061 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Oct 29 00:42:52.520357 systemd-networkd[1047]: cilium_net: Link UP Oct 29 00:42:52.521032 systemd-networkd[1047]: cilium_net: Gained carrier Oct 29 00:42:52.521220 systemd-networkd[1047]: cilium_host: Gained carrier Oct 29 00:42:52.601872 systemd-networkd[1047]: cilium_vxlan: Link UP Oct 29 00:42:52.601883 systemd-networkd[1047]: cilium_vxlan: Gained carrier Oct 29 00:42:52.852228 kernel: NET: Registered PF_ALG protocol family Oct 29 00:42:53.074301 systemd-networkd[1047]: cilium_net: Gained IPv6LL Oct 29 00:42:53.139370 systemd-networkd[1047]: cilium_host: Gained IPv6LL Oct 29 00:42:53.364135 kubelet[1929]: E1029 00:42:53.364086 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:53.427969 systemd-networkd[1047]: lxc_health: Link UP Oct 29 00:42:53.438316 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Oct 29 00:42:53.438290 systemd-networkd[1047]: lxc_health: Gained carrier Oct 29 00:42:53.535947 systemd-networkd[1047]: lxc34b2bbfa9cba: Link UP Oct 29 00:42:53.546256 kernel: eth0: renamed from tmp8a59c Oct 29 00:42:53.552004 systemd-networkd[1047]: lxce4c105ff9e0f: Link UP Oct 29 00:42:53.552231 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc34b2bbfa9cba: link becomes ready Oct 29 00:42:53.552185 systemd-networkd[1047]: lxc34b2bbfa9cba: Gained carrier Oct 29 00:42:53.561222 kernel: eth0: renamed from tmpdacc4 Oct 29 00:42:53.568064 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Oct 29 00:42:53.568140 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce4c105ff9e0f: link becomes ready Oct 29 00:42:53.568172 systemd-networkd[1047]: 
lxce4c105ff9e0f: Gained carrier Oct 29 00:42:53.971334 systemd-networkd[1047]: cilium_vxlan: Gained IPv6LL Oct 29 00:42:54.367677 kubelet[1929]: E1029 00:42:54.365708 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:54.674352 systemd-networkd[1047]: lxce4c105ff9e0f: Gained IPv6LL Oct 29 00:42:55.058528 systemd-networkd[1047]: lxc34b2bbfa9cba: Gained IPv6LL Oct 29 00:42:55.378390 systemd-networkd[1047]: lxc_health: Gained IPv6LL Oct 29 00:42:57.034075 env[1220]: time="2025-10-29T00:42:57.034009311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:42:57.034711 env[1220]: time="2025-10-29T00:42:57.034671811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:42:57.034888 env[1220]: time="2025-10-29T00:42:57.034862657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:42:57.035251 env[1220]: time="2025-10-29T00:42:57.035082104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a59cb4d01c5e91597c2627d0531e2fb7912d530b607759d07a006f58947c716 pid=3141 runtime=io.containerd.runc.v2 Oct 29 00:42:57.044925 env[1220]: time="2025-10-29T00:42:57.044849241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:42:57.044925 env[1220]: time="2025-10-29T00:42:57.044900842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:42:57.050264 env[1220]: time="2025-10-29T00:42:57.045067167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:42:57.050264 env[1220]: time="2025-10-29T00:42:57.046634655Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9 pid=3165 runtime=io.containerd.runc.v2 Oct 29 00:42:57.059407 systemd[1]: Started cri-containerd-8a59cb4d01c5e91597c2627d0531e2fb7912d530b607759d07a006f58947c716.scope. Oct 29 00:42:57.060866 systemd[1]: Started cri-containerd-dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9.scope. Oct 29 00:42:57.074055 systemd-resolved[1160]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:57.085942 systemd-resolved[1160]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 29 00:42:57.092445 env[1220]: time="2025-10-29T00:42:57.092407766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zmb2r,Uid:8f631746-b255-4a29-939d-7f93f3155043,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a59cb4d01c5e91597c2627d0531e2fb7912d530b607759d07a006f58947c716\"" Oct 29 00:42:57.093017 kubelet[1929]: E1029 00:42:57.092995 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:57.098075 env[1220]: time="2025-10-29T00:42:57.097747888Z" level=info msg="CreateContainer within sandbox \"8a59cb4d01c5e91597c2627d0531e2fb7912d530b607759d07a006f58947c716\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:57.108212 env[1220]: time="2025-10-29T00:42:57.108145364Z" level=info 
msg="CreateContainer within sandbox \"8a59cb4d01c5e91597c2627d0531e2fb7912d530b607759d07a006f58947c716\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cd1222d98b507f70f8f57805e22536f75971879d4b9f7607c68b186c6c7d4910\"" Oct 29 00:42:57.112387 env[1220]: time="2025-10-29T00:42:57.112353772Z" level=info msg="StartContainer for \"cd1222d98b507f70f8f57805e22536f75971879d4b9f7607c68b186c6c7d4910\"" Oct 29 00:42:57.113482 env[1220]: time="2025-10-29T00:42:57.113455205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gfdll,Uid:68e440f1-724a-481c-af90-fdcd2959a7b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9\"" Oct 29 00:42:57.114094 kubelet[1929]: E1029 00:42:57.114068 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:57.118584 env[1220]: time="2025-10-29T00:42:57.118546200Z" level=info msg="CreateContainer within sandbox \"dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 29 00:42:57.132139 systemd[1]: Started cri-containerd-cd1222d98b507f70f8f57805e22536f75971879d4b9f7607c68b186c6c7d4910.scope. Oct 29 00:42:57.136330 env[1220]: time="2025-10-29T00:42:57.134143474Z" level=info msg="CreateContainer within sandbox \"dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b5d7825732c3d89a11a04ec45c5ccd2d5108ca90e76ef0b36cf34692ceb6da46\"" Oct 29 00:42:57.136330 env[1220]: time="2025-10-29T00:42:57.135970409Z" level=info msg="StartContainer for \"b5d7825732c3d89a11a04ec45c5ccd2d5108ca90e76ef0b36cf34692ceb6da46\"" Oct 29 00:42:57.160522 systemd[1]: Started cri-containerd-b5d7825732c3d89a11a04ec45c5ccd2d5108ca90e76ef0b36cf34692ceb6da46.scope. 
Oct 29 00:42:57.169690 env[1220]: time="2025-10-29T00:42:57.169644673Z" level=info msg="StartContainer for \"cd1222d98b507f70f8f57805e22536f75971879d4b9f7607c68b186c6c7d4910\" returns successfully" Oct 29 00:42:57.195782 env[1220]: time="2025-10-29T00:42:57.195735545Z" level=info msg="StartContainer for \"b5d7825732c3d89a11a04ec45c5ccd2d5108ca90e76ef0b36cf34692ceb6da46\" returns successfully" Oct 29 00:42:57.371113 kubelet[1929]: E1029 00:42:57.371011 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:57.373649 kubelet[1929]: E1029 00:42:57.373598 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:42:57.387485 kubelet[1929]: I1029 00:42:57.387429 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gfdll" podStartSLOduration=18.387413049 podStartE2EDuration="18.387413049s" podCreationTimestamp="2025-10-29 00:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:57.385518112 +0000 UTC m=+25.189565095" watchObservedRunningTime="2025-10-29 00:42:57.387413049 +0000 UTC m=+25.191460032" Oct 29 00:42:57.401230 kubelet[1929]: I1029 00:42:57.400901 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zmb2r" podStartSLOduration=18.400886259 podStartE2EDuration="18.400886259s" podCreationTimestamp="2025-10-29 00:42:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:42:57.400695173 +0000 UTC m=+25.204742156" watchObservedRunningTime="2025-10-29 00:42:57.400886259 +0000 UTC m=+25.204933202" Oct 
29 00:42:57.774664 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:56380.service. Oct 29 00:42:57.810257 sshd[3295]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:42:57.811590 sshd[3295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:42:57.814876 systemd-logind[1208]: New session 6 of user core. Oct 29 00:42:57.815774 systemd[1]: Started session-6.scope. Oct 29 00:42:57.931911 sshd[3295]: pam_unix(sshd:session): session closed for user core Oct 29 00:42:57.934321 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:56380.service: Deactivated successfully. Oct 29 00:42:57.935118 systemd[1]: session-6.scope: Deactivated successfully. Oct 29 00:42:57.935620 systemd-logind[1208]: Session 6 logged out. Waiting for processes to exit. Oct 29 00:42:57.936231 systemd-logind[1208]: Removed session 6. Oct 29 00:42:58.040628 systemd[1]: run-containerd-runc-k8s.io-dacc48a31baea0f44bb733a707be294e04a603034d5b156126f1cf792af013e9-runc.7QW1F0.mount: Deactivated successfully. 
Oct 29 00:42:58.375290 kubelet[1929]: E1029 00:42:58.375229 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:58.375598 kubelet[1929]: E1029 00:42:58.375292 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:59.044581 kubelet[1929]: I1029 00:42:59.044525 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 29 00:42:59.045075 kubelet[1929]: E1029 00:42:59.045054 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:59.377402 kubelet[1929]: E1029 00:42:59.377365 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:59.377679 kubelet[1929]: E1029 00:42:59.377525 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:42:59.377982 kubelet[1929]: E1029 00:42:59.377936 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:02.937293 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:52990.service.
Oct 29 00:43:02.973272 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 52990 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:02.974516 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:02.978266 systemd-logind[1208]: New session 7 of user core.
Oct 29 00:43:02.978756 systemd[1]: Started session-7.scope.
Oct 29 00:43:03.087808 sshd[3317]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:03.090304 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:52990.service: Deactivated successfully.
Oct 29 00:43:03.091114 systemd[1]: session-7.scope: Deactivated successfully.
Oct 29 00:43:03.091605 systemd-logind[1208]: Session 7 logged out. Waiting for processes to exit.
Oct 29 00:43:03.092274 systemd-logind[1208]: Removed session 7.
Oct 29 00:43:08.092297 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:53006.service.
Oct 29 00:43:08.135514 sshd[3333]: Accepted publickey for core from 10.0.0.1 port 53006 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:08.138553 sshd[3333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:08.145948 systemd-logind[1208]: New session 8 of user core.
Oct 29 00:43:08.146933 systemd[1]: Started session-8.scope.
Oct 29 00:43:08.268418 sshd[3333]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:08.271498 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:53006.service: Deactivated successfully.
Oct 29 00:43:08.272521 systemd[1]: session-8.scope: Deactivated successfully.
Oct 29 00:43:08.273106 systemd-logind[1208]: Session 8 logged out. Waiting for processes to exit.
Oct 29 00:43:08.273786 systemd-logind[1208]: Removed session 8.
Oct 29 00:43:13.272491 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:52042.service.
Oct 29 00:43:13.305726 sshd[3352]: Accepted publickey for core from 10.0.0.1 port 52042 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:13.306981 sshd[3352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:13.311176 systemd-logind[1208]: New session 9 of user core.
Oct 29 00:43:13.311607 systemd[1]: Started session-9.scope.
Oct 29 00:43:13.419836 sshd[3352]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:13.423894 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:52048.service.
Oct 29 00:43:13.425690 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:52042.service: Deactivated successfully.
Oct 29 00:43:13.426519 systemd[1]: session-9.scope: Deactivated successfully.
Oct 29 00:43:13.427087 systemd-logind[1208]: Session 9 logged out. Waiting for processes to exit.
Oct 29 00:43:13.427935 systemd-logind[1208]: Removed session 9.
Oct 29 00:43:13.457078 sshd[3366]: Accepted publickey for core from 10.0.0.1 port 52048 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:13.458237 sshd[3366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:13.461674 systemd-logind[1208]: New session 10 of user core.
Oct 29 00:43:13.462869 systemd[1]: Started session-10.scope.
Oct 29 00:43:13.640183 sshd[3366]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:13.643215 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:52048.service: Deactivated successfully.
Oct 29 00:43:13.643889 systemd[1]: session-10.scope: Deactivated successfully.
Oct 29 00:43:13.644491 systemd-logind[1208]: Session 10 logged out. Waiting for processes to exit.
Oct 29 00:43:13.645670 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:52052.service.
Oct 29 00:43:13.646779 systemd-logind[1208]: Removed session 10.
Oct 29 00:43:13.680321 sshd[3378]: Accepted publickey for core from 10.0.0.1 port 52052 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:13.681403 sshd[3378]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:13.685305 systemd-logind[1208]: New session 11 of user core.
Oct 29 00:43:13.685721 systemd[1]: Started session-11.scope.
Oct 29 00:43:13.828045 sshd[3378]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:13.830337 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:52052.service: Deactivated successfully.
Oct 29 00:43:13.831076 systemd[1]: session-11.scope: Deactivated successfully.
Oct 29 00:43:13.831609 systemd-logind[1208]: Session 11 logged out. Waiting for processes to exit.
Oct 29 00:43:13.832408 systemd-logind[1208]: Removed session 11.
Oct 29 00:43:18.833170 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:52062.service.
Oct 29 00:43:18.880785 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 52062 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:18.882060 sshd[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:18.890891 systemd-logind[1208]: New session 12 of user core.
Oct 29 00:43:18.891347 systemd[1]: Started session-12.scope.
Oct 29 00:43:19.026070 sshd[3393]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:19.028670 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:52062.service: Deactivated successfully.
Oct 29 00:43:19.029515 systemd[1]: session-12.scope: Deactivated successfully.
Oct 29 00:43:19.030613 systemd-logind[1208]: Session 12 logged out. Waiting for processes to exit.
Oct 29 00:43:19.031516 systemd-logind[1208]: Removed session 12.
Oct 29 00:43:24.030570 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:52574.service.
Oct 29 00:43:24.063821 sshd[3408]: Accepted publickey for core from 10.0.0.1 port 52574 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:24.065374 sshd[3408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:24.069269 systemd-logind[1208]: New session 13 of user core.
Oct 29 00:43:24.069732 systemd[1]: Started session-13.scope.
Oct 29 00:43:24.178687 sshd[3408]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:24.181817 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:52578.service.
Oct 29 00:43:24.183133 systemd[1]: session-13.scope: Deactivated successfully.
Oct 29 00:43:24.183861 systemd-logind[1208]: Session 13 logged out. Waiting for processes to exit.
Oct 29 00:43:24.183982 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:52574.service: Deactivated successfully.
Oct 29 00:43:24.184946 systemd-logind[1208]: Removed session 13.
Oct 29 00:43:24.215726 sshd[3420]: Accepted publickey for core from 10.0.0.1 port 52578 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:24.216892 sshd[3420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:24.220261 systemd-logind[1208]: New session 14 of user core.
Oct 29 00:43:24.221136 systemd[1]: Started session-14.scope.
Oct 29 00:43:24.566778 sshd[3420]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:24.570553 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:52592.service.
Oct 29 00:43:24.572304 systemd-logind[1208]: Session 14 logged out. Waiting for processes to exit.
Oct 29 00:43:24.572638 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:52578.service: Deactivated successfully.
Oct 29 00:43:24.573405 systemd[1]: session-14.scope: Deactivated successfully.
Oct 29 00:43:24.574155 systemd-logind[1208]: Removed session 14.
Oct 29 00:43:24.606140 sshd[3431]: Accepted publickey for core from 10.0.0.1 port 52592 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:24.607505 sshd[3431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:24.610901 systemd-logind[1208]: New session 15 of user core.
Oct 29 00:43:24.611836 systemd[1]: Started session-15.scope.
Oct 29 00:43:25.198269 sshd[3431]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:25.201669 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:52598.service.
Oct 29 00:43:25.202867 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:52592.service: Deactivated successfully.
Oct 29 00:43:25.203914 systemd[1]: session-15.scope: Deactivated successfully.
Oct 29 00:43:25.212689 systemd-logind[1208]: Session 15 logged out. Waiting for processes to exit.
Oct 29 00:43:25.214688 systemd-logind[1208]: Removed session 15.
Oct 29 00:43:25.242042 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 52598 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:25.243339 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:25.247169 systemd-logind[1208]: New session 16 of user core.
Oct 29 00:43:25.247621 systemd[1]: Started session-16.scope.
Oct 29 00:43:25.459895 sshd[3449]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:25.462087 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:52608.service.
Oct 29 00:43:25.467686 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:52598.service: Deactivated successfully.
Oct 29 00:43:25.468338 systemd[1]: session-16.scope: Deactivated successfully.
Oct 29 00:43:25.468901 systemd-logind[1208]: Session 16 logged out. Waiting for processes to exit.
Oct 29 00:43:25.469650 systemd-logind[1208]: Removed session 16.
Oct 29 00:43:25.499043 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 52608 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:25.500183 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:25.503372 systemd-logind[1208]: New session 17 of user core.
Oct 29 00:43:25.504173 systemd[1]: Started session-17.scope.
Oct 29 00:43:25.620988 sshd[3461]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:25.623487 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:52608.service: Deactivated successfully.
Oct 29 00:43:25.624173 systemd[1]: session-17.scope: Deactivated successfully.
Oct 29 00:43:25.624732 systemd-logind[1208]: Session 17 logged out. Waiting for processes to exit.
Oct 29 00:43:25.625613 systemd-logind[1208]: Removed session 17.
Oct 29 00:43:30.625949 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:50414.service.
Oct 29 00:43:30.659356 sshd[3480]: Accepted publickey for core from 10.0.0.1 port 50414 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:30.660566 sshd[3480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:30.663861 systemd-logind[1208]: New session 18 of user core.
Oct 29 00:43:30.664754 systemd[1]: Started session-18.scope.
Oct 29 00:43:30.774621 sshd[3480]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:30.777029 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:50414.service: Deactivated successfully.
Oct 29 00:43:30.777763 systemd[1]: session-18.scope: Deactivated successfully.
Oct 29 00:43:30.778341 systemd-logind[1208]: Session 18 logged out. Waiting for processes to exit.
Oct 29 00:43:30.779240 systemd-logind[1208]: Removed session 18.
Oct 29 00:43:35.779004 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:50426.service.
Oct 29 00:43:35.813325 sshd[3497]: Accepted publickey for core from 10.0.0.1 port 50426 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:35.814534 sshd[3497]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:35.818720 systemd-logind[1208]: New session 19 of user core.
Oct 29 00:43:35.819741 systemd[1]: Started session-19.scope.
Oct 29 00:43:35.926687 sshd[3497]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:35.929035 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:50426.service: Deactivated successfully.
Oct 29 00:43:35.929808 systemd[1]: session-19.scope: Deactivated successfully.
Oct 29 00:43:35.930419 systemd-logind[1208]: Session 19 logged out. Waiting for processes to exit.
Oct 29 00:43:35.931289 systemd-logind[1208]: Removed session 19.
Oct 29 00:43:40.932032 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:37568.service.
Oct 29 00:43:40.965255 sshd[3512]: Accepted publickey for core from 10.0.0.1 port 37568 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:40.966380 sshd[3512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:40.970805 systemd[1]: Started session-20.scope.
Oct 29 00:43:40.971135 systemd-logind[1208]: New session 20 of user core.
Oct 29 00:43:41.078410 sshd[3512]: pam_unix(sshd:session): session closed for user core
Oct 29 00:43:41.081512 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:37568.service: Deactivated successfully.
Oct 29 00:43:41.082158 systemd[1]: session-20.scope: Deactivated successfully.
Oct 29 00:43:41.082742 systemd-logind[1208]: Session 20 logged out. Waiting for processes to exit.
Oct 29 00:43:41.083887 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:37574.service.
Oct 29 00:43:41.084674 systemd-logind[1208]: Removed session 20.
Oct 29 00:43:41.117962 sshd[3525]: Accepted publickey for core from 10.0.0.1 port 37574 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo
Oct 29 00:43:41.119278 sshd[3525]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 29 00:43:41.122988 systemd-logind[1208]: New session 21 of user core.
Oct 29 00:43:41.123875 systemd[1]: Started session-21.scope.
Oct 29 00:43:43.288815 kubelet[1929]: E1029 00:43:43.288769 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:43.519737 env[1220]: time="2025-10-29T00:43:43.519591094Z" level=info msg="StopContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" with timeout 30 (s)"
Oct 29 00:43:43.525379 env[1220]: time="2025-10-29T00:43:43.525326406Z" level=info msg="Stop container \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" with signal terminated"
Oct 29 00:43:43.546373 systemd[1]: cri-containerd-115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6.scope: Deactivated successfully.
Oct 29 00:43:43.556839 env[1220]: time="2025-10-29T00:43:43.556777805Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 29 00:43:43.567881 env[1220]: time="2025-10-29T00:43:43.567838145Z" level=info msg="StopContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" with timeout 2 (s)"
Oct 29 00:43:43.571562 env[1220]: time="2025-10-29T00:43:43.571523791Z" level=info msg="Stop container \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" with signal terminated"
Oct 29 00:43:43.572115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6-rootfs.mount: Deactivated successfully.
Oct 29 00:43:43.578388 systemd-networkd[1047]: lxc_health: Link DOWN
Oct 29 00:43:43.578394 systemd-networkd[1047]: lxc_health: Lost carrier
Oct 29 00:43:43.584434 env[1220]: time="2025-10-29T00:43:43.584387754Z" level=info msg="shim disconnected" id=115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6
Oct 29 00:43:43.584573 env[1220]: time="2025-10-29T00:43:43.584447755Z" level=warning msg="cleaning up after shim disconnected" id=115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6 namespace=k8s.io
Oct 29 00:43:43.584573 env[1220]: time="2025-10-29T00:43:43.584462355Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:43.592867 env[1220]: time="2025-10-29T00:43:43.592829341Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3576 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:43.595316 env[1220]: time="2025-10-29T00:43:43.595277652Z" level=info msg="StopContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" returns successfully"
Oct 29 00:43:43.595910 env[1220]: time="2025-10-29T00:43:43.595883540Z" level=info msg="StopPodSandbox for \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\""
Oct 29 00:43:43.595970 env[1220]: time="2025-10-29T00:43:43.595949821Z" level=info msg="Container to stop \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.597842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab-shm.mount: Deactivated successfully.
Oct 29 00:43:43.603366 systemd[1]: cri-containerd-d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab.scope: Deactivated successfully.
Oct 29 00:43:43.604998 systemd[1]: cri-containerd-3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d.scope: Deactivated successfully.
Oct 29 00:43:43.605321 systemd[1]: cri-containerd-3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d.scope: Consumed 6.139s CPU time.
Oct 29 00:43:43.624372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d-rootfs.mount: Deactivated successfully.
Oct 29 00:43:43.629758 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab-rootfs.mount: Deactivated successfully.
Oct 29 00:43:43.632600 env[1220]: time="2025-10-29T00:43:43.632561245Z" level=info msg="shim disconnected" id=3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d
Oct 29 00:43:43.632856 env[1220]: time="2025-10-29T00:43:43.632835968Z" level=warning msg="cleaning up after shim disconnected" id=3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d namespace=k8s.io
Oct 29 00:43:43.632945 env[1220]: time="2025-10-29T00:43:43.632930889Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:43.633261 env[1220]: time="2025-10-29T00:43:43.632742927Z" level=info msg="shim disconnected" id=d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab
Oct 29 00:43:43.633321 env[1220]: time="2025-10-29T00:43:43.633263094Z" level=warning msg="cleaning up after shim disconnected" id=d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab namespace=k8s.io
Oct 29 00:43:43.633321 env[1220]: time="2025-10-29T00:43:43.633273654Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:43.640119 env[1220]: time="2025-10-29T00:43:43.640081420Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3623 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:43.641238 env[1220]: time="2025-10-29T00:43:43.641183394Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3624 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:43.641596 env[1220]: time="2025-10-29T00:43:43.641570839Z" level=info msg="TearDown network for sandbox \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\" successfully"
Oct 29 00:43:43.641702 env[1220]: time="2025-10-29T00:43:43.641597199Z" level=info msg="StopPodSandbox for \"d1520fdbeaa2a75b18fe2cec40a1ef1e89262ea428a5dca907ec1bb8c460e5ab\" returns successfully"
Oct 29 00:43:43.642739 env[1220]: time="2025-10-29T00:43:43.642703453Z" level=info msg="StopContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" returns successfully"
Oct 29 00:43:43.644554 env[1220]: time="2025-10-29T00:43:43.644525916Z" level=info msg="StopPodSandbox for \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\""
Oct 29 00:43:43.644707 env[1220]: time="2025-10-29T00:43:43.644684758Z" level=info msg="Container to stop \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.644778 env[1220]: time="2025-10-29T00:43:43.644761959Z" level=info msg="Container to stop \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.644839 env[1220]: time="2025-10-29T00:43:43.644823160Z" level=info msg="Container to stop \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.644911 env[1220]: time="2025-10-29T00:43:43.644895121Z" level=info msg="Container to stop \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.644978 env[1220]: time="2025-10-29T00:43:43.644962322Z" level=info msg="Container to stop \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:43.656363 systemd[1]: cri-containerd-3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad.scope: Deactivated successfully.
Oct 29 00:43:43.682667 env[1220]: time="2025-10-29T00:43:43.682616599Z" level=info msg="shim disconnected" id=3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad
Oct 29 00:43:43.682957 env[1220]: time="2025-10-29T00:43:43.682939283Z" level=warning msg="cleaning up after shim disconnected" id=3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad namespace=k8s.io
Oct 29 00:43:43.683031 env[1220]: time="2025-10-29T00:43:43.683016644Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:43.690512 env[1220]: time="2025-10-29T00:43:43.690465618Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3668 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:43.690997 env[1220]: time="2025-10-29T00:43:43.690966104Z" level=info msg="TearDown network for sandbox \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" successfully"
Oct 29 00:43:43.691091 env[1220]: time="2025-10-29T00:43:43.691073626Z" level=info msg="StopPodSandbox for \"3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad\" returns successfully"
Oct 29 00:43:43.741626 kubelet[1929]: I1029 00:43:43.741594 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-etc-cni-netd\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.741925 kubelet[1929]: I1029 00:43:43.741906 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-net\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742028 kubelet[1929]: I1029 00:43:43.742012 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzr7d\" (UniqueName: \"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-kube-api-access-hzr7d\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742270 kubelet[1929]: I1029 00:43:43.741723 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.742329 kubelet[1929]: I1029 00:43:43.742241 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-run\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742329 kubelet[1929]: I1029 00:43:43.741983 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.742329 kubelet[1929]: I1029 00:43:43.742319 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cni-path\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742406 kubelet[1929]: I1029 00:43:43.742343 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-hubble-tls\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742406 kubelet[1929]: I1029 00:43:43.742362 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-config-path\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742406 kubelet[1929]: I1029 00:43:43.742381 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-bpf-maps\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742406 kubelet[1929]: I1029 00:43:43.742394 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-lib-modules\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742406 kubelet[1929]: I1029 00:43:43.742397 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cni-path" (OuterVolumeSpecName: "cni-path") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742418 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp9tk\" (UniqueName: \"kubernetes.io/projected/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-kube-api-access-zp9tk\") pod \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\" (UID: \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\") "
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742436 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-xtables-lock\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742472 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-kernel\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742490 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-cilium-config-path\") pod \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\" (UID: \"e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42\") "
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742504 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-hostproc\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742544 kubelet[1929]: I1029 00:43:43.742520 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-cgroup\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742676 kubelet[1929]: I1029 00:43:43.742537 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0024f661-3b09-4def-8936-cae43a5f9a80-clustermesh-secrets\") pod \"0024f661-3b09-4def-8936-cae43a5f9a80\" (UID: \"0024f661-3b09-4def-8936-cae43a5f9a80\") "
Oct 29 00:43:43.742676 kubelet[1929]: I1029 00:43:43.742570 1929 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cni-path\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:43.742676 kubelet[1929]: I1029 00:43:43.742580 1929 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:43.742676 kubelet[1929]: I1029 00:43:43.742588 1929 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:43.743134 kubelet[1929]: I1029 00:43:43.742785 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743134 kubelet[1929]: I1029 00:43:43.742827 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743134 kubelet[1929]: I1029 00:43:43.742843 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743134 kubelet[1929]: I1029 00:43:43.742859 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743793 kubelet[1929]: I1029 00:43:43.743502 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743793 kubelet[1929]: I1029 00:43:43.743515 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-hostproc" (OuterVolumeSpecName: "hostproc") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.743793 kubelet[1929]: I1029 00:43:43.743572 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:43.744305 kubelet[1929]: I1029 00:43:43.744269 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 29 00:43:43.745779 kubelet[1929]: I1029 00:43:43.745743 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42" (UID: "e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 29 00:43:43.747935 kubelet[1929]: I1029 00:43:43.747907 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-kube-api-access-zp9tk" (OuterVolumeSpecName: "kube-api-access-zp9tk") pod "e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42" (UID: "e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42"). InnerVolumeSpecName "kube-api-access-zp9tk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 29 00:43:43.748316 kubelet[1929]: I1029 00:43:43.748294 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 29 00:43:43.748892 kubelet[1929]: I1029 00:43:43.748865 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0024f661-3b09-4def-8936-cae43a5f9a80-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 29 00:43:43.749053 kubelet[1929]: I1029 00:43:43.749036 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-kube-api-access-hzr7d" (OuterVolumeSpecName: "kube-api-access-hzr7d") pod "0024f661-3b09-4def-8936-cae43a5f9a80" (UID: "0024f661-3b09-4def-8936-cae43a5f9a80"). InnerVolumeSpecName "kube-api-access-hzr7d".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843309 1929 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843340 1929 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843350 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843357 1929 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843366 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843373 1929 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0024f661-3b09-4def-8936-cae43a5f9a80-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 kubelet[1929]: I1029 00:43:43.843381 1929 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hzr7d\" (UniqueName: \"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-kube-api-access-hzr7d\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843415 
kubelet[1929]: I1029 00:43:43.843388 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.843694 kubelet[1929]: I1029 00:43:43.843397 1929 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0024f661-3b09-4def-8936-cae43a5f9a80-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.844312 kubelet[1929]: I1029 00:43:43.844240 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0024f661-3b09-4def-8936-cae43a5f9a80-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.844312 kubelet[1929]: I1029 00:43:43.844271 1929 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.844312 kubelet[1929]: I1029 00:43:43.844281 1929 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0024f661-3b09-4def-8936-cae43a5f9a80-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:43.844312 kubelet[1929]: I1029 00:43:43.844288 1929 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zp9tk\" (UniqueName: \"kubernetes.io/projected/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42-kube-api-access-zp9tk\") on node \"localhost\" DevicePath \"\"" Oct 29 00:43:44.296086 systemd[1]: Removed slice kubepods-burstable-pod0024f661_3b09_4def_8936_cae43a5f9a80.slice. Oct 29 00:43:44.296172 systemd[1]: kubepods-burstable-pod0024f661_3b09_4def_8936_cae43a5f9a80.slice: Consumed 6.253s CPU time. Oct 29 00:43:44.297061 systemd[1]: Removed slice kubepods-besteffort-pode6cbfed2_8e15_4bb1_98c7_4e20cbaa4f42.slice. 
Oct 29 00:43:44.469633 kubelet[1929]: I1029 00:43:44.469529 1929 scope.go:117] "RemoveContainer" containerID="115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6" Oct 29 00:43:44.477708 env[1220]: time="2025-10-29T00:43:44.477630925Z" level=info msg="RemoveContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\"" Oct 29 00:43:44.482339 env[1220]: time="2025-10-29T00:43:44.482122142Z" level=info msg="RemoveContainer for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" returns successfully" Oct 29 00:43:44.482429 kubelet[1929]: I1029 00:43:44.482369 1929 scope.go:117] "RemoveContainer" containerID="115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6" Oct 29 00:43:44.482674 env[1220]: time="2025-10-29T00:43:44.482602388Z" level=error msg="ContainerStatus for \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\": not found" Oct 29 00:43:44.483644 kubelet[1929]: E1029 00:43:44.483550 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\": not found" containerID="115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6" Oct 29 00:43:44.483644 kubelet[1929]: I1029 00:43:44.483598 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6"} err="failed to get container status \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\": rpc error: code = NotFound desc = an error occurred when try to find container \"115776fb8cdff2aff6c00d860ac1fe24033b214d2d3824122277f367c3ffedf6\": not found" Oct 29 00:43:44.483644 kubelet[1929]: I1029 
00:43:44.483631 1929 scope.go:117] "RemoveContainer" containerID="3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d" Oct 29 00:43:44.486913 env[1220]: time="2025-10-29T00:43:44.486835801Z" level=info msg="RemoveContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\"" Oct 29 00:43:44.491166 env[1220]: time="2025-10-29T00:43:44.491103375Z" level=info msg="RemoveContainer for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" returns successfully" Oct 29 00:43:44.491540 kubelet[1929]: I1029 00:43:44.491513 1929 scope.go:117] "RemoveContainer" containerID="74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801" Oct 29 00:43:44.493417 env[1220]: time="2025-10-29T00:43:44.492883237Z" level=info msg="RemoveContainer for \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\"" Oct 29 00:43:44.496484 env[1220]: time="2025-10-29T00:43:44.496424162Z" level=info msg="RemoveContainer for \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\" returns successfully" Oct 29 00:43:44.496702 kubelet[1929]: I1029 00:43:44.496667 1929 scope.go:117] "RemoveContainer" containerID="99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1" Oct 29 00:43:44.497757 env[1220]: time="2025-10-29T00:43:44.497729499Z" level=info msg="RemoveContainer for \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\"" Oct 29 00:43:44.499964 env[1220]: time="2025-10-29T00:43:44.499925886Z" level=info msg="RemoveContainer for \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\" returns successfully" Oct 29 00:43:44.500117 kubelet[1929]: I1029 00:43:44.500083 1929 scope.go:117] "RemoveContainer" containerID="37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b" Oct 29 00:43:44.501281 env[1220]: time="2025-10-29T00:43:44.501252743Z" level=info msg="RemoveContainer for \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\"" Oct 29 00:43:44.503726 
env[1220]: time="2025-10-29T00:43:44.503685254Z" level=info msg="RemoveContainer for \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\" returns successfully" Oct 29 00:43:44.503901 kubelet[1929]: I1029 00:43:44.503880 1929 scope.go:117] "RemoveContainer" containerID="cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8" Oct 29 00:43:44.505158 env[1220]: time="2025-10-29T00:43:44.505131352Z" level=info msg="RemoveContainer for \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\"" Oct 29 00:43:44.507363 env[1220]: time="2025-10-29T00:43:44.507335700Z" level=info msg="RemoveContainer for \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\" returns successfully" Oct 29 00:43:44.507534 kubelet[1929]: I1029 00:43:44.507513 1929 scope.go:117] "RemoveContainer" containerID="3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d" Oct 29 00:43:44.509393 env[1220]: time="2025-10-29T00:43:44.509327365Z" level=error msg="ContainerStatus for \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\": not found" Oct 29 00:43:44.509811 kubelet[1929]: E1029 00:43:44.509684 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\": not found" containerID="3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d" Oct 29 00:43:44.509971 kubelet[1929]: I1029 00:43:44.509847 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d"} err="failed to get container status \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\": rpc error: code = NotFound desc = an 
error occurred when try to find container \"3811d9c9c264f86467bd111def4fcf59108ebf6d7696e224003aee84d6e6017d\": not found" Oct 29 00:43:44.510015 kubelet[1929]: I1029 00:43:44.509968 1929 scope.go:117] "RemoveContainer" containerID="74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801" Oct 29 00:43:44.510405 env[1220]: time="2025-10-29T00:43:44.510347378Z" level=error msg="ContainerStatus for \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\": not found" Oct 29 00:43:44.510610 kubelet[1929]: E1029 00:43:44.510585 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\": not found" containerID="74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801" Oct 29 00:43:44.510736 kubelet[1929]: I1029 00:43:44.510716 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801"} err="failed to get container status \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\": rpc error: code = NotFound desc = an error occurred when try to find container \"74af74580cb46f9be70936bf57893f1f5f2d9500b172827dca21fed3a1aae801\": not found" Oct 29 00:43:44.510809 kubelet[1929]: I1029 00:43:44.510796 1929 scope.go:117] "RemoveContainer" containerID="99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1" Oct 29 00:43:44.511699 env[1220]: time="2025-10-29T00:43:44.511635554Z" level=error msg="ContainerStatus for \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\": not found" Oct 29 00:43:44.511948 kubelet[1929]: E1029 00:43:44.511916 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\": not found" containerID="99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1" Oct 29 00:43:44.512060 kubelet[1929]: I1029 00:43:44.511955 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1"} err="failed to get container status \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\": rpc error: code = NotFound desc = an error occurred when try to find container \"99919d81a1d10b088af1a22398c79a681f2acb5a55e5c50acd32707a29b767f1\": not found" Oct 29 00:43:44.512060 kubelet[1929]: I1029 00:43:44.512057 1929 scope.go:117] "RemoveContainer" containerID="37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b" Oct 29 00:43:44.512378 env[1220]: time="2025-10-29T00:43:44.512324043Z" level=error msg="ContainerStatus for \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\": not found" Oct 29 00:43:44.512611 kubelet[1929]: E1029 00:43:44.512488 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\": not found" containerID="37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b" Oct 29 00:43:44.512666 kubelet[1929]: I1029 00:43:44.512610 1929 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b"} err="failed to get container status \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\": rpc error: code = NotFound desc = an error occurred when try to find container \"37fe45b0a69cf1e3de6b689189efa931cfd9fa027178b00f771bc7bbabf6151b\": not found" Oct 29 00:43:44.512666 kubelet[1929]: I1029 00:43:44.512624 1929 scope.go:117] "RemoveContainer" containerID="cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8" Oct 29 00:43:44.512924 env[1220]: time="2025-10-29T00:43:44.512880130Z" level=error msg="ContainerStatus for \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\": not found" Oct 29 00:43:44.513401 kubelet[1929]: E1029 00:43:44.513377 1929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\": not found" containerID="cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8" Oct 29 00:43:44.513525 kubelet[1929]: I1029 00:43:44.513502 1929 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8"} err="failed to get container status \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"cae2c3d80141a1e1d0ba00d4c32dfaf8364991b3bf2e43e3ab43b303bceff9c8\": not found" Oct 29 00:43:44.529795 systemd[1]: var-lib-kubelet-pods-e6cbfed2\x2d8e15\x2d4bb1\x2d98c7\x2d4e20cbaa4f42-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp9tk.mount: Deactivated successfully. 
Oct 29 00:43:44.529891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad-rootfs.mount: Deactivated successfully. Oct 29 00:43:44.529945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3697654d10425bc3318d9c9112820ed0bee67d9b29e9a15ce1c0c7ecb17ba4ad-shm.mount: Deactivated successfully. Oct 29 00:43:44.530007 systemd[1]: var-lib-kubelet-pods-0024f661\x2d3b09\x2d4def\x2d8936\x2dcae43a5f9a80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhzr7d.mount: Deactivated successfully. Oct 29 00:43:44.530059 systemd[1]: var-lib-kubelet-pods-0024f661\x2d3b09\x2d4def\x2d8936\x2dcae43a5f9a80-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 29 00:43:44.530110 systemd[1]: var-lib-kubelet-pods-0024f661\x2d3b09\x2d4def\x2d8936\x2dcae43a5f9a80-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 29 00:43:45.407438 sshd[3525]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:45.411479 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:37588.service. Oct 29 00:43:45.411985 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:37574.service: Deactivated successfully. Oct 29 00:43:45.412941 systemd[1]: session-21.scope: Deactivated successfully. Oct 29 00:43:45.413132 systemd[1]: session-21.scope: Consumed 1.634s CPU time. Oct 29 00:43:45.413664 systemd-logind[1208]: Session 21 logged out. Waiting for processes to exit. Oct 29 00:43:45.414592 systemd-logind[1208]: Removed session 21. Oct 29 00:43:45.446956 sshd[3687]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:43:45.448403 sshd[3687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:43:45.452119 systemd-logind[1208]: New session 22 of user core. Oct 29 00:43:45.452894 systemd[1]: Started session-22.scope. 
Oct 29 00:43:46.193675 sshd[3687]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:46.197433 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:37602.service. Oct 29 00:43:46.202577 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:37588.service: Deactivated successfully. Oct 29 00:43:46.203617 systemd[1]: session-22.scope: Deactivated successfully. Oct 29 00:43:46.204319 systemd-logind[1208]: Session 22 logged out. Waiting for processes to exit. Oct 29 00:43:46.206870 systemd-logind[1208]: Removed session 22. Oct 29 00:43:46.215001 systemd[1]: Created slice kubepods-burstable-pod1e3433e9_0894_4290_983c_ff53c8e727cc.slice. Oct 29 00:43:46.246832 sshd[3699]: Accepted publickey for core from 10.0.0.1 port 37602 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:43:46.248181 sshd[3699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:43:46.252467 systemd-logind[1208]: New session 23 of user core. Oct 29 00:43:46.254025 systemd[1]: Started session-23.scope. 
Oct 29 00:43:46.259582 kubelet[1929]: I1029 00:43:46.259405 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-hostproc\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259582 kubelet[1929]: I1029 00:43:46.259494 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-cgroup\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259582 kubelet[1929]: I1029 00:43:46.259541 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-xtables-lock\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259582 kubelet[1929]: I1029 00:43:46.259562 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-net\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259937 kubelet[1929]: I1029 00:43:46.259615 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cni-path\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259937 kubelet[1929]: I1029 00:43:46.259664 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-ipsec-secrets\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259937 kubelet[1929]: I1029 00:43:46.259690 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-clustermesh-secrets\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259937 kubelet[1929]: I1029 00:43:46.259705 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-kernel\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.259937 kubelet[1929]: I1029 00:43:46.259727 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfklf\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-kube-api-access-bfklf\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259756 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-etc-cni-netd\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259782 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-lib-modules\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259798 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-hubble-tls\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259813 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-run\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259841 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-bpf-maps\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.260044 kubelet[1929]: I1029 00:43:46.259863 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-config-path\") pod \"cilium-mwg7c\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") " pod="kube-system/cilium-mwg7c" Oct 29 00:43:46.290866 kubelet[1929]: I1029 00:43:46.290827 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0024f661-3b09-4def-8936-cae43a5f9a80" path="/var/lib/kubelet/pods/0024f661-3b09-4def-8936-cae43a5f9a80/volumes" Oct 29 00:43:46.291405 kubelet[1929]: I1029 00:43:46.291386 1929 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42" path="/var/lib/kubelet/pods/e6cbfed2-8e15-4bb1-98c7-4e20cbaa4f42/volumes" Oct 29 00:43:46.390620 sshd[3699]: pam_unix(sshd:session): session closed for user core Oct 29 00:43:46.396714 systemd[1]: Started sshd@23-10.0.0.113:22-10.0.0.1:37616.service. Oct 29 00:43:46.397754 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:37602.service: Deactivated successfully. Oct 29 00:43:46.398926 systemd[1]: session-23.scope: Deactivated successfully. Oct 29 00:43:46.399570 systemd-logind[1208]: Session 23 logged out. Waiting for processes to exit. Oct 29 00:43:46.409430 kubelet[1929]: E1029 00:43:46.409398 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:46.410165 env[1220]: time="2025-10-29T00:43:46.410107537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwg7c,Uid:1e3433e9-0894-4290-983c-ff53c8e727cc,Namespace:kube-system,Attempt:0,}" Oct 29 00:43:46.412137 systemd-logind[1208]: Removed session 23. Oct 29 00:43:46.426177 env[1220]: time="2025-10-29T00:43:46.426100497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 29 00:43:46.426177 env[1220]: time="2025-10-29T00:43:46.426140897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 29 00:43:46.426177 env[1220]: time="2025-10-29T00:43:46.426151898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 29 00:43:46.426359 env[1220]: time="2025-10-29T00:43:46.426281299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895 pid=3726 runtime=io.containerd.runc.v2 Oct 29 00:43:46.437072 systemd[1]: Started cri-containerd-19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895.scope. Oct 29 00:43:46.441221 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 37616 ssh2: RSA SHA256:pYB66aLjdXbBwWKgZ+jLlT0UVkYzJYHWsaqS3PI+gyo Oct 29 00:43:46.445474 sshd[3717]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 29 00:43:46.451912 systemd-logind[1208]: New session 24 of user core. Oct 29 00:43:46.452830 systemd[1]: Started session-24.scope. Oct 29 00:43:46.470843 env[1220]: time="2025-10-29T00:43:46.470800017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mwg7c,Uid:1e3433e9-0894-4290-983c-ff53c8e727cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\"" Oct 29 00:43:46.471488 kubelet[1929]: E1029 00:43:46.471454 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 29 00:43:46.477440 env[1220]: time="2025-10-29T00:43:46.477400739Z" level=info msg="CreateContainer within sandbox \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 29 00:43:46.486249 env[1220]: time="2025-10-29T00:43:46.486210809Z" level=info msg="CreateContainer within sandbox \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\"" 
Oct 29 00:43:46.486638 env[1220]: time="2025-10-29T00:43:46.486596014Z" level=info msg="StartContainer for \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\""
Oct 29 00:43:46.501949 systemd[1]: Started cri-containerd-5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea.scope.
Oct 29 00:43:46.514204 systemd[1]: cri-containerd-5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea.scope: Deactivated successfully.
Oct 29 00:43:46.534305 env[1220]: time="2025-10-29T00:43:46.534248051Z" level=info msg="shim disconnected" id=5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea
Oct 29 00:43:46.534537 env[1220]: time="2025-10-29T00:43:46.534514814Z" level=warning msg="cleaning up after shim disconnected" id=5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea namespace=k8s.io
Oct 29 00:43:46.534606 env[1220]: time="2025-10-29T00:43:46.534592815Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:46.543054 env[1220]: time="2025-10-29T00:43:46.542993360Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3792 runtime=io.containerd.runc.v2\ntime=\"2025-10-29T00:43:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 29 00:43:46.543451 env[1220]: time="2025-10-29T00:43:46.543340885Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed"
Oct 29 00:43:46.543657 env[1220]: time="2025-10-29T00:43:46.543615688Z" level=error msg="Failed to pipe stderr of container \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\"" error="reading from a closed fifo"
Oct 29 00:43:46.544183 env[1220]: time="2025-10-29T00:43:46.544143135Z" level=error msg="Failed to pipe stdout of container \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\"" error="reading from a closed fifo"
Oct 29 00:43:46.549380 env[1220]: time="2025-10-29T00:43:46.549306039Z" level=error msg="StartContainer for \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 29 00:43:46.549615 kubelet[1929]: E1029 00:43:46.549581 1929 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea"
Oct 29 00:43:46.549702 kubelet[1929]: E1029 00:43:46.549668 1929 kuberuntime_manager.go:1449] "Unhandled Error" err="init container mount-cgroup start failed in pod cilium-mwg7c_kube-system(1e3433e9-0894-4290-983c-ff53c8e727cc): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" logger="UnhandledError"
Oct 29 00:43:46.549738 kubelet[1929]: E1029 00:43:46.549709 1929 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-mwg7c" podUID="1e3433e9-0894-4290-983c-ff53c8e727cc"
Oct 29 00:43:47.288765 kubelet[1929]: E1029 00:43:47.288680 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:47.329101 kubelet[1929]: E1029 00:43:47.329066 1929 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 29 00:43:47.490677 env[1220]: time="2025-10-29T00:43:47.490636962Z" level=info msg="StopPodSandbox for \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\""
Oct 29 00:43:47.491017 env[1220]: time="2025-10-29T00:43:47.490711482Z" level=info msg="Container to stop \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 29 00:43:47.492505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895-shm.mount: Deactivated successfully.
Oct 29 00:43:47.497561 systemd[1]: cri-containerd-19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895.scope: Deactivated successfully.
Oct 29 00:43:47.521755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895-rootfs.mount: Deactivated successfully.
Oct 29 00:43:47.525256 env[1220]: time="2025-10-29T00:43:47.525213153Z" level=info msg="shim disconnected" id=19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895
Oct 29 00:43:47.525443 env[1220]: time="2025-10-29T00:43:47.525424835Z" level=warning msg="cleaning up after shim disconnected" id=19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895 namespace=k8s.io
Oct 29 00:43:47.525528 env[1220]: time="2025-10-29T00:43:47.525507716Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:47.533643 env[1220]: time="2025-10-29T00:43:47.533610618Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3823 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:47.534022 env[1220]: time="2025-10-29T00:43:47.533992902Z" level=info msg="TearDown network for sandbox \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\" successfully"
Oct 29 00:43:47.534116 env[1220]: time="2025-10-29T00:43:47.534098224Z" level=info msg="StopPodSandbox for \"19da9f30685223c3e2587a166cdf90aca96a820390272f655e24a536d430b895\" returns successfully"
Oct 29 00:43:47.672051 kubelet[1929]: I1029 00:43:47.671989 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-ipsec-secrets\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672065 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-config-path\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672088 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-hubble-tls\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672105 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-bpf-maps\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672125 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfklf\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-kube-api-access-bfklf\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672138 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-etc-cni-netd\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672294 kubelet[1929]: I1029 00:43:47.672158 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-kernel\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672171 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-hostproc\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672183 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-xtables-lock\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672217 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-net\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672234 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cni-path\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672248 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-cgroup\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672451 kubelet[1929]: I1029 00:43:47.672265 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-lib-modules\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672604 kubelet[1929]: I1029 00:43:47.672281 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-clustermesh-secrets\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672604 kubelet[1929]: I1029 00:43:47.672296 1929 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-run\") pod \"1e3433e9-0894-4290-983c-ff53c8e727cc\" (UID: \"1e3433e9-0894-4290-983c-ff53c8e727cc\") "
Oct 29 00:43:47.672604 kubelet[1929]: I1029 00:43:47.672318 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672604 kubelet[1929]: I1029 00:43:47.672356 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672604 kubelet[1929]: I1029 00:43:47.672381 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672724 kubelet[1929]: I1029 00:43:47.672400 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-hostproc" (OuterVolumeSpecName: "hostproc") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672724 kubelet[1929]: I1029 00:43:47.672418 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672724 kubelet[1929]: I1029 00:43:47.672433 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672724 kubelet[1929]: I1029 00:43:47.672449 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cni-path" (OuterVolumeSpecName: "cni-path") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672724 kubelet[1929]: I1029 00:43:47.672474 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672834 kubelet[1929]: I1029 00:43:47.672492 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.672834 kubelet[1929]: I1029 00:43:47.672751 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Oct 29 00:43:47.676240 systemd[1]: var-lib-kubelet-pods-1e3433e9\x2d0894\x2d4290\x2d983c\x2dff53c8e727cc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 29 00:43:47.676343 systemd[1]: var-lib-kubelet-pods-1e3433e9\x2d0894\x2d4290\x2d983c\x2dff53c8e727cc-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Oct 29 00:43:47.677643 kubelet[1929]: I1029 00:43:47.677608 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 29 00:43:47.677736 kubelet[1929]: I1029 00:43:47.677640 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Oct 29 00:43:47.677771 kubelet[1929]: I1029 00:43:47.677753 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Oct 29 00:43:47.678294 kubelet[1929]: I1029 00:43:47.678265 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 29 00:43:47.678403 kubelet[1929]: I1029 00:43:47.678378 1929 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-kube-api-access-bfklf" (OuterVolumeSpecName: "kube-api-access-bfklf") pod "1e3433e9-0894-4290-983c-ff53c8e727cc" (UID: "1e3433e9-0894-4290-983c-ff53c8e727cc"). InnerVolumeSpecName "kube-api-access-bfklf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Oct 29 00:43:47.678544 systemd[1]: var-lib-kubelet-pods-1e3433e9\x2d0894\x2d4290\x2d983c\x2dff53c8e727cc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 29 00:43:47.773184 kubelet[1929]: I1029 00:43:47.773150 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773184 kubelet[1929]: I1029 00:43:47.773181 1929 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-lib-modules\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773219 1929 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773230 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-run\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773239 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773247 1929 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1e3433e9-0894-4290-983c-ff53c8e727cc-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773255 1929 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-hubble-tls\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773262 1929 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-bpf-maps\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773269 1929 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bfklf\" (UniqueName: \"kubernetes.io/projected/1e3433e9-0894-4290-983c-ff53c8e727cc-kube-api-access-bfklf\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773347 kubelet[1929]: I1029 00:43:47.773276 1929 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773542 kubelet[1929]: I1029 00:43:47.773283 1929 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773542 kubelet[1929]: I1029 00:43:47.773291 1929 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-hostproc\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773542 kubelet[1929]: I1029 00:43:47.773299 1929 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-xtables-lock\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773542 kubelet[1929]: I1029 00:43:47.773306 1929 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:47.773542 kubelet[1929]: I1029 00:43:47.773314 1929 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1e3433e9-0894-4290-983c-ff53c8e727cc-cni-path\") on node \"localhost\" DevicePath \"\""
Oct 29 00:43:48.294119 systemd[1]: Removed slice kubepods-burstable-pod1e3433e9_0894_4290_983c_ff53c8e727cc.slice.
Oct 29 00:43:48.364588 systemd[1]: var-lib-kubelet-pods-1e3433e9\x2d0894\x2d4290\x2d983c\x2dff53c8e727cc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbfklf.mount: Deactivated successfully.
Oct 29 00:43:48.494040 kubelet[1929]: I1029 00:43:48.494005 1929 scope.go:117] "RemoveContainer" containerID="5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea"
Oct 29 00:43:48.495326 env[1220]: time="2025-10-29T00:43:48.495290070Z" level=info msg="RemoveContainer for \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\""
Oct 29 00:43:48.498609 env[1220]: time="2025-10-29T00:43:48.498564711Z" level=info msg="RemoveContainer for \"5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea\" returns successfully"
Oct 29 00:43:48.540682 systemd[1]: Created slice kubepods-burstable-pod72db6544_dd57_4bf9_8193_6243c64b9b22.slice.
Oct 29 00:43:48.577186 kubelet[1929]: I1029 00:43:48.577067 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-host-proc-sys-net\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577186 kubelet[1929]: I1029 00:43:48.577126 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-cilium-run\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577186 kubelet[1929]: I1029 00:43:48.577152 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-host-proc-sys-kernel\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577186 kubelet[1929]: I1029 00:43:48.577171 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-cilium-cgroup\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577206 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/72db6544-dd57-4bf9-8193-6243c64b9b22-clustermesh-secrets\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577400 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbjc\" (UniqueName: \"kubernetes.io/projected/72db6544-dd57-4bf9-8193-6243c64b9b22-kube-api-access-kwbjc\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577430 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-bpf-maps\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577451 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-lib-modules\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577490 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/72db6544-dd57-4bf9-8193-6243c64b9b22-cilium-ipsec-secrets\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577572 kubelet[1929]: I1029 00:43:48.577510 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-hostproc\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577714 kubelet[1929]: I1029 00:43:48.577528 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-etc-cni-netd\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577714 kubelet[1929]: I1029 00:43:48.577549 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72db6544-dd57-4bf9-8193-6243c64b9b22-cilium-config-path\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577714 kubelet[1929]: I1029 00:43:48.577569 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/72db6544-dd57-4bf9-8193-6243c64b9b22-hubble-tls\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577714 kubelet[1929]: I1029 00:43:48.577587 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-cni-path\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.577714 kubelet[1929]: I1029 00:43:48.577606 1929 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72db6544-dd57-4bf9-8193-6243c64b9b22-xtables-lock\") pod \"cilium-kz9lx\" (UID: \"72db6544-dd57-4bf9-8193-6243c64b9b22\") " pod="kube-system/cilium-kz9lx"
Oct 29 00:43:48.845220 kubelet[1929]: E1029 00:43:48.845079 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:48.846333 env[1220]: time="2025-10-29T00:43:48.846277753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kz9lx,Uid:72db6544-dd57-4bf9-8193-6243c64b9b22,Namespace:kube-system,Attempt:0,}"
Oct 29 00:43:48.857351 env[1220]: time="2025-10-29T00:43:48.857280529Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 29 00:43:48.857464 env[1220]: time="2025-10-29T00:43:48.857360290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 29 00:43:48.857464 env[1220]: time="2025-10-29T00:43:48.857388771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 29 00:43:48.857618 env[1220]: time="2025-10-29T00:43:48.857584253Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb pid=3853 runtime=io.containerd.runc.v2
Oct 29 00:43:48.867652 systemd[1]: Started cri-containerd-f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb.scope.
Oct 29 00:43:48.891241 env[1220]: time="2025-10-29T00:43:48.891175151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kz9lx,Uid:72db6544-dd57-4bf9-8193-6243c64b9b22,Namespace:kube-system,Attempt:0,} returns sandbox id \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\""
Oct 29 00:43:48.892084 kubelet[1929]: E1029 00:43:48.892058 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:48.895993 env[1220]: time="2025-10-29T00:43:48.895958330Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Oct 29 00:43:48.906443 env[1220]: time="2025-10-29T00:43:48.906396380Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114\""
Oct 29 00:43:48.907730 env[1220]: time="2025-10-29T00:43:48.906971707Z" level=info msg="StartContainer for \"19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114\""
Oct 29 00:43:48.922541 systemd[1]: Started cri-containerd-19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114.scope.
Oct 29 00:43:48.950401 env[1220]: time="2025-10-29T00:43:48.950347646Z" level=info msg="StartContainer for \"19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114\" returns successfully"
Oct 29 00:43:48.959281 systemd[1]: cri-containerd-19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114.scope: Deactivated successfully.
Oct 29 00:43:48.981179 env[1220]: time="2025-10-29T00:43:48.981133949Z" level=info msg="shim disconnected" id=19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114
Oct 29 00:43:48.981179 env[1220]: time="2025-10-29T00:43:48.981179949Z" level=warning msg="cleaning up after shim disconnected" id=19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114 namespace=k8s.io
Oct 29 00:43:48.981377 env[1220]: time="2025-10-29T00:43:48.981198589Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:48.986942 env[1220]: time="2025-10-29T00:43:48.986906580Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3937 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:49.497846 kubelet[1929]: E1029 00:43:49.497797 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:49.502559 env[1220]: time="2025-10-29T00:43:49.502500327Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Oct 29 00:43:49.514699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3007384703.mount: Deactivated successfully.
Oct 29 00:43:49.518511 env[1220]: time="2025-10-29T00:43:49.518320403Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440\""
Oct 29 00:43:49.519099 env[1220]: time="2025-10-29T00:43:49.519070332Z" level=info msg="StartContainer for \"956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440\""
Oct 29 00:43:49.536428 systemd[1]: Started cri-containerd-956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440.scope.
Oct 29 00:43:49.560965 env[1220]: time="2025-10-29T00:43:49.560909010Z" level=info msg="StartContainer for \"956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440\" returns successfully"
Oct 29 00:43:49.569910 systemd[1]: cri-containerd-956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440.scope: Deactivated successfully.
Oct 29 00:43:49.598559 env[1220]: time="2025-10-29T00:43:49.598510276Z" level=info msg="shim disconnected" id=956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440
Oct 29 00:43:49.598783 env[1220]: time="2025-10-29T00:43:49.598762959Z" level=warning msg="cleaning up after shim disconnected" id=956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440 namespace=k8s.io
Oct 29 00:43:49.598850 env[1220]: time="2025-10-29T00:43:49.598836960Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:49.605883 env[1220]: time="2025-10-29T00:43:49.605843966Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4000 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:49.639648 kubelet[1929]: W1029 00:43:49.639567 1929 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e3433e9_0894_4290_983c_ff53c8e727cc.slice/cri-containerd-5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea.scope WatchSource:0}: container "5fe23e186b76350fc7836f360d3932ee710fbc1cf08be57031f12752f62cc8ea" in namespace "k8s.io": not found
Oct 29 00:43:50.290225 kubelet[1929]: I1029 00:43:50.290166 1929 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e3433e9-0894-4290-983c-ff53c8e727cc" path="/var/lib/kubelet/pods/1e3433e9-0894-4290-983c-ff53c8e727cc/volumes"
Oct 29 00:43:50.364785 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440-rootfs.mount: Deactivated successfully.
Oct 29 00:43:50.502363 kubelet[1929]: E1029 00:43:50.501396 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:50.506394 env[1220]: time="2025-10-29T00:43:50.506352418Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 29 00:43:50.517915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3562678717.mount: Deactivated successfully.
Oct 29 00:43:50.522881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1013238685.mount: Deactivated successfully.
Oct 29 00:43:50.525786 env[1220]: time="2025-10-29T00:43:50.525744577Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb\""
Oct 29 00:43:50.526407 env[1220]: time="2025-10-29T00:43:50.526326664Z" level=info msg="StartContainer for \"43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb\""
Oct 29 00:43:50.540802 systemd[1]: Started cri-containerd-43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb.scope.
Oct 29 00:43:50.569293 env[1220]: time="2025-10-29T00:43:50.569184313Z" level=info msg="StartContainer for \"43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb\" returns successfully"
Oct 29 00:43:50.575036 systemd[1]: cri-containerd-43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb.scope: Deactivated successfully.
Oct 29 00:43:50.635908 env[1220]: time="2025-10-29T00:43:50.635855856Z" level=info msg="shim disconnected" id=43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb
Oct 29 00:43:50.635908 env[1220]: time="2025-10-29T00:43:50.635905257Z" level=warning msg="cleaning up after shim disconnected" id=43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb namespace=k8s.io
Oct 29 00:43:50.635908 env[1220]: time="2025-10-29T00:43:50.635914977Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:50.644389 env[1220]: time="2025-10-29T00:43:50.644345121Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4058 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:51.505335 kubelet[1929]: E1029 00:43:51.505307 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:51.511284 env[1220]: time="2025-10-29T00:43:51.510519271Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 29 00:43:51.527322 env[1220]: time="2025-10-29T00:43:51.527281638Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c\""
Oct 29 00:43:51.528215 env[1220]: time="2025-10-29T00:43:51.528044207Z" level=info msg="StartContainer for \"f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c\""
Oct 29 00:43:51.541083 systemd[1]: Started cri-containerd-f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c.scope.
Oct 29 00:43:51.567514 env[1220]: time="2025-10-29T00:43:51.567448452Z" level=info msg="StartContainer for \"f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c\" returns successfully"
Oct 29 00:43:51.568346 systemd[1]: cri-containerd-f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c.scope: Deactivated successfully.
Oct 29 00:43:51.592634 env[1220]: time="2025-10-29T00:43:51.592587841Z" level=info msg="shim disconnected" id=f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c
Oct 29 00:43:51.593021 env[1220]: time="2025-10-29T00:43:51.592997686Z" level=warning msg="cleaning up after shim disconnected" id=f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c namespace=k8s.io
Oct 29 00:43:51.593101 env[1220]: time="2025-10-29T00:43:51.593086727Z" level=info msg="cleaning up dead shim"
Oct 29 00:43:51.602767 env[1220]: time="2025-10-29T00:43:51.602718206Z" level=warning msg="cleanup warnings time=\"2025-10-29T00:43:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4113 runtime=io.containerd.runc.v2\n"
Oct 29 00:43:52.330594 kubelet[1929]: E1029 00:43:52.330555 1929 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 29 00:43:52.514213 kubelet[1929]: E1029 00:43:52.511869 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:52.522583 env[1220]: time="2025-10-29T00:43:52.522538901Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 29 00:43:52.538315 env[1220]: time="2025-10-29T00:43:52.538264614Z" level=info msg="CreateContainer within sandbox \"f81da0aa27f2512b4cfe36db1400c07833b72dfebac62036d251fc37fcb16cbb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"90f2d6edfb782f6618595e7751253220311cf3a0f250ded50741de2383c97724\""
Oct 29 00:43:52.539831 env[1220]: time="2025-10-29T00:43:52.538946742Z" level=info msg="StartContainer for \"90f2d6edfb782f6618595e7751253220311cf3a0f250ded50741de2383c97724\""
Oct 29 00:43:52.554842 systemd[1]: Started cri-containerd-90f2d6edfb782f6618595e7751253220311cf3a0f250ded50741de2383c97724.scope.
Oct 29 00:43:52.585257 env[1220]: time="2025-10-29T00:43:52.585151709Z" level=info msg="StartContainer for \"90f2d6edfb782f6618595e7751253220311cf3a0f250ded50741de2383c97724\" returns successfully"
Oct 29 00:43:52.751552 kubelet[1929]: W1029 00:43:52.751507 1929 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72db6544_dd57_4bf9_8193_6243c64b9b22.slice/cri-containerd-19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114.scope WatchSource:0}: task 19333046a42885580537a81237c2c653582f9ded2ed4a5d69e25d654dcd9b114 not found
Oct 29 00:43:52.927223 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Oct 29 00:43:53.516040 kubelet[1929]: E1029 00:43:53.515591 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:53.531200 kubelet[1929]: I1029 00:43:53.531096 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kz9lx" podStartSLOduration=5.531082968 podStartE2EDuration="5.531082968s" podCreationTimestamp="2025-10-29 00:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-29 00:43:53.53043632 +0000 UTC m=+81.334483303" watchObservedRunningTime="2025-10-29 00:43:53.531082968 +0000 UTC m=+81.335129951"
Oct 29 00:43:53.934489 kubelet[1929]: I1029 00:43:53.934442 1929 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-29T00:43:53Z","lastTransitionTime":"2025-10-29T00:43:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Oct 29 00:43:54.844580 kubelet[1929]: E1029 00:43:54.844540 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:55.682761 systemd-networkd[1047]: lxc_health: Link UP
Oct 29 00:43:55.689914 systemd-networkd[1047]: lxc_health: Gained carrier
Oct 29 00:43:55.690308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Oct 29 00:43:55.858239 kubelet[1929]: W1029 00:43:55.858182 1929 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72db6544_dd57_4bf9_8193_6243c64b9b22.slice/cri-containerd-956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440.scope WatchSource:0}: task 956d05059db18eb280f967cbe2988424f9ee15160f7ea490c21352f5c83e8440 not found
Oct 29 00:43:56.845023 kubelet[1929]: E1029 00:43:56.844975 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:57.010436 systemd-networkd[1047]: lxc_health: Gained IPv6LL
Oct 29 00:43:57.289148 kubelet[1929]: E1029 00:43:57.289117 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:57.523504 kubelet[1929]: E1029 00:43:57.523449 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:58.526649 kubelet[1929]: E1029 00:43:58.526592 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:43:58.966580 kubelet[1929]: W1029 00:43:58.965148 1929 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72db6544_dd57_4bf9_8193_6243c64b9b22.slice/cri-containerd-43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb.scope WatchSource:0}: task 43698a315c9d39bd1baa5e8cceda88197cec328be3bbedbcc5d95ee2223159fb not found
Oct 29 00:44:00.288679 kubelet[1929]: E1029 00:44:00.288635 1929 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 29 00:44:01.240260 systemd[1]: run-containerd-runc-k8s.io-90f2d6edfb782f6618595e7751253220311cf3a0f250ded50741de2383c97724-runc.l2x28d.mount: Deactivated successfully.
Oct 29 00:44:01.298031 sshd[3717]: pam_unix(sshd:session): session closed for user core
Oct 29 00:44:01.300897 systemd[1]: sshd@23-10.0.0.113:22-10.0.0.1:37616.service: Deactivated successfully.
Oct 29 00:44:01.301642 systemd[1]: session-24.scope: Deactivated successfully.
Oct 29 00:44:01.302407 systemd-logind[1208]: Session 24 logged out. Waiting for processes to exit.
Oct 29 00:44:01.303372 systemd-logind[1208]: Removed session 24.
Oct 29 00:44:02.075812 kubelet[1929]: W1029 00:44:02.075270 1929 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72db6544_dd57_4bf9_8193_6243c64b9b22.slice/cri-containerd-f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c.scope WatchSource:0}: task f8c3edb6c241023ce75f540dd346453c74eb858c6e3b781a44d7504ea9470a5c not found