Nov 1 00:22:39.730517 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 1 00:22:39.730536 kernel: Linux version 5.15.192-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Oct 31 23:12:38 -00 2025 Nov 1 00:22:39.730544 kernel: efi: EFI v2.70 by EDK II Nov 1 00:22:39.730549 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Nov 1 00:22:39.730554 kernel: random: crng init done Nov 1 00:22:39.730560 kernel: ACPI: Early table checksum verification disabled Nov 1 00:22:39.730566 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Nov 1 00:22:39.730573 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 1 00:22:39.730578 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730583 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730589 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730594 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730599 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730604 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730612 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730618 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730624 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 1 00:22:39.730630 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 1 00:22:39.730635 kernel: NUMA: Failed to initialise from firmware Nov 1 00:22:39.730641 kernel: NUMA: Faking a 
node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:39.730647 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] Nov 1 00:22:39.730653 kernel: Zone ranges: Nov 1 00:22:39.730658 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:39.730665 kernel: DMA32 empty Nov 1 00:22:39.730671 kernel: Normal empty Nov 1 00:22:39.730676 kernel: Movable zone start for each node Nov 1 00:22:39.730682 kernel: Early memory node ranges Nov 1 00:22:39.730687 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Nov 1 00:22:39.730693 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Nov 1 00:22:39.730699 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Nov 1 00:22:39.730704 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Nov 1 00:22:39.730710 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Nov 1 00:22:39.730716 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Nov 1 00:22:39.730721 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Nov 1 00:22:39.730727 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 1 00:22:39.730734 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 1 00:22:39.730740 kernel: psci: probing for conduit method from ACPI. Nov 1 00:22:39.730745 kernel: psci: PSCIv1.1 detected in firmware. 
Nov 1 00:22:39.730751 kernel: psci: Using standard PSCI v0.2 function IDs Nov 1 00:22:39.730762 kernel: psci: Trusted OS migration not required Nov 1 00:22:39.730770 kernel: psci: SMC Calling Convention v1.1 Nov 1 00:22:39.730777 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 1 00:22:39.730784 kernel: ACPI: SRAT not present Nov 1 00:22:39.730791 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 Nov 1 00:22:39.730797 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 Nov 1 00:22:39.730803 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 1 00:22:39.730809 kernel: Detected PIPT I-cache on CPU0 Nov 1 00:22:39.730815 kernel: CPU features: detected: GIC system register CPU interface Nov 1 00:22:39.730821 kernel: CPU features: detected: Hardware dirty bit management Nov 1 00:22:39.730828 kernel: CPU features: detected: Spectre-v4 Nov 1 00:22:39.730834 kernel: CPU features: detected: Spectre-BHB Nov 1 00:22:39.730841 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 1 00:22:39.730847 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 1 00:22:39.730853 kernel: CPU features: detected: ARM erratum 1418040 Nov 1 00:22:39.730859 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 1 00:22:39.730865 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Nov 1 00:22:39.730871 kernel: Policy zone: DMA Nov 1 00:22:39.730878 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:22:39.730885 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Nov 1 00:22:39.730891 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:22:39.730897 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 1 00:22:39.730903 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:22:39.730911 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7592K rodata, 36416K init, 777K bss, 114948K reserved, 0K cma-reserved) Nov 1 00:22:39.730917 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 1 00:22:39.730923 kernel: trace event string verifier disabled Nov 1 00:22:39.730929 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:22:39.730936 kernel: rcu: RCU event tracing is enabled. Nov 1 00:22:39.730942 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 1 00:22:39.730948 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:22:39.730955 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:22:39.730962 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 1 00:22:39.730969 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 1 00:22:39.730975 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 1 00:22:39.730993 kernel: GICv3: 256 SPIs implemented Nov 1 00:22:39.731000 kernel: GICv3: 0 Extended SPIs implemented Nov 1 00:22:39.731006 kernel: GICv3: Distributor has no Range Selector support Nov 1 00:22:39.731013 kernel: Root IRQ handler: gic_handle_irq Nov 1 00:22:39.731020 kernel: GICv3: 16 PPIs implemented Nov 1 00:22:39.731026 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 1 00:22:39.731036 kernel: ACPI: SRAT not present Nov 1 00:22:39.731043 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 1 00:22:39.731049 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Nov 1 00:22:39.731057 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Nov 1 00:22:39.731063 kernel: GICv3: using LPI property table @0x00000000400d0000 Nov 1 00:22:39.731070 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Nov 1 00:22:39.731080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:39.731090 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 1 00:22:39.731096 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 1 00:22:39.731103 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 1 00:22:39.731109 kernel: arm-pv: using stolen time PV Nov 1 00:22:39.731116 kernel: Console: colour dummy device 80x25 Nov 1 00:22:39.731122 kernel: ACPI: Core revision 20210730 Nov 1 00:22:39.731129 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Nov 1 00:22:39.731135 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:22:39.731142 kernel: LSM: Security Framework initializing Nov 1 00:22:39.731149 kernel: SELinux: Initializing. Nov 1 00:22:39.731156 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:39.731163 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 1 00:22:39.731170 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:22:39.731176 kernel: Platform MSI: ITS@0x8080000 domain created Nov 1 00:22:39.731183 kernel: PCI/MSI: ITS@0x8080000 domain created Nov 1 00:22:39.731190 kernel: Remapping and enabling EFI services. Nov 1 00:22:39.731196 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:22:39.731203 kernel: Detected PIPT I-cache on CPU1 Nov 1 00:22:39.731210 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 1 00:22:39.731217 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Nov 1 00:22:39.731223 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:39.731229 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 1 00:22:39.731242 kernel: Detected PIPT I-cache on CPU2 Nov 1 00:22:39.731248 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 1 00:22:39.731255 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Nov 1 00:22:39.731261 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:39.731267 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 1 00:22:39.731274 kernel: Detected PIPT I-cache on CPU3 Nov 1 00:22:39.731282 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 1 00:22:39.731288 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Nov 1 00:22:39.731294 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 1 00:22:39.731300 kernel: CPU3: 
Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 1 00:22:39.731311 kernel: smp: Brought up 1 node, 4 CPUs Nov 1 00:22:39.731319 kernel: SMP: Total of 4 processors activated. Nov 1 00:22:39.731326 kernel: CPU features: detected: 32-bit EL0 Support Nov 1 00:22:39.731332 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 1 00:22:39.731339 kernel: CPU features: detected: Common not Private translations Nov 1 00:22:39.731345 kernel: CPU features: detected: CRC32 instructions Nov 1 00:22:39.731352 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 1 00:22:39.731358 kernel: CPU features: detected: LSE atomic instructions Nov 1 00:22:39.731366 kernel: CPU features: detected: Privileged Access Never Nov 1 00:22:39.731373 kernel: CPU features: detected: RAS Extension Support Nov 1 00:22:39.731379 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 1 00:22:39.731385 kernel: CPU: All CPU(s) started at EL1 Nov 1 00:22:39.731392 kernel: alternatives: patching kernel code Nov 1 00:22:39.731399 kernel: devtmpfs: initialized Nov 1 00:22:39.731406 kernel: KASLR enabled Nov 1 00:22:39.731413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:22:39.731419 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 1 00:22:39.731426 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:22:39.731432 kernel: SMBIOS 3.0.0 present. 
Nov 1 00:22:39.731438 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Nov 1 00:22:39.731445 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:22:39.731451 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 1 00:22:39.731459 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 1 00:22:39.731466 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 1 00:22:39.731472 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:22:39.731478 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1 Nov 1 00:22:39.731485 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:22:39.731491 kernel: cpuidle: using governor menu Nov 1 00:22:39.731497 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 1 00:22:39.731504 kernel: ASID allocator initialised with 32768 entries Nov 1 00:22:39.731510 kernel: ACPI: bus type PCI registered Nov 1 00:22:39.731518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:22:39.731524 kernel: Serial: AMBA PL011 UART driver Nov 1 00:22:39.731531 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:22:39.731537 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Nov 1 00:22:39.731544 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:22:39.731550 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Nov 1 00:22:39.731557 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:22:39.731563 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 1 00:22:39.731570 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:22:39.731577 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:22:39.731584 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:22:39.731590 kernel: ACPI: Added _OSI(Linux-Dell-Video) Nov 1 00:22:39.731597 kernel: ACPI: Added 
_OSI(Linux-Lenovo-NV-HDMI-Audio) Nov 1 00:22:39.731603 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Nov 1 00:22:39.731610 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:22:39.731616 kernel: ACPI: Interpreter enabled Nov 1 00:22:39.731622 kernel: ACPI: Using GIC for interrupt routing Nov 1 00:22:39.731629 kernel: ACPI: MCFG table detected, 1 entries Nov 1 00:22:39.731636 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 1 00:22:39.731643 kernel: printk: console [ttyAMA0] enabled Nov 1 00:22:39.731649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 1 00:22:39.731769 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 1 00:22:39.731832 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 1 00:22:39.731889 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 1 00:22:39.731945 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 1 00:22:39.732029 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 1 00:22:39.732039 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 1 00:22:39.732046 kernel: PCI host bridge to bus 0000:00 Nov 1 00:22:39.732111 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 1 00:22:39.732163 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 1 00:22:39.732214 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 1 00:22:39.732277 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 1 00:22:39.732350 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Nov 1 00:22:39.732478 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Nov 1 00:22:39.732543 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Nov 1 00:22:39.732602 kernel: pci 0000:00:01.0: reg 0x14: [mem 
0x10000000-0x10000fff] Nov 1 00:22:39.732661 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Nov 1 00:22:39.732719 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Nov 1 00:22:39.732778 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Nov 1 00:22:39.732841 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Nov 1 00:22:39.732894 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 1 00:22:39.732951 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 1 00:22:39.733030 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 1 00:22:39.733040 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 1 00:22:39.733047 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 1 00:22:39.733053 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 1 00:22:39.733060 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 1 00:22:39.733068 kernel: iommu: Default domain type: Translated Nov 1 00:22:39.733075 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 1 00:22:39.733081 kernel: vgaarb: loaded Nov 1 00:22:39.733088 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:22:39.733095 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:22:39.733101 kernel: PTP clock support registered Nov 1 00:22:39.733107 kernel: Registered efivars operations Nov 1 00:22:39.733114 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 1 00:22:39.733120 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:22:39.733128 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:22:39.733135 kernel: pnp: PnP ACPI init Nov 1 00:22:39.733203 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 1 00:22:39.733213 kernel: pnp: PnP ACPI: found 1 devices Nov 1 00:22:39.733219 kernel: NET: Registered PF_INET protocol family Nov 1 00:22:39.733226 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:22:39.733240 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 1 00:22:39.733247 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:22:39.733255 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 1 00:22:39.733262 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Nov 1 00:22:39.733269 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 1 00:22:39.733275 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:39.733282 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 1 00:22:39.733288 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:22:39.733295 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:22:39.733301 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Nov 1 00:22:39.733308 kernel: kvm [1]: HYP mode not available Nov 1 00:22:39.733315 kernel: Initialise system trusted keyrings Nov 1 00:22:39.733322 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:22:39.733328 kernel: Key type asymmetric registered 
Nov 1 00:22:39.733334 kernel: Asymmetric key parser 'x509' registered Nov 1 00:22:39.733341 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 1 00:22:39.733347 kernel: io scheduler mq-deadline registered Nov 1 00:22:39.733354 kernel: io scheduler kyber registered Nov 1 00:22:39.733360 kernel: io scheduler bfq registered Nov 1 00:22:39.733367 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 1 00:22:39.733375 kernel: ACPI: button: Power Button [PWRB] Nov 1 00:22:39.733381 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 1 00:22:39.733446 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 1 00:22:39.733455 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:22:39.733462 kernel: thunder_xcv, ver 1.0 Nov 1 00:22:39.733468 kernel: thunder_bgx, ver 1.0 Nov 1 00:22:39.733474 kernel: nicpf, ver 1.0 Nov 1 00:22:39.733481 kernel: nicvf, ver 1.0 Nov 1 00:22:39.733546 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 1 00:22:39.733603 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-01T00:22:39 UTC (1761956559) Nov 1 00:22:39.733612 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:22:39.733618 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:22:39.733625 kernel: Segment Routing with IPv6 Nov 1 00:22:39.733631 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:22:39.733638 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:22:39.733644 kernel: Key type dns_resolver registered Nov 1 00:22:39.733651 kernel: registered taskstats version 1 Nov 1 00:22:39.733658 kernel: Loading compiled-in X.509 certificates Nov 1 00:22:39.733665 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.192-flatcar: 4aa5071b9a6f96878595e36d4bd5862a671c915d' Nov 1 00:22:39.733672 kernel: Key type .fscrypt registered Nov 1 00:22:39.733678 kernel: Key type fscrypt-provisioning registered Nov 1 00:22:39.733685 kernel: ima: No TPM chip found, 
activating TPM-bypass! Nov 1 00:22:39.733691 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:22:39.733698 kernel: ima: No architecture policies found Nov 1 00:22:39.733704 kernel: clk: Disabling unused clocks Nov 1 00:22:39.733711 kernel: Freeing unused kernel memory: 36416K Nov 1 00:22:39.733718 kernel: Run /init as init process Nov 1 00:22:39.733725 kernel: with arguments: Nov 1 00:22:39.733731 kernel: /init Nov 1 00:22:39.733737 kernel: with environment: Nov 1 00:22:39.733744 kernel: HOME=/ Nov 1 00:22:39.733750 kernel: TERM=linux Nov 1 00:22:39.733756 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 1 00:22:39.733764 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:22:39.733774 systemd[1]: Detected virtualization kvm. Nov 1 00:22:39.733781 systemd[1]: Detected architecture arm64. Nov 1 00:22:39.733788 systemd[1]: Running in initrd. Nov 1 00:22:39.733795 systemd[1]: No hostname configured, using default hostname. Nov 1 00:22:39.733801 systemd[1]: Hostname set to . Nov 1 00:22:39.733808 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:22:39.733815 systemd[1]: Queued start job for default target initrd.target. Nov 1 00:22:39.733822 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:22:39.733830 systemd[1]: Reached target cryptsetup.target. Nov 1 00:22:39.733837 systemd[1]: Reached target paths.target. Nov 1 00:22:39.733844 systemd[1]: Reached target slices.target. Nov 1 00:22:39.733851 systemd[1]: Reached target swap.target. Nov 1 00:22:39.733858 systemd[1]: Reached target timers.target. Nov 1 00:22:39.733865 systemd[1]: Listening on iscsid.socket. Nov 1 00:22:39.733872 systemd[1]: Listening on iscsiuio.socket. 
Nov 1 00:22:39.733879 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:22:39.733887 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:22:39.733894 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:22:39.733901 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:22:39.733908 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:22:39.733915 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:22:39.733922 systemd[1]: Reached target sockets.target. Nov 1 00:22:39.733929 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:22:39.733935 systemd[1]: Finished network-cleanup.service. Nov 1 00:22:39.733942 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:22:39.733950 systemd[1]: Starting systemd-journald.service... Nov 1 00:22:39.733957 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:22:39.733964 systemd[1]: Starting systemd-resolved.service... Nov 1 00:22:39.733971 systemd[1]: Starting systemd-vconsole-setup.service... Nov 1 00:22:39.733988 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:22:39.733998 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:22:39.734005 systemd[1]: Finished systemd-vconsole-setup.service. Nov 1 00:22:39.734012 systemd[1]: Starting dracut-cmdline-ask.service... Nov 1 00:22:39.734021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:22:39.734028 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:22:39.734035 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:22:39.734045 systemd-journald[291]: Journal started Nov 1 00:22:39.734089 systemd-journald[291]: Runtime Journal (/run/log/journal/9d8f8e361282446697a516b29b63537a) is 6.0M, max 48.7M, 42.6M free. Nov 1 00:22:39.709832 systemd-modules-load[292]: Inserted module 'overlay' Nov 1 00:22:39.736473 systemd[1]: Started systemd-journald.service. 
Nov 1 00:22:39.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.743976 kernel: audit: type=1130 audit(1761956559.737:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.744018 kernel: Bridge firewalling registered Nov 1 00:22:39.744028 kernel: audit: type=1130 audit(1761956559.741:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.740342 systemd-modules-load[292]: Inserted module 'br_netfilter' Nov 1 00:22:39.740536 systemd[1]: Finished dracut-cmdline-ask.service. Nov 1 00:22:39.742182 systemd[1]: Starting dracut-cmdline.service... Nov 1 00:22:39.742474 systemd-resolved[293]: Positive Trust Anchors: Nov 1 00:22:39.742482 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:39.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:39.742508 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:22:39.759828 kernel: audit: type=1130 audit(1761956559.748:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.759851 kernel: SCSI subsystem initialized Nov 1 00:22:39.746615 systemd-resolved[293]: Defaulting to hostname 'linux'. Nov 1 00:22:39.747426 systemd[1]: Started systemd-resolved.service. Nov 1 00:22:39.748742 systemd[1]: Reached target nss-lookup.target. Nov 1 00:22:39.764261 dracut-cmdline[308]: dracut-dracut-053 Nov 1 00:22:39.766492 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=284392058f112e827cd7c521dcce1be27e1367d0030df494642d12e41e342e29 Nov 1 00:22:39.775375 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 00:22:39.775397 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:22:39.775406 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Nov 1 00:22:39.770314 systemd-modules-load[292]: Inserted module 'dm_multipath' Nov 1 00:22:39.780107 kernel: audit: type=1130 audit(1761956559.775:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.771067 systemd[1]: Finished systemd-modules-load.service. Nov 1 00:22:39.776908 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:22:39.786693 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:22:39.791079 kernel: audit: type=1130 audit(1761956559.787:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.834013 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:22:39.846009 kernel: iscsi: registered transport (tcp) Nov 1 00:22:39.860439 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:22:39.860456 kernel: QLogic iSCSI HBA Driver Nov 1 00:22:39.894509 systemd[1]: Finished dracut-cmdline.service. Nov 1 00:22:39.896271 systemd[1]: Starting dracut-pre-udev.service... Nov 1 00:22:39.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:39.901027 kernel: audit: type=1130 audit(1761956559.894:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:39.938017 kernel: raid6: neonx8 gen() 13557 MB/s Nov 1 00:22:39.955013 kernel: raid6: neonx8 xor() 10620 MB/s Nov 1 00:22:39.972007 kernel: raid6: neonx4 gen() 13382 MB/s Nov 1 00:22:39.989007 kernel: raid6: neonx4 xor() 11024 MB/s Nov 1 00:22:40.006004 kernel: raid6: neonx2 gen() 12880 MB/s Nov 1 00:22:40.023004 kernel: raid6: neonx2 xor() 10106 MB/s Nov 1 00:22:40.040005 kernel: raid6: neonx1 gen() 10381 MB/s Nov 1 00:22:40.057019 kernel: raid6: neonx1 xor() 8685 MB/s Nov 1 00:22:40.074009 kernel: raid6: int64x8 gen() 6194 MB/s Nov 1 00:22:40.091013 kernel: raid6: int64x8 xor() 3530 MB/s Nov 1 00:22:40.108010 kernel: raid6: int64x4 gen() 7120 MB/s Nov 1 00:22:40.125009 kernel: raid6: int64x4 xor() 3821 MB/s Nov 1 00:22:40.142009 kernel: raid6: int64x2 gen() 6101 MB/s Nov 1 00:22:40.159020 kernel: raid6: int64x2 xor() 3288 MB/s Nov 1 00:22:40.176007 kernel: raid6: int64x1 gen() 5002 MB/s Nov 1 00:22:40.193183 kernel: raid6: int64x1 xor() 2621 MB/s Nov 1 00:22:40.193203 kernel: raid6: using algorithm neonx8 gen() 13557 MB/s Nov 1 00:22:40.193212 kernel: raid6: .... xor() 10620 MB/s, rmw enabled Nov 1 00:22:40.194341 kernel: raid6: using neon recovery algorithm Nov 1 00:22:40.205004 kernel: xor: measuring software checksum speed Nov 1 00:22:40.205021 kernel: 8regs : 16560 MB/sec Nov 1 00:22:40.206268 kernel: 32regs : 18324 MB/sec Nov 1 00:22:40.206280 kernel: arm64_neon : 27654 MB/sec Nov 1 00:22:40.206289 kernel: xor: using function: arm64_neon (27654 MB/sec) Nov 1 00:22:40.260006 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Nov 1 00:22:40.273121 systemd[1]: Finished dracut-pre-udev.service. 
Nov 1 00:22:40.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:40.275154 systemd[1]: Starting systemd-udevd.service... Nov 1 00:22:40.280325 kernel: audit: type=1130 audit(1761956560.272:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:40.280354 kernel: audit: type=1334 audit(1761956560.273:9): prog-id=7 op=LOAD Nov 1 00:22:40.280364 kernel: audit: type=1334 audit(1761956560.273:10): prog-id=8 op=LOAD Nov 1 00:22:40.273000 audit: BPF prog-id=7 op=LOAD Nov 1 00:22:40.273000 audit: BPF prog-id=8 op=LOAD Nov 1 00:22:40.294350 systemd-udevd[491]: Using default interface naming scheme 'v252'. Nov 1 00:22:40.297627 systemd[1]: Started systemd-udevd.service. Nov 1 00:22:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:40.299306 systemd[1]: Starting dracut-pre-trigger.service... Nov 1 00:22:40.311512 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation Nov 1 00:22:40.338198 systemd[1]: Finished dracut-pre-trigger.service. Nov 1 00:22:40.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:40.339826 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:22:40.373906 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:22:40.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:40.406285 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 1 00:22:40.412134 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:22:40.412150 kernel: GPT:9289727 != 19775487 Nov 1 00:22:40.412158 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:22:40.412167 kernel: GPT:9289727 != 19775487 Nov 1 00:22:40.412175 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 00:22:40.412182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:22:40.424608 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Nov 1 00:22:40.425890 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Nov 1 00:22:40.431009 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (551) Nov 1 00:22:40.432556 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Nov 1 00:22:40.436169 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Nov 1 00:22:40.445869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:22:40.447594 systemd[1]: Starting disk-uuid.service... Nov 1 00:22:40.453542 disk-uuid[563]: Primary Header is updated. Nov 1 00:22:40.453542 disk-uuid[563]: Secondary Entries is updated. Nov 1 00:22:40.453542 disk-uuid[563]: Secondary Header is updated. Nov 1 00:22:40.457017 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:22:40.459002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:22:40.462007 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:22:40.503713 (udev-worker)[538]: vda6: Failed to create/update device symlink '/dev/disk/by-partuuid/958114b3-d7c7-4100-b60e-684c9332694d', ignoring: No such file or directory Nov 1 00:22:41.465013 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:22:41.465065 disk-uuid[564]: The operation has completed successfully. 
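The GPT warnings above are the usual symptom of a disk image grown after it was partitioned: the backup GPT header still sits at the old end of the disk (sector 9289727) rather than the new one (19775487). A hedged sketch of the repair the kernel suggests — `/dev/vda` is the device from this log, and both commands rewrite on-disk metadata, so they are shown here rather than executed:

```shell
# Relocate the backup GPT header and partition table to the true end of
# the disk. sgdisk does it non-interactively; parted offers to "Fix" the
# mismatch when it prints the label, as the kernel message hints.
sgdisk -e /dev/vda       # -e: move backup data structures to end of disk
# or, interactively:
parted /dev/vda print    # answer "Fix" at the backup-GPT prompt
```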
Nov 1 00:22:41.484999 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:22:41.486141 systemd[1]: Finished disk-uuid.service. Nov 1 00:22:41.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.490131 systemd[1]: Starting verity-setup.service... Nov 1 00:22:41.501996 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 1 00:22:41.521199 systemd[1]: Found device dev-mapper-usr.device. Nov 1 00:22:41.523295 systemd[1]: Mounting sysusr-usr.mount... Nov 1 00:22:41.525893 systemd[1]: Finished verity-setup.service. Nov 1 00:22:41.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.568002 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Nov 1 00:22:41.568629 systemd[1]: Mounted sysusr-usr.mount. Nov 1 00:22:41.569500 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Nov 1 00:22:41.570172 systemd[1]: Starting ignition-setup.service... Nov 1 00:22:41.572487 systemd[1]: Starting parse-ip-for-networkd.service... Nov 1 00:22:41.579087 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:22:41.579119 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:22:41.579128 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:22:41.586641 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 00:22:41.592608 systemd[1]: Finished ignition-setup.service. 
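The Ignition stages that begin here run on the QEMU platform, where Ignition fetches its user config through the firmware config device — which is why `qemu_fw_cfg` gets modprobed during fetch-offline below. A sketch of how such a VM is typically launched; the image and `config.ign` paths are hypothetical, while `opt/com.coreos/config` is the fw_cfg key Ignition's qemu provider reads:

```shell
# Hand an Ignition config to a QEMU guest via fw_cfg. Inside the guest,
# Ignition reads the key "opt/com.coreos/config" through qemu_fw_cfg.
qemu-system-aarch64 \
  -machine virt -cpu host -enable-kvm -m 2048 \
  -fw_cfg name=opt/com.coreos/config,file=./config.ign \
  -drive if=virtio,file=flatcar_production_qemu_image.img
```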
Nov 1 00:22:41.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.594128 systemd[1]: Starting ignition-fetch-offline.service... Nov 1 00:22:41.640473 ignition[649]: Ignition 2.14.0 Nov 1 00:22:41.640484 ignition[649]: Stage: fetch-offline Nov 1 00:22:41.640521 ignition[649]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:41.640530 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:41.640656 ignition[649]: parsed url from cmdline: "" Nov 1 00:22:41.640659 ignition[649]: no config URL provided Nov 1 00:22:41.640663 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:22:41.640670 ignition[649]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:22:41.640687 ignition[649]: op(1): [started] loading QEMU firmware config module Nov 1 00:22:41.640692 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:22:41.644623 ignition[649]: op(1): [finished] loading QEMU firmware config module Nov 1 00:22:41.658783 systemd[1]: Finished parse-ip-for-networkd.service. Nov 1 00:22:41.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.659000 audit: BPF prog-id=9 op=LOAD Nov 1 00:22:41.661109 systemd[1]: Starting systemd-networkd.service... Nov 1 00:22:41.679630 systemd-networkd[742]: lo: Link UP Nov 1 00:22:41.679642 systemd-networkd[742]: lo: Gained carrier Nov 1 00:22:41.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:41.680022 systemd-networkd[742]: Enumeration completed Nov 1 00:22:41.680093 systemd[1]: Started systemd-networkd.service. Nov 1 00:22:41.680200 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:22:41.681205 systemd-networkd[742]: eth0: Link UP Nov 1 00:22:41.681208 systemd-networkd[742]: eth0: Gained carrier Nov 1 00:22:41.681633 systemd[1]: Reached target network.target. Nov 1 00:22:41.683743 systemd[1]: Starting iscsiuio.service... Nov 1 00:22:41.690867 systemd[1]: Started iscsiuio.service. Nov 1 00:22:41.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.692479 systemd[1]: Starting iscsid.service... Nov 1 00:22:41.695502 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:22:41.695502 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Nov 1 00:22:41.695502 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Nov 1 00:22:41.695502 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
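The iscsid warning above is harmless on this boot (no software-iSCSI targets are used), but the file it asks for is trivial to create. A minimal sketch, written to a temp directory instead of `/etc/iscsi` so it is side-effect free; the IQN below is only an illustration of the `iqn.yyyy-mm.<reversed domain name>[:identifier]` naming format (open-iscsi ships `iscsi-iname` to generate a unique one):

```shell
# Create an initiatorname.iscsi in the format iscsid expects. On a real
# host the file lives at /etc/iscsi/initiatorname.iscsi; the example IQN
# and the temp-dir location are assumptions for this sketch.
dir=$(mktemp -d)
printf 'InitiatorName=iqn.2025-11.org.example:node1\n' > "$dir/initiatorname.iscsi"
grep -E '^InitiatorName=iqn\.[0-9]{4}-[0-9]{2}\.' "$dir/initiatorname.iscsi"
rm -rf "$dir"
```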
Nov 1 00:22:41.695502 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Nov 1 00:22:41.695502 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Nov 1 00:22:41.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.698053 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:22:41.698602 systemd[1]: Started iscsid.service. Nov 1 00:22:41.704294 systemd[1]: Starting dracut-initqueue.service... Nov 1 00:22:41.713847 systemd[1]: Finished dracut-initqueue.service. Nov 1 00:22:41.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.714927 systemd[1]: Reached target remote-fs-pre.target. Nov 1 00:22:41.716537 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:22:41.718213 systemd[1]: Reached target remote-fs.target. Nov 1 00:22:41.720540 systemd[1]: Starting dracut-pre-mount.service... Nov 1 00:22:41.722908 ignition[649]: parsing config with SHA512: e65c9988ba6e5da3ce4d119814ffca37edb7bce0932d5f0e67c54dc8dd08118dcec4fe8b7eacfce493a2e5d2b93724f003b3ebc6d10d09908af2fab347f32ca0 Nov 1 00:22:41.732095 systemd[1]: Finished dracut-pre-mount.service. Nov 1 00:22:41.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:41.732535 unknown[649]: fetched base config from "system" Nov 1 00:22:41.733203 ignition[649]: fetch-offline: fetch-offline passed Nov 1 00:22:41.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.732542 unknown[649]: fetched user config from "qemu" Nov 1 00:22:41.733289 ignition[649]: Ignition finished successfully Nov 1 00:22:41.734557 systemd[1]: Finished ignition-fetch-offline.service. Nov 1 00:22:41.736195 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:22:41.736857 systemd[1]: Starting ignition-kargs.service... Nov 1 00:22:41.745699 ignition[761]: Ignition 2.14.0 Nov 1 00:22:41.745709 ignition[761]: Stage: kargs Nov 1 00:22:41.745797 ignition[761]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:41.745807 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:41.748080 systemd[1]: Finished ignition-kargs.service. Nov 1 00:22:41.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.746688 ignition[761]: kargs: kargs passed Nov 1 00:22:41.746735 ignition[761]: Ignition finished successfully Nov 1 00:22:41.750558 systemd[1]: Starting ignition-disks.service... Nov 1 00:22:41.756819 ignition[767]: Ignition 2.14.0 Nov 1 00:22:41.756828 ignition[767]: Stage: disks Nov 1 00:22:41.756913 ignition[767]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:41.758756 systemd[1]: Finished ignition-disks.service. Nov 1 00:22:41.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Nov 1 00:22:41.756929 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:41.760470 systemd[1]: Reached target initrd-root-device.target. Nov 1 00:22:41.757865 ignition[767]: disks: disks passed Nov 1 00:22:41.761844 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:22:41.757904 ignition[767]: Ignition finished successfully Nov 1 00:22:41.763671 systemd[1]: Reached target local-fs.target. Nov 1 00:22:41.765098 systemd[1]: Reached target sysinit.target. Nov 1 00:22:41.766300 systemd[1]: Reached target basic.target. Nov 1 00:22:41.768402 systemd[1]: Starting systemd-fsck-root.service... Nov 1 00:22:41.778867 systemd-fsck[775]: ROOT: clean, 637/553520 files, 56031/553472 blocks Nov 1 00:22:41.782201 systemd[1]: Finished systemd-fsck-root.service. Nov 1 00:22:41.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.784218 systemd[1]: Mounting sysroot.mount... Nov 1 00:22:41.791006 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Nov 1 00:22:41.791319 systemd[1]: Mounted sysroot.mount. Nov 1 00:22:41.792072 systemd[1]: Reached target initrd-root-fs.target. Nov 1 00:22:41.794385 systemd[1]: Mounting sysroot-usr.mount... Nov 1 00:22:41.795296 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Nov 1 00:22:41.795332 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:22:41.795353 systemd[1]: Reached target ignition-diskful.target. Nov 1 00:22:41.797135 systemd[1]: Mounted sysroot-usr.mount. Nov 1 00:22:41.799068 systemd[1]: Starting initrd-setup-root.service... 
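The `systemd-fsck` line above is the standard e2fsck summary for a clean filesystem: 637 of 553520 inodes and 56031 of 553472 blocks in use. The same kind of check can be reproduced against a throwaway image file (paths are hypothetical; needs e2fsprogs):

```shell
# Build a tiny ext4 filesystem in a regular file and fsck it; the output
# mirrors the "clean, N/M files, N/M blocks" summary in the log above.
img=$(mktemp)
truncate -s 8M "$img"       # sparse 8 MiB backing file
mkfs.ext4 -q -F "$img"      # -F: allow a regular file as the target
e2fsck "$img"               # prints the clean summary, exits 0
rm -f "$img"
```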
Nov 1 00:22:41.803110 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:22:41.806790 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:22:41.810773 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:22:41.814687 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:22:41.840609 systemd[1]: Finished initrd-setup-root.service. Nov 1 00:22:41.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.842125 systemd[1]: Starting ignition-mount.service... Nov 1 00:22:41.843409 systemd[1]: Starting sysroot-boot.service... Nov 1 00:22:41.847308 bash[826]: umount: /sysroot/usr/share/oem: not mounted. Nov 1 00:22:41.855356 ignition[828]: INFO : Ignition 2.14.0 Nov 1 00:22:41.855356 ignition[828]: INFO : Stage: mount Nov 1 00:22:41.857622 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:41.857622 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:41.857622 ignition[828]: INFO : mount: mount passed Nov 1 00:22:41.857622 ignition[828]: INFO : Ignition finished successfully Nov 1 00:22:41.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.857651 systemd[1]: Finished ignition-mount.service. Nov 1 00:22:41.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:41.861485 systemd[1]: Finished sysroot-boot.service. Nov 1 00:22:42.532326 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Nov 1 00:22:42.539541 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (837) Nov 1 00:22:42.539568 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 1 00:22:42.539583 kernel: BTRFS info (device vda6): using free space tree Nov 1 00:22:42.540998 kernel: BTRFS info (device vda6): has skinny extents Nov 1 00:22:42.543796 systemd[1]: Mounted sysroot-usr-share-oem.mount. Nov 1 00:22:42.545408 systemd[1]: Starting ignition-files.service... Nov 1 00:22:42.558208 ignition[857]: INFO : Ignition 2.14.0 Nov 1 00:22:42.558208 ignition[857]: INFO : Stage: files Nov 1 00:22:42.559821 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:42.559821 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:42.559821 ignition[857]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:22:42.563459 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:22:42.563459 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:22:42.563459 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:22:42.563459 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:22:42.563459 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:22:42.563459 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:22:42.563459 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Nov 1 00:22:42.563459 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 1 00:22:42.563459 ignition[857]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 1 00:22:42.562411 unknown[857]: wrote ssh authorized keys file for user: core Nov 1 00:22:42.633561 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 1 00:22:42.781307 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 1 00:22:42.783511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:22:42.783511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 1 00:22:43.002143 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Nov 1 00:22:43.080623 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 1 00:22:43.080623 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 
00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 1 00:22:43.084327 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 1 00:22:43.342838 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK Nov 1 00:22:43.374080 systemd-networkd[742]: eth0: Gained IPv6LL Nov 1 00:22:43.729453 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 1 00:22:43.729453 ignition[857]: INFO : files: op(d): [started] processing unit "containerd.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" 
at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(d): [finished] processing unit "containerd.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(13): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(13): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(14): [started] setting preset to disabled for "coreos-metadata.service" Nov 1 00:22:43.733751 ignition[857]: INFO : files: op(14): op(15): [started] removing enablement symlink(s) for 
"coreos-metadata.service" Nov 1 00:22:43.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.755891 systemd[1]: Finished ignition-files.service. Nov 1 00:22:43.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.765842 ignition[857]: INFO : files: op(14): op(15): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 1 00:22:43.765842 ignition[857]: INFO : files: op(14): [finished] setting preset to disabled for "coreos-metadata.service" Nov 1 00:22:43.765842 ignition[857]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:43.765842 ignition[857]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:22:43.765842 ignition[857]: INFO : files: files passed Nov 1 00:22:43.765842 ignition[857]: INFO : Ignition finished successfully Nov 1 00:22:43.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.758753 systemd[1]: Starting initrd-setup-root-after-ignition.service... Nov 1 00:22:43.760147 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Nov 1 00:22:43.778830 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Nov 1 00:22:43.760803 systemd[1]: Starting ignition-quench.service... Nov 1 00:22:43.781147 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:22:43.763771 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:22:43.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.763849 systemd[1]: Finished ignition-quench.service. Nov 1 00:22:43.766859 systemd[1]: Finished initrd-setup-root-after-ignition.service. Nov 1 00:22:43.768462 systemd[1]: Reached target ignition-complete.target. Nov 1 00:22:43.771197 systemd[1]: Starting initrd-parse-etc.service... Nov 1 00:22:43.782553 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:22:43.782633 systemd[1]: Finished initrd-parse-etc.service. Nov 1 00:22:43.783617 systemd[1]: Reached target initrd-fs.target. Nov 1 00:22:43.785100 systemd[1]: Reached target initrd.target. Nov 1 00:22:43.786502 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Nov 1 00:22:43.787166 systemd[1]: Starting dracut-pre-pivot.service... Nov 1 00:22:43.797203 systemd[1]: Finished dracut-pre-pivot.service. Nov 1 00:22:43.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.798848 systemd[1]: Starting initrd-cleanup.service... 
Nov 1 00:22:43.806686 systemd[1]: Stopped target nss-lookup.target. Nov 1 00:22:43.807662 systemd[1]: Stopped target remote-cryptsetup.target. Nov 1 00:22:43.809145 systemd[1]: Stopped target timers.target. Nov 1 00:22:43.810560 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:22:43.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.810680 systemd[1]: Stopped dracut-pre-pivot.service. Nov 1 00:22:43.812017 systemd[1]: Stopped target initrd.target. Nov 1 00:22:43.813540 systemd[1]: Stopped target basic.target. Nov 1 00:22:43.814834 systemd[1]: Stopped target ignition-complete.target. Nov 1 00:22:43.816261 systemd[1]: Stopped target ignition-diskful.target. Nov 1 00:22:43.817649 systemd[1]: Stopped target initrd-root-device.target. Nov 1 00:22:43.819171 systemd[1]: Stopped target remote-fs.target. Nov 1 00:22:43.820628 systemd[1]: Stopped target remote-fs-pre.target. Nov 1 00:22:43.822118 systemd[1]: Stopped target sysinit.target. Nov 1 00:22:43.823517 systemd[1]: Stopped target local-fs.target. Nov 1 00:22:43.824863 systemd[1]: Stopped target local-fs-pre.target. Nov 1 00:22:43.826247 systemd[1]: Stopped target swap.target. Nov 1 00:22:43.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.827480 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:22:43.827592 systemd[1]: Stopped dracut-pre-mount.service. Nov 1 00:22:43.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.828947 systemd[1]: Stopped target cryptsetup.target. 
Nov 1 00:22:43.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.830250 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:22:43.830353 systemd[1]: Stopped dracut-initqueue.service. Nov 1 00:22:43.831897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:22:43.832009 systemd[1]: Stopped ignition-fetch-offline.service. Nov 1 00:22:43.833405 systemd[1]: Stopped target paths.target. Nov 1 00:22:43.834706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:22:43.836015 systemd[1]: Stopped systemd-ask-password-console.path. Nov 1 00:22:43.837174 systemd[1]: Stopped target slices.target. Nov 1 00:22:43.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.838741 systemd[1]: Stopped target sockets.target. Nov 1 00:22:43.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.840150 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:22:43.840237 systemd[1]: Closed iscsid.socket. Nov 1 00:22:43.841356 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:22:43.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.841420 systemd[1]: Closed iscsiuio.socket. Nov 1 00:22:43.842862 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Nov 1 00:22:43.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.842964 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Nov 1 00:22:43.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.844535 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:22:43.856062 ignition[897]: INFO : Ignition 2.14.0 Nov 1 00:22:43.856062 ignition[897]: INFO : Stage: umount Nov 1 00:22:43.856062 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:22:43.856062 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:22:43.856062 ignition[897]: INFO : umount: umount passed Nov 1 00:22:43.856062 ignition[897]: INFO : Ignition finished successfully Nov 1 00:22:43.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.844629 systemd[1]: Stopped ignition-files.service. 
Nov 1 00:22:43.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.846702 systemd[1]: Stopping ignition-mount.service... Nov 1 00:22:43.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.847963 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:22:43.848106 systemd[1]: Stopped kmod-static-nodes.service. Nov 1 00:22:43.850113 systemd[1]: Stopping sysroot-boot.service... Nov 1 00:22:43.850784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:22:43.850918 systemd[1]: Stopped systemd-udev-trigger.service. Nov 1 00:22:43.852291 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:22:43.852375 systemd[1]: Stopped dracut-pre-trigger.service. Nov 1 00:22:43.856770 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:22:43.856853 systemd[1]: Finished initrd-cleanup.service. Nov 1 00:22:43.858594 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:22:43.858931 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:22:43.859022 systemd[1]: Stopped ignition-mount.service. Nov 1 00:22:43.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.860408 systemd[1]: Stopped target network.target. Nov 1 00:22:43.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.862187 systemd[1]: ignition-disks.service: Deactivated successfully. 
Nov 1 00:22:43.862254 systemd[1]: Stopped ignition-disks.service. Nov 1 00:22:43.863672 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:22:43.885000 audit: BPF prog-id=6 op=UNLOAD Nov 1 00:22:43.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.863711 systemd[1]: Stopped ignition-kargs.service. Nov 1 00:22:43.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.865179 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:22:43.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.865231 systemd[1]: Stopped ignition-setup.service. Nov 1 00:22:43.866957 systemd[1]: Stopping systemd-networkd.service... Nov 1 00:22:43.868115 systemd[1]: Stopping systemd-resolved.service... Nov 1 00:22:43.876038 systemd-networkd[742]: eth0: DHCPv6 lease lost Nov 1 00:22:43.895000 audit: BPF prog-id=9 op=UNLOAD Nov 1 00:22:43.877963 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:22:43.878076 systemd[1]: Stopped systemd-networkd.service. Nov 1 00:22:43.879918 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:22:43.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.880016 systemd[1]: Stopped systemd-resolved.service. 
Nov 1 00:22:43.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.881461 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:22:43.881492 systemd[1]: Closed systemd-networkd.socket. Nov 1 00:22:43.883378 systemd[1]: Stopping network-cleanup.service... Nov 1 00:22:43.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.884232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:22:43.907000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.884286 systemd[1]: Stopped parse-ip-for-networkd.service. Nov 1 00:22:43.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.886027 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:22:43.886072 systemd[1]: Stopped systemd-sysctl.service. Nov 1 00:22:43.888506 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:22:43.912000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.888549 systemd[1]: Stopped systemd-modules-load.service. Nov 1 00:22:43.889572 systemd[1]: Stopping systemd-udevd.service... 
Nov 1 00:22:43.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.893962 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 1 00:22:43.897527 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 1 00:22:43.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.897645 systemd[1]: Stopped network-cleanup.service. Nov 1 00:22:43.899444 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:22:43.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:43.899559 systemd[1]: Stopped systemd-udevd.service. Nov 1 00:22:43.900989 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:22:43.901025 systemd[1]: Closed systemd-udevd-control.socket. Nov 1 00:22:43.902310 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:22:43.902342 systemd[1]: Closed systemd-udevd-kernel.socket. Nov 1 00:22:43.903792 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:22:43.903837 systemd[1]: Stopped dracut-pre-udev.service. Nov 1 00:22:43.905313 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:22:43.905353 systemd[1]: Stopped dracut-cmdline.service. 
Nov 1 00:22:43.930000 audit: BPF prog-id=5 op=UNLOAD Nov 1 00:22:43.930000 audit: BPF prog-id=4 op=UNLOAD Nov 1 00:22:43.930000 audit: BPF prog-id=3 op=UNLOAD Nov 1 00:22:43.907365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:22:43.907405 systemd[1]: Stopped dracut-cmdline-ask.service. Nov 1 00:22:43.932000 audit: BPF prog-id=8 op=UNLOAD Nov 1 00:22:43.932000 audit: BPF prog-id=7 op=UNLOAD Nov 1 00:22:43.909753 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Nov 1 00:22:43.910656 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:22:43.911671 systemd[1]: Stopped systemd-vconsole-setup.service. Nov 1 00:22:43.915114 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:22:43.915203 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Nov 1 00:22:43.917574 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:22:43.917661 systemd[1]: Stopped sysroot-boot.service. Nov 1 00:22:43.919458 systemd[1]: Reached target initrd-switch-root.target. Nov 1 00:22:43.920973 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:22:43.921041 systemd[1]: Stopped initrd-setup-root.service. Nov 1 00:22:43.923124 systemd[1]: Starting initrd-switch-root.service... Nov 1 00:22:43.928551 systemd[1]: Switching root. Nov 1 00:22:43.948560 iscsid[747]: iscsid shutting down. Nov 1 00:22:43.949273 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Nov 1 00:22:43.949323 systemd-journald[291]: Journal stopped Nov 1 00:22:45.944409 kernel: SELinux: Class mctp_socket not defined in policy. Nov 1 00:22:45.944459 kernel: SELinux: Class anon_inode not defined in policy. 
Nov 1 00:22:45.944476 kernel: SELinux: the above unknown classes and permissions will be allowed Nov 1 00:22:45.944490 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:22:45.944499 kernel: SELinux: policy capability open_perms=1 Nov 1 00:22:45.944509 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:22:45.944521 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:22:45.944531 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:22:45.944541 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:22:45.944551 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:22:45.944562 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:22:45.944572 kernel: kauditd_printk_skb: 66 callbacks suppressed Nov 1 00:22:45.944581 kernel: audit: type=1403 audit(1761956564.037:77): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:22:45.944601 systemd[1]: Successfully loaded SELinux policy in 42.571ms. Nov 1 00:22:45.944618 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.938ms. Nov 1 00:22:45.944632 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Nov 1 00:22:45.944643 systemd[1]: Detected virtualization kvm. Nov 1 00:22:45.944654 systemd[1]: Detected architecture arm64. Nov 1 00:22:45.944665 systemd[1]: Detected first boot. Nov 1 00:22:45.944675 systemd[1]: Initializing machine ID from VM UUID. Nov 1 00:22:45.944687 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Nov 1 00:22:45.944698 kernel: audit: type=1400 audit(1761956564.191:78): avc: denied { associate } for pid=949 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Nov 1 00:22:45.944709 kernel: audit: type=1300 audit(1761956564.191:78): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c766c a1=40000caae0 a2=40000d0a00 a3=32 items=0 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:45.944720 kernel: audit: type=1327 audit(1761956564.191:78): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:22:45.944731 kernel: audit: type=1400 audit(1761956564.194:79): avc: denied { associate } for pid=949 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Nov 1 00:22:45.944742 kernel: audit: type=1300 audit(1761956564.194:79): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c7749 a2=1ed a3=0 items=2 ppid=932 pid=949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:45.944752 kernel: audit: type=1307 audit(1761956564.194:79): cwd="/" Nov 1 00:22:45.944762 kernel: audit: type=1302 audit(1761956564.194:79): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:22:45.944773 kernel: audit: type=1302 audit(1761956564.194:79): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Nov 1 00:22:45.944786 kernel: audit: type=1327 audit(1761956564.194:79): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Nov 1 00:22:45.944799 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:22:45.944810 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:22:45.944821 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:22:45.944833 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:45.944845 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:22:45.944855 systemd[1]: Unnecessary job was removed for dev-vda6.device. Nov 1 00:22:45.944866 systemd[1]: Created slice system-addon\x2dconfig.slice. Nov 1 00:22:45.944876 systemd[1]: Created slice system-addon\x2drun.slice. Nov 1 00:22:45.944887 systemd[1]: Created slice system-getty.slice. Nov 1 00:22:45.944897 systemd[1]: Created slice system-modprobe.slice. Nov 1 00:22:45.944907 systemd[1]: Created slice system-serial\x2dgetty.slice. Nov 1 00:22:45.944919 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
Nov 1 00:22:45.944929 systemd[1]: Created slice system-systemd\x2dfsck.slice. Nov 1 00:22:45.944939 systemd[1]: Created slice user.slice. Nov 1 00:22:45.944949 systemd[1]: Started systemd-ask-password-console.path. Nov 1 00:22:45.944959 systemd[1]: Started systemd-ask-password-wall.path. Nov 1 00:22:45.944970 systemd[1]: Set up automount boot.automount. Nov 1 00:22:45.944991 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Nov 1 00:22:45.945002 systemd[1]: Reached target integritysetup.target. Nov 1 00:22:45.945014 systemd[1]: Reached target remote-cryptsetup.target. Nov 1 00:22:45.945025 systemd[1]: Reached target remote-fs.target. Nov 1 00:22:45.945036 systemd[1]: Reached target slices.target. Nov 1 00:22:45.945050 systemd[1]: Reached target swap.target. Nov 1 00:22:45.945061 systemd[1]: Reached target torcx.target. Nov 1 00:22:45.945071 systemd[1]: Reached target veritysetup.target. Nov 1 00:22:45.945081 systemd[1]: Listening on systemd-coredump.socket. Nov 1 00:22:45.945091 systemd[1]: Listening on systemd-initctl.socket. Nov 1 00:22:45.945102 systemd[1]: Listening on systemd-journald-audit.socket. Nov 1 00:22:45.945113 systemd[1]: Listening on systemd-journald-dev-log.socket. Nov 1 00:22:45.945124 systemd[1]: Listening on systemd-journald.socket. Nov 1 00:22:45.945135 systemd[1]: Listening on systemd-networkd.socket. Nov 1 00:22:45.945145 systemd[1]: Listening on systemd-udevd-control.socket. Nov 1 00:22:45.945155 systemd[1]: Listening on systemd-udevd-kernel.socket. Nov 1 00:22:45.945165 systemd[1]: Listening on systemd-userdbd.socket. Nov 1 00:22:45.945175 systemd[1]: Mounting dev-hugepages.mount... Nov 1 00:22:45.945186 systemd[1]: Mounting dev-mqueue.mount... Nov 1 00:22:45.945201 systemd[1]: Mounting media.mount... Nov 1 00:22:45.945212 systemd[1]: Mounting sys-kernel-debug.mount... Nov 1 00:22:45.945224 systemd[1]: Mounting sys-kernel-tracing.mount... Nov 1 00:22:45.945235 systemd[1]: Mounting tmp.mount... 
Nov 1 00:22:45.945246 systemd[1]: Starting flatcar-tmpfiles.service... Nov 1 00:22:45.945256 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:22:45.945267 systemd[1]: Starting kmod-static-nodes.service... Nov 1 00:22:45.945277 systemd[1]: Starting modprobe@configfs.service... Nov 1 00:22:45.945288 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:22:45.945298 systemd[1]: Starting modprobe@drm.service... Nov 1 00:22:45.945309 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:22:45.945320 systemd[1]: Starting modprobe@fuse.service... Nov 1 00:22:45.945331 systemd[1]: Starting modprobe@loop.service... Nov 1 00:22:45.945342 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:22:45.945352 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Nov 1 00:22:45.945362 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Nov 1 00:22:45.945372 kernel: fuse: init (API version 7.34) Nov 1 00:22:45.945382 kernel: loop: module loaded Nov 1 00:22:45.945392 systemd[1]: Starting systemd-journald.service... Nov 1 00:22:45.945402 systemd[1]: Starting systemd-modules-load.service... Nov 1 00:22:45.945414 systemd[1]: Starting systemd-network-generator.service... Nov 1 00:22:45.945424 systemd[1]: Starting systemd-remount-fs.service... Nov 1 00:22:45.945435 systemd[1]: Starting systemd-udev-trigger.service... Nov 1 00:22:45.945445 systemd[1]: Mounted dev-hugepages.mount. Nov 1 00:22:45.945455 systemd[1]: Mounted dev-mqueue.mount. Nov 1 00:22:45.945466 systemd[1]: Mounted media.mount. Nov 1 00:22:45.945476 systemd[1]: Mounted sys-kernel-debug.mount. Nov 1 00:22:45.945487 systemd[1]: Mounted sys-kernel-tracing.mount. Nov 1 00:22:45.945497 systemd[1]: Mounted tmp.mount. 
Nov 1 00:22:45.945509 systemd[1]: Finished kmod-static-nodes.service. Nov 1 00:22:45.945522 systemd-journald[1033]: Journal started Nov 1 00:22:45.945568 systemd-journald[1033]: Runtime Journal (/run/log/journal/9d8f8e361282446697a516b29b63537a) is 6.0M, max 48.7M, 42.6M free. Nov 1 00:22:45.943000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Nov 1 00:22:45.943000 audit[1033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff2e132f0 a2=4000 a3=1 items=0 ppid=1 pid=1033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:45.943000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Nov 1 00:22:45.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.946997 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:22:45.947039 systemd[1]: Finished modprobe@configfs.service. Nov 1 00:22:45.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.950155 systemd[1]: Started systemd-journald.service. 
Nov 1 00:22:45.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.951277 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:45.951506 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:22:45.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.952628 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:45.952847 systemd[1]: Finished modprobe@drm.service. Nov 1 00:22:45.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.953955 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:45.954174 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:22:45.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:45.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.955490 systemd[1]: Finished flatcar-tmpfiles.service. Nov 1 00:22:45.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.956669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:22:45.956873 systemd[1]: Finished modprobe@fuse.service. Nov 1 00:22:45.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.958437 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:45.958626 systemd[1]: Finished modprobe@loop.service. Nov 1 00:22:45.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.960222 systemd[1]: Finished systemd-modules-load.service. 
Nov 1 00:22:45.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.961641 systemd[1]: Finished systemd-network-generator.service. Nov 1 00:22:45.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.963095 systemd[1]: Finished systemd-remount-fs.service. Nov 1 00:22:45.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.964455 systemd[1]: Reached target network-pre.target. Nov 1 00:22:45.966575 systemd[1]: Mounting sys-fs-fuse-connections.mount... Nov 1 00:22:45.968563 systemd[1]: Mounting sys-kernel-config.mount... Nov 1 00:22:45.969368 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:22:45.970889 systemd[1]: Starting systemd-hwdb-update.service... Nov 1 00:22:45.972884 systemd[1]: Starting systemd-journal-flush.service... Nov 1 00:22:45.973866 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:45.974877 systemd[1]: Starting systemd-random-seed.service... Nov 1 00:22:45.975930 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:22:45.976929 systemd[1]: Starting systemd-sysctl.service... Nov 1 00:22:45.977861 systemd-journald[1033]: Time spent on flushing to /var/log/journal/9d8f8e361282446697a516b29b63537a is 15.931ms for 926 entries. 
Nov 1 00:22:45.977861 systemd-journald[1033]: System Journal (/var/log/journal/9d8f8e361282446697a516b29b63537a) is 8.0M, max 195.6M, 187.6M free. Nov 1 00:22:46.007716 systemd-journald[1033]: Received client request to flush runtime journal. Nov 1 00:22:45.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.978914 systemd[1]: Starting systemd-sysusers.service... Nov 1 00:22:45.983448 systemd[1]: Finished systemd-udev-trigger.service. Nov 1 00:22:45.984687 systemd[1]: Mounted sys-fs-fuse-connections.mount. Nov 1 00:22:46.009191 udevadm[1079]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 1 00:22:45.985802 systemd[1]: Mounted sys-kernel-config.mount. Nov 1 00:22:45.988495 systemd[1]: Starting systemd-udev-settle.service... Nov 1 00:22:45.989943 systemd[1]: Finished systemd-random-seed.service. 
Nov 1 00:22:46.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:45.991303 systemd[1]: Reached target first-boot-complete.target. Nov 1 00:22:45.998051 systemd[1]: Finished systemd-sysctl.service. Nov 1 00:22:46.004117 systemd[1]: Finished systemd-sysusers.service. Nov 1 00:22:46.006145 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Nov 1 00:22:46.008762 systemd[1]: Finished systemd-journal-flush.service. Nov 1 00:22:46.021623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Nov 1 00:22:46.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.329698 systemd[1]: Finished systemd-hwdb-update.service. Nov 1 00:22:46.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.331886 systemd[1]: Starting systemd-udevd.service... Nov 1 00:22:46.347859 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Nov 1 00:22:46.359832 systemd[1]: Started systemd-udevd.service. Nov 1 00:22:46.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.362375 systemd[1]: Starting systemd-networkd.service... Nov 1 00:22:46.368395 systemd[1]: Starting systemd-userdbd.service... Nov 1 00:22:46.382963 systemd[1]: Found device dev-ttyAMA0.device. Nov 1 00:22:46.398085 systemd[1]: Started systemd-userdbd.service. 
Nov 1 00:22:46.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.441379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Nov 1 00:22:46.452762 systemd-networkd[1099]: lo: Link UP Nov 1 00:22:46.452770 systemd-networkd[1099]: lo: Gained carrier Nov 1 00:22:46.453163 systemd-networkd[1099]: Enumeration completed Nov 1 00:22:46.453275 systemd-networkd[1099]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:22:46.453276 systemd[1]: Started systemd-networkd.service. Nov 1 00:22:46.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.454548 systemd-networkd[1099]: eth0: Link UP Nov 1 00:22:46.454557 systemd-networkd[1099]: eth0: Gained carrier Nov 1 00:22:46.458500 systemd[1]: Finished systemd-udev-settle.service. Nov 1 00:22:46.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.460559 systemd[1]: Starting lvm2-activation-early.service... Nov 1 00:22:46.468599 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:46.476105 systemd-networkd[1099]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:22:46.498852 systemd[1]: Finished lvm2-activation-early.service. Nov 1 00:22:46.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:46.500325 systemd[1]: Reached target cryptsetup.target. Nov 1 00:22:46.502358 systemd[1]: Starting lvm2-activation.service... Nov 1 00:22:46.505896 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 1 00:22:46.543876 systemd[1]: Finished lvm2-activation.service. Nov 1 00:22:46.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.544957 systemd[1]: Reached target local-fs-pre.target. Nov 1 00:22:46.545861 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:22:46.545893 systemd[1]: Reached target local-fs.target. Nov 1 00:22:46.546749 systemd[1]: Reached target machines.target. Nov 1 00:22:46.548718 systemd[1]: Starting ldconfig.service... Nov 1 00:22:46.549786 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:22:46.549840 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:22:46.550845 systemd[1]: Starting systemd-boot-update.service... Nov 1 00:22:46.552754 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Nov 1 00:22:46.555033 systemd[1]: Starting systemd-machine-id-commit.service... Nov 1 00:22:46.556967 systemd[1]: Starting systemd-sysext.service... Nov 1 00:22:46.558186 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) Nov 1 00:22:46.559222 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... 
Nov 1 00:22:46.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.565360 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Nov 1 00:22:46.572772 systemd[1]: Unmounting usr-share-oem.mount... Nov 1 00:22:46.577626 systemd[1]: usr-share-oem.mount: Deactivated successfully. Nov 1 00:22:46.577906 systemd[1]: Unmounted usr-share-oem.mount. Nov 1 00:22:46.623363 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:22:46.624017 systemd[1]: Finished systemd-machine-id-commit.service. Nov 1 00:22:46.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.628006 kernel: loop0: detected capacity change from 0 to 207008 Nov 1 00:22:46.628465 systemd-fsck[1137]: fsck.fat 4.2 (2021-01-31) Nov 1 00:22:46.628465 systemd-fsck[1137]: /dev/vda1: 236 files, 117310/258078 clusters Nov 1 00:22:46.630381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Nov 1 00:22:46.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.633379 systemd[1]: Mounting boot.mount... Nov 1 00:22:46.641121 systemd[1]: Mounted boot.mount. Nov 1 00:22:46.643345 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:22:46.649493 systemd[1]: Finished systemd-boot-update.service. 
Nov 1 00:22:46.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.675009 kernel: loop1: detected capacity change from 0 to 207008 Nov 1 00:22:46.678935 (sd-sysext)[1150]: Using extensions 'kubernetes'. Nov 1 00:22:46.679301 (sd-sysext)[1150]: Merged extensions into '/usr'. Nov 1 00:22:46.693711 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:22:46.694924 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:22:46.696846 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:22:46.698867 systemd[1]: Starting modprobe@loop.service... Nov 1 00:22:46.699975 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:22:46.700137 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:22:46.700921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:46.701103 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:22:46.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.702581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:46.702723 systemd[1]: Finished modprobe@efi_pstore.service. 
Nov 1 00:22:46.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.704237 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:46.704823 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:46.705009 systemd[1]: Finished modprobe@loop.service. Nov 1 00:22:46.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.706190 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:22:46.737593 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:22:46.741048 systemd[1]: Finished ldconfig.service. Nov 1 00:22:46.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.937069 systemd[1]: Mounting usr-share-oem.mount... Nov 1 00:22:46.942573 systemd[1]: Mounted usr-share-oem.mount. Nov 1 00:22:46.944519 systemd[1]: Finished systemd-sysext.service. 
Nov 1 00:22:46.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:46.946563 systemd[1]: Starting ensure-sysext.service... Nov 1 00:22:46.948306 systemd[1]: Starting systemd-tmpfiles-setup.service... Nov 1 00:22:46.952466 systemd[1]: Reloading. Nov 1 00:22:46.956768 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Nov 1 00:22:46.957810 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:22:46.959107 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:22:46.986427 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-11-01T00:22:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:22:46.986456 /usr/lib/systemd/system-generators/torcx-generator[1185]: time="2025-11-01T00:22:46Z" level=info msg="torcx already run" Nov 1 00:22:47.054702 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:22:47.054725 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:22:47.070129 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:47.115967 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Nov 1 00:22:47.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.119951 systemd[1]: Starting audit-rules.service... Nov 1 00:22:47.121829 systemd[1]: Starting clean-ca-certificates.service... Nov 1 00:22:47.123884 systemd[1]: Starting systemd-journal-catalog-update.service... Nov 1 00:22:47.126477 systemd[1]: Starting systemd-resolved.service... Nov 1 00:22:47.128651 systemd[1]: Starting systemd-timesyncd.service... Nov 1 00:22:47.130762 systemd[1]: Starting systemd-update-utmp.service... Nov 1 00:22:47.132282 systemd[1]: Finished clean-ca-certificates.service. Nov 1 00:22:47.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.136703 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.137934 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:22:47.136000 audit[1243]: SYSTEM_BOOT pid=1243 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.139918 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:22:47.141999 systemd[1]: Starting modprobe@loop.service... Nov 1 00:22:47.142895 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.143026 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:22:47.143139 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:22:47.143877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:47.144137 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:22:47.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.145369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:47.145496 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:22:47.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.146877 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:47.147037 systemd[1]: Finished modprobe@loop.service. Nov 1 00:22:47.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:47.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.149764 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:47.149899 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.151894 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.153241 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:22:47.155358 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:22:47.157365 systemd[1]: Starting modprobe@loop.service... Nov 1 00:22:47.158140 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.158289 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:22:47.158405 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:22:47.159386 systemd[1]: Finished systemd-update-utmp.service. Nov 1 00:22:47.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.160718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:47.160855 systemd[1]: Finished modprobe@dm_mod.service. 
Nov 1 00:22:47.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.162105 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:47.162245 systemd[1]: Finished modprobe@loop.service. Nov 1 00:22:47.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.163551 systemd[1]: Finished systemd-journal-catalog-update.service. Nov 1 00:22:47.165303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:47.165459 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:22:47.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Nov 1 00:22:47.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 1 00:22:47.166000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Nov 1 00:22:47.166000 audit[1268]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffda7598a0 a2=420 a3=0 items=0 ppid=1231 pid=1268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Nov 1 00:22:47.166000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Nov 1 00:22:47.169616 systemd[1]: Finished audit-rules.service. Nov 1 00:22:47.171399 augenrules[1268]: No rules Nov 1 00:22:47.170895 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.172173 systemd[1]: Starting modprobe@dm_mod.service... Nov 1 00:22:47.173951 systemd[1]: Starting modprobe@drm.service... Nov 1 00:22:47.175908 systemd[1]: Starting modprobe@efi_pstore.service... Nov 1 00:22:47.177960 systemd[1]: Starting modprobe@loop.service... Nov 1 00:22:47.178828 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.178956 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Nov 1 00:22:47.180343 systemd[1]: Starting systemd-networkd-wait-online.service... Nov 1 00:22:47.182569 systemd[1]: Starting systemd-update-done.service... 
Nov 1 00:22:47.183512 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:22:47.184689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:22:47.184841 systemd[1]: Finished modprobe@dm_mod.service. Nov 1 00:22:47.186255 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:22:47.186394 systemd[1]: Finished modprobe@drm.service. Nov 1 00:22:47.187613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:22:47.187758 systemd[1]: Finished modprobe@efi_pstore.service. Nov 1 00:22:47.189063 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:22:47.189242 systemd[1]: Finished modprobe@loop.service. Nov 1 00:22:47.190674 systemd[1]: Finished systemd-update-done.service. Nov 1 00:22:47.191936 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:22:47.192045 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.194874 systemd[1]: Finished ensure-sysext.service. Nov 1 00:22:47.205444 systemd[1]: Started systemd-timesyncd.service. Nov 1 00:22:47.206100 systemd-timesyncd[1240]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:22:47.206157 systemd-timesyncd[1240]: Initial clock synchronization to Sat 2025-11-01 00:22:47.497345 UTC. Nov 1 00:22:47.206760 systemd[1]: Reached target time-set.target. Nov 1 00:22:47.212104 systemd-resolved[1238]: Positive Trust Anchors: Nov 1 00:22:47.212119 systemd-resolved[1238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:22:47.212144 systemd-resolved[1238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Nov 1 00:22:47.220321 systemd-resolved[1238]: Defaulting to hostname 'linux'. Nov 1 00:22:47.221721 systemd[1]: Started systemd-resolved.service. Nov 1 00:22:47.222677 systemd[1]: Reached target network.target. Nov 1 00:22:47.223504 systemd[1]: Reached target nss-lookup.target. Nov 1 00:22:47.224368 systemd[1]: Reached target sysinit.target. Nov 1 00:22:47.225245 systemd[1]: Started motdgen.path. Nov 1 00:22:47.225973 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Nov 1 00:22:47.227259 systemd[1]: Started logrotate.timer. Nov 1 00:22:47.228081 systemd[1]: Started mdadm.timer. Nov 1 00:22:47.228768 systemd[1]: Started systemd-tmpfiles-clean.timer. Nov 1 00:22:47.229667 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:22:47.229704 systemd[1]: Reached target paths.target. Nov 1 00:22:47.230486 systemd[1]: Reached target timers.target. Nov 1 00:22:47.231601 systemd[1]: Listening on dbus.socket. Nov 1 00:22:47.233539 systemd[1]: Starting docker.socket... Nov 1 00:22:47.235280 systemd[1]: Listening on sshd.socket. Nov 1 00:22:47.236195 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:22:47.236512 systemd[1]: Listening on docker.socket. Nov 1 00:22:47.237357 systemd[1]: Reached target sockets.target. Nov 1 00:22:47.238197 systemd[1]: Reached target basic.target. Nov 1 00:22:47.239135 systemd[1]: System is tainted: cgroupsv1 Nov 1 00:22:47.239197 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.239219 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Nov 1 00:22:47.240222 systemd[1]: Starting containerd.service... Nov 1 00:22:47.241996 systemd[1]: Starting dbus.service... Nov 1 00:22:47.243718 systemd[1]: Starting enable-oem-cloudinit.service... Nov 1 00:22:47.245724 systemd[1]: Starting extend-filesystems.service... Nov 1 00:22:47.246810 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Nov 1 00:22:47.247885 systemd[1]: Starting motdgen.service... Nov 1 00:22:47.248401 jq[1294]: false Nov 1 00:22:47.249962 systemd[1]: Starting prepare-helm.service... Nov 1 00:22:47.252069 systemd[1]: Starting ssh-key-proc-cmdline.service... Nov 1 00:22:47.254255 systemd[1]: Starting sshd-keygen.service... Nov 1 00:22:47.256712 systemd[1]: Starting systemd-logind.service... Nov 1 00:22:47.257930 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Nov 1 00:22:47.260912 extend-filesystems[1295]: Found loop1 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda1 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda2 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda3 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found usr Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda4 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda6 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda7 Nov 1 00:22:47.260912 extend-filesystems[1295]: Found vda9 Nov 1 00:22:47.260912 extend-filesystems[1295]: Checking size of /dev/vda9 Nov 1 00:22:47.258018 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:22:47.276488 dbus-daemon[1293]: [system] SELinux support is enabled Nov 1 00:22:47.259134 systemd[1]: Starting update-engine.service... Nov 1 00:22:47.261223 systemd[1]: Starting update-ssh-keys-after-ignition.service... Nov 1 00:22:47.280478 jq[1312]: true Nov 1 00:22:47.266395 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:22:47.266633 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Nov 1 00:22:47.280716 tar[1319]: linux-arm64/LICENSE Nov 1 00:22:47.280716 tar[1319]: linux-arm64/helm Nov 1 00:22:47.267675 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:22:47.267914 systemd[1]: Finished ssh-key-proc-cmdline.service. Nov 1 00:22:47.276952 systemd[1]: Started dbus.service. Nov 1 00:22:47.280420 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:22:47.280441 systemd[1]: Reached target system-config.target. 
Nov 1 00:22:47.281362 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:22:47.281389 systemd[1]: Reached target user-config.target. Nov 1 00:22:47.283201 jq[1324]: true Nov 1 00:22:47.292664 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:22:47.292899 systemd[1]: Finished motdgen.service. Nov 1 00:22:47.296504 extend-filesystems[1295]: Resized partition /dev/vda9 Nov 1 00:22:47.301838 extend-filesystems[1341]: resize2fs 1.46.5 (30-Dec-2021) Nov 1 00:22:47.309545 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 1 00:22:47.330416 update_engine[1311]: I1101 00:22:47.330173 1311 main.cc:92] Flatcar Update Engine starting Nov 1 00:22:47.332902 systemd[1]: Started update-engine.service. Nov 1 00:22:47.335728 update_engine[1311]: I1101 00:22:47.332942 1311 update_check_scheduler.cc:74] Next update check in 4m28s Nov 1 00:22:47.335585 systemd[1]: Started locksmithd.service. Nov 1 00:22:47.336942 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 1 00:22:47.350690 systemd-logind[1305]: Watching system buttons on /dev/input/event0 (Power Button) Nov 1 00:22:47.357947 extend-filesystems[1341]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:22:47.357947 extend-filesystems[1341]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:22:47.357947 extend-filesystems[1341]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 1 00:22:47.363047 bash[1351]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:22:47.350891 systemd-logind[1305]: New seat seat0. Nov 1 00:22:47.363179 extend-filesystems[1295]: Resized filesystem in /dev/vda9 Nov 1 00:22:47.353610 systemd[1]: Started systemd-logind.service. Nov 1 00:22:47.354866 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:22:47.355134 systemd[1]: Finished extend-filesystems.service. 
Nov 1 00:22:47.356478 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Nov 1 00:22:47.365656 env[1322]: time="2025-11-01T00:22:47.365487720Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Nov 1 00:22:47.389704 env[1322]: time="2025-11-01T00:22:47.389651800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 00:22:47.389819 env[1322]: time="2025-11-01T00:22:47.389801320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.390965 env[1322]: time="2025-11-01T00:22:47.390924840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.192-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:47.390965 env[1322]: time="2025-11-01T00:22:47.390957360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391247 env[1322]: time="2025-11-01T00:22:47.391222240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391247 env[1322]: time="2025-11-01T00:22:47.391245600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391302 env[1322]: time="2025-11-01T00:22:47.391269360Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Nov 1 00:22:47.391302 env[1322]: time="2025-11-01T00:22:47.391279520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391365 env[1322]: time="2025-11-01T00:22:47.391350360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391566 env[1322]: time="2025-11-01T00:22:47.391549120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391712 env[1322]: time="2025-11-01T00:22:47.391692680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 00:22:47.391734 env[1322]: time="2025-11-01T00:22:47.391713560Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 00:22:47.391779 env[1322]: time="2025-11-01T00:22:47.391764880Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Nov 1 00:22:47.391806 env[1322]: time="2025-11-01T00:22:47.391781160Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 00:22:47.395183 env[1322]: time="2025-11-01T00:22:47.395150760Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 00:22:47.395251 env[1322]: time="2025-11-01T00:22:47.395208280Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 00:22:47.395251 env[1322]: time="2025-11-01T00:22:47.395224440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 00:22:47.395312 env[1322]: time="2025-11-01T00:22:47.395249760Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395312 env[1322]: time="2025-11-01T00:22:47.395264760Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395312 env[1322]: time="2025-11-01T00:22:47.395277720Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395312 env[1322]: time="2025-11-01T00:22:47.395292200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395659 env[1322]: time="2025-11-01T00:22:47.395622880Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395709 env[1322]: time="2025-11-01T00:22:47.395648440Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395709 env[1322]: time="2025-11-01T00:22:47.395691480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395709 env[1322]: time="2025-11-01T00:22:47.395707400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.395778 env[1322]: time="2025-11-01T00:22:47.395719960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 00:22:47.395865 env[1322]: time="2025-11-01T00:22:47.395832280Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 00:22:47.395996 env[1322]: time="2025-11-01T00:22:47.395908160Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 00:22:47.396293 env[1322]: time="2025-11-01T00:22:47.396265480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 00:22:47.396332 env[1322]: time="2025-11-01T00:22:47.396301360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396332 env[1322]: time="2025-11-01T00:22:47.396318640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 00:22:47.396439 env[1322]: time="2025-11-01T00:22:47.396419400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396439 env[1322]: time="2025-11-01T00:22:47.396436120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396499 env[1322]: time="2025-11-01T00:22:47.396448240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396499 env[1322]: time="2025-11-01T00:22:47.396459200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396499 env[1322]: time="2025-11-01T00:22:47.396471960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396499 env[1322]: time="2025-11-01T00:22:47.396484560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396578 env[1322]: time="2025-11-01T00:22:47.396500920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396578 env[1322]: time="2025-11-01T00:22:47.396512920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396578 env[1322]: time="2025-11-01T00:22:47.396525200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 00:22:47.396663 env[1322]: time="2025-11-01T00:22:47.396642840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396692 env[1322]: time="2025-11-01T00:22:47.396668320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396692 env[1322]: time="2025-11-01T00:22:47.396681040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.396734 env[1322]: time="2025-11-01T00:22:47.396692160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 00:22:47.396734 env[1322]: time="2025-11-01T00:22:47.396706200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Nov 1 00:22:47.396734 env[1322]: time="2025-11-01T00:22:47.396716800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 00:22:47.396734 env[1322]: time="2025-11-01T00:22:47.396733040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Nov 1 00:22:47.396816 env[1322]: time="2025-11-01T00:22:47.396764760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 00:22:47.397053 env[1322]: time="2025-11-01T00:22:47.396945120Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 00:22:47.397648 env[1322]: time="2025-11-01T00:22:47.397060440Z" level=info msg="Connect containerd service"
Nov 1 00:22:47.397648 env[1322]: time="2025-11-01T00:22:47.397096760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 00:22:47.397716 env[1322]: time="2025-11-01T00:22:47.397688240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:22:47.398117 env[1322]: time="2025-11-01T00:22:47.398080120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 00:22:47.399240 env[1322]: time="2025-11-01T00:22:47.398127080Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 00:22:47.399240 env[1322]: time="2025-11-01T00:22:47.398170240Z" level=info msg="containerd successfully booted in 0.038601s"
Nov 1 00:22:47.398276 systemd[1]: Started containerd.service.
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.400918760Z" level=info msg="Start subscribing containerd event"
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.400976680Z" level=info msg="Start recovering state"
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.401052280Z" level=info msg="Start event monitor"
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.401083600Z" level=info msg="Start snapshots syncer"
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.401099080Z" level=info msg="Start cni network conf syncer for default"
Nov 1 00:22:47.401310 env[1322]: time="2025-11-01T00:22:47.401107360Z" level=info msg="Start streaming server"
Nov 1 00:22:47.410441 locksmithd[1352]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 00:22:47.685115 tar[1319]: linux-arm64/README.md
Nov 1 00:22:47.689533 systemd[1]: Finished prepare-helm.service.
Nov 1 00:22:48.366864 systemd-networkd[1099]: eth0: Gained IPv6LL
Nov 1 00:22:48.368569 systemd[1]: Finished systemd-networkd-wait-online.service.
Nov 1 00:22:48.369958 systemd[1]: Reached target network-online.target.
Nov 1 00:22:48.372572 systemd[1]: Starting kubelet.service...
Nov 1 00:22:48.395985 sshd_keygen[1321]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:22:48.413992 systemd[1]: Finished sshd-keygen.service.
Nov 1 00:22:48.416331 systemd[1]: Starting issuegen.service...
Nov 1 00:22:48.421405 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:22:48.421624 systemd[1]: Finished issuegen.service.
Nov 1 00:22:48.423919 systemd[1]: Starting systemd-user-sessions.service...
Nov 1 00:22:48.429973 systemd[1]: Finished systemd-user-sessions.service.
Nov 1 00:22:48.432400 systemd[1]: Started getty@tty1.service.
Nov 1 00:22:48.434577 systemd[1]: Started serial-getty@ttyAMA0.service.
Nov 1 00:22:48.435768 systemd[1]: Reached target getty.target.
Nov 1 00:22:48.973804 systemd[1]: Started kubelet.service.
Nov 1 00:22:48.975252 systemd[1]: Reached target multi-user.target.
Nov 1 00:22:48.977732 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Nov 1 00:22:48.983887 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 1 00:22:48.984127 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Nov 1 00:22:48.985481 systemd[1]: Startup finished in 5.002s (kernel) + 4.991s (userspace) = 9.994s.
Nov 1 00:22:49.354381 kubelet[1394]: E1101 00:22:49.354281 1394 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:49.356358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:49.356502 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:51.931528 systemd[1]: Created slice system-sshd.slice.
Nov 1 00:22:51.932725 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:53832.service.
Nov 1 00:22:51.980086 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 53832 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:22:51.982535 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:51.991331 systemd-logind[1305]: New session 1 of user core.
Nov 1 00:22:51.992078 systemd[1]: Created slice user-500.slice.
Nov 1 00:22:51.992921 systemd[1]: Starting user-runtime-dir@500.service...
Nov 1 00:22:52.000905 systemd[1]: Finished user-runtime-dir@500.service.
Nov 1 00:22:52.001987 systemd[1]: Starting user@500.service...
Nov 1 00:22:52.004914 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:52.062762 systemd[1409]: Queued start job for default target default.target.
Nov 1 00:22:52.062961 systemd[1409]: Reached target paths.target.
Nov 1 00:22:52.062976 systemd[1409]: Reached target sockets.target.
Nov 1 00:22:52.062987 systemd[1409]: Reached target timers.target.
Nov 1 00:22:52.063012 systemd[1409]: Reached target basic.target.
Nov 1 00:22:52.063053 systemd[1409]: Reached target default.target.
Nov 1 00:22:52.063079 systemd[1409]: Startup finished in 53ms.
Nov 1 00:22:52.063301 systemd[1]: Started user@500.service.
Nov 1 00:22:52.064155 systemd[1]: Started session-1.scope.
Nov 1 00:22:52.115410 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:53844.service.
Nov 1 00:22:52.156865 sshd[1418]: Accepted publickey for core from 10.0.0.1 port 53844 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:22:52.158104 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:52.161522 systemd-logind[1305]: New session 2 of user core.
Nov 1 00:22:52.162311 systemd[1]: Started session-2.scope.
Nov 1 00:22:52.215705 sshd[1418]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:52.218093 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:53854.service.
Nov 1 00:22:52.218585 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:53844.service: Deactivated successfully.
Nov 1 00:22:52.219592 systemd-logind[1305]: Session 2 logged out. Waiting for processes to exit.
Nov 1 00:22:52.219631 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 00:22:52.220369 systemd-logind[1305]: Removed session 2.
Nov 1 00:22:52.266438 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:22:52.268187 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:52.271343 systemd-logind[1305]: New session 3 of user core.
Nov 1 00:22:52.272089 systemd[1]: Started session-3.scope.
Nov 1 00:22:52.326508 sshd[1424]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:52.328480 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:53864.service.
Nov 1 00:22:52.328901 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:53854.service: Deactivated successfully.
Nov 1 00:22:52.329940 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 00:22:52.329957 systemd-logind[1305]: Session 3 logged out. Waiting for processes to exit.
Nov 1 00:22:52.330908 systemd-logind[1305]: Removed session 3.
Nov 1 00:22:52.371132 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 53864 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:22:52.372291 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:52.376045 systemd-logind[1305]: New session 4 of user core.
Nov 1 00:22:52.376390 systemd[1]: Started session-4.scope.
Nov 1 00:22:52.431894 sshd[1430]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:52.434220 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:53878.service.
Nov 1 00:22:52.434681 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:53864.service: Deactivated successfully.
Nov 1 00:22:52.435736 systemd-logind[1305]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:22:52.435742 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:22:52.436679 systemd-logind[1305]: Removed session 4.
Nov 1 00:22:52.477182 sshd[1438]: Accepted publickey for core from 10.0.0.1 port 53878 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:22:52.477966 sshd[1438]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:22:52.481276 systemd-logind[1305]: New session 5 of user core.
Nov 1 00:22:52.482153 systemd[1]: Started session-5.scope.
Nov 1 00:22:52.541169 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:22:52.541400 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Nov 1 00:22:52.581871 systemd[1]: Starting docker.service...
Nov 1 00:22:52.640298 env[1454]: time="2025-11-01T00:22:52.640243296Z" level=info msg="Starting up"
Nov 1 00:22:52.641755 env[1454]: time="2025-11-01T00:22:52.641729764Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:22:52.641805 env[1454]: time="2025-11-01T00:22:52.641754725Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:22:52.641805 env[1454]: time="2025-11-01T00:22:52.641783119Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:22:52.641805 env[1454]: time="2025-11-01T00:22:52.641793332Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:22:52.643734 env[1454]: time="2025-11-01T00:22:52.643701984Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Nov 1 00:22:52.643840 env[1454]: time="2025-11-01T00:22:52.643823851Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Nov 1 00:22:52.643904 env[1454]: time="2025-11-01T00:22:52.643887869Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Nov 1 00:22:52.643958 env[1454]: time="2025-11-01T00:22:52.643944166Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Nov 1 00:22:52.849220 env[1454]: time="2025-11-01T00:22:52.848756039Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Nov 1 00:22:52.849385 env[1454]: time="2025-11-01T00:22:52.849367212Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Nov 1 00:22:52.849587 env[1454]: time="2025-11-01T00:22:52.849571563Z" level=info msg="Loading containers: start."
Nov 1 00:22:52.966131 kernel: Initializing XFRM netlink socket
Nov 1 00:22:52.989707 env[1454]: time="2025-11-01T00:22:52.989674220Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Nov 1 00:22:53.048056 systemd-networkd[1099]: docker0: Link UP
Nov 1 00:22:53.067481 env[1454]: time="2025-11-01T00:22:53.067435054Z" level=info msg="Loading containers: done."
Nov 1 00:22:53.082345 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2693857665-merged.mount: Deactivated successfully.
Nov 1 00:22:53.084806 env[1454]: time="2025-11-01T00:22:53.084755519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:22:53.084963 env[1454]: time="2025-11-01T00:22:53.084944301Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Nov 1 00:22:53.085087 env[1454]: time="2025-11-01T00:22:53.085071065Z" level=info msg="Daemon has completed initialization"
Nov 1 00:22:53.099132 systemd[1]: Started docker.service.
Nov 1 00:22:53.106687 env[1454]: time="2025-11-01T00:22:53.106230073Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:22:53.808717 env[1322]: time="2025-11-01T00:22:53.808669905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 00:22:54.376979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1840726624.mount: Deactivated successfully.
Nov 1 00:22:55.610665 env[1322]: time="2025-11-01T00:22:55.610615158Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:55.612319 env[1322]: time="2025-11-01T00:22:55.612282140Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:55.614192 env[1322]: time="2025-11-01T00:22:55.614161234Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:55.616768 env[1322]: time="2025-11-01T00:22:55.616739180Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:55.617592 env[1322]: time="2025-11-01T00:22:55.617562792Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Nov 1 00:22:55.618392 env[1322]: time="2025-11-01T00:22:55.618363765Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 00:22:57.333995 env[1322]: time="2025-11-01T00:22:57.333941930Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:57.335495 env[1322]: time="2025-11-01T00:22:57.335473397Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:57.337266 env[1322]: time="2025-11-01T00:22:57.337244943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:57.339644 env[1322]: time="2025-11-01T00:22:57.339621724Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:57.340402 env[1322]: time="2025-11-01T00:22:57.340369746Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Nov 1 00:22:57.340845 env[1322]: time="2025-11-01T00:22:57.340819658Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 00:22:58.580105 env[1322]: time="2025-11-01T00:22:58.580042274Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:58.582007 env[1322]: time="2025-11-01T00:22:58.581973440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:58.583942 env[1322]: time="2025-11-01T00:22:58.583904484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:58.586261 env[1322]: time="2025-11-01T00:22:58.586228174Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:22:58.586946 env[1322]: time="2025-11-01T00:22:58.586907460Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Nov 1 00:22:58.587412 env[1322]: time="2025-11-01T00:22:58.587383094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 00:22:59.450812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:22:59.451064 systemd[1]: Stopped kubelet.service.
Nov 1 00:22:59.452476 systemd[1]: Starting kubelet.service...
Nov 1 00:22:59.548920 systemd[1]: Started kubelet.service.
Nov 1 00:22:59.589082 kubelet[1593]: E1101 00:22:59.589033 1593 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:59.591248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:59.591395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:59.633482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount392659314.mount: Deactivated successfully.
Nov 1 00:23:00.287075 env[1322]: time="2025-11-01T00:23:00.287020156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:00.288615 env[1322]: time="2025-11-01T00:23:00.288581163Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:00.290140 env[1322]: time="2025-11-01T00:23:00.290104777Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:00.295234 env[1322]: time="2025-11-01T00:23:00.295199152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:00.296318 env[1322]: time="2025-11-01T00:23:00.296291909Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Nov 1 00:23:00.296840 env[1322]: time="2025-11-01T00:23:00.296794891Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 00:23:00.815134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1006517590.mount: Deactivated successfully.
Nov 1 00:23:01.783510 env[1322]: time="2025-11-01T00:23:01.783466193Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:01.785087 env[1322]: time="2025-11-01T00:23:01.785058387Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:01.787144 env[1322]: time="2025-11-01T00:23:01.786810440Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:01.789283 env[1322]: time="2025-11-01T00:23:01.789256720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:01.790067 env[1322]: time="2025-11-01T00:23:01.790036452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Nov 1 00:23:01.790721 env[1322]: time="2025-11-01T00:23:01.790689738Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 00:23:02.242273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201385799.mount: Deactivated successfully.
Nov 1 00:23:02.247433 env[1322]: time="2025-11-01T00:23:02.247371559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:02.249535 env[1322]: time="2025-11-01T00:23:02.249507849Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:02.251047 env[1322]: time="2025-11-01T00:23:02.251016192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:02.252652 env[1322]: time="2025-11-01T00:23:02.252623085Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:02.253207 env[1322]: time="2025-11-01T00:23:02.253181645Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Nov 1 00:23:02.253846 env[1322]: time="2025-11-01T00:23:02.253798248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 00:23:02.767892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110007501.mount: Deactivated successfully.
Nov 1 00:23:04.876073 env[1322]: time="2025-11-01T00:23:04.876015003Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:04.877370 env[1322]: time="2025-11-01T00:23:04.877320552Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:04.879255 env[1322]: time="2025-11-01T00:23:04.879223258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.16-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:04.881547 env[1322]: time="2025-11-01T00:23:04.881517882Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Nov 1 00:23:04.882437 env[1322]: time="2025-11-01T00:23:04.882405924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Nov 1 00:23:09.700906 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:23:09.701100 systemd[1]: Stopped kubelet.service.
Nov 1 00:23:09.702629 systemd[1]: Starting kubelet.service...
Nov 1 00:23:09.801062 systemd[1]: Started kubelet.service.
Nov 1 00:23:09.835516 kubelet[1630]: E1101 00:23:09.835460 1630 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:23:09.837508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:23:09.837658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:23:10.484151 systemd[1]: Stopped kubelet.service.
Nov 1 00:23:10.486183 systemd[1]: Starting kubelet.service...
Nov 1 00:23:10.510640 systemd[1]: Reloading.
Nov 1 00:23:10.558870 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-11-01T00:23:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]"
Nov 1 00:23:10.558902 /usr/lib/systemd/system-generators/torcx-generator[1666]: time="2025-11-01T00:23:10Z" level=info msg="torcx already run"
Nov 1 00:23:10.698349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Nov 1 00:23:10.698371 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Nov 1 00:23:10.714208 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:23:10.794746 systemd[1]: Started kubelet.service.
Nov 1 00:23:10.796406 systemd[1]: Stopping kubelet.service...
Nov 1 00:23:10.796969 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:23:10.797241 systemd[1]: Stopped kubelet.service. Nov 1 00:23:10.798969 systemd[1]: Starting kubelet.service... Nov 1 00:23:10.890912 systemd[1]: Started kubelet.service. Nov 1 00:23:10.930870 kubelet[1725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:23:10.930870 kubelet[1725]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:23:10.930870 kubelet[1725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:23:10.931272 kubelet[1725]: I1101 00:23:10.930937 1725 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:23:12.139048 kubelet[1725]: I1101 00:23:12.139012 1725 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:23:12.139415 kubelet[1725]: I1101 00:23:12.139400 1725 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:23:12.139742 kubelet[1725]: I1101 00:23:12.139725 1725 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:23:12.159907 kubelet[1725]: E1101 00:23:12.159862 1725 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: 
connection refused" logger="UnhandledError" Nov 1 00:23:12.164003 kubelet[1725]: I1101 00:23:12.163967 1725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:23:12.169875 kubelet[1725]: E1101 00:23:12.169848 1725 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:23:12.169952 kubelet[1725]: I1101 00:23:12.169884 1725 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:23:12.174929 kubelet[1725]: I1101 00:23:12.174904 1725 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:23:12.175880 kubelet[1725]: I1101 00:23:12.175823 1725 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:23:12.176092 kubelet[1725]: I1101 00:23:12.175869 1725 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:23:12.176180 kubelet[1725]: I1101 00:23:12.176153 1725 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:23:12.176180 kubelet[1725]: I1101 00:23:12.176162 1725 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:23:12.176629 kubelet[1725]: I1101 00:23:12.176597 1725 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:23:12.179220 kubelet[1725]: I1101 00:23:12.179200 1725 kubelet.go:446] "Attempting to 
sync node with API server" Nov 1 00:23:12.179260 kubelet[1725]: I1101 00:23:12.179231 1725 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:23:12.179286 kubelet[1725]: I1101 00:23:12.179270 1725 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:23:12.179286 kubelet[1725]: I1101 00:23:12.179279 1725 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:23:12.191681 kubelet[1725]: W1101 00:23:12.191627 1725 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Nov 1 00:23:12.191736 kubelet[1725]: E1101 00:23:12.191686 1725 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:23:12.193964 kubelet[1725]: I1101 00:23:12.193901 1725 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:23:12.194645 kubelet[1725]: I1101 00:23:12.194620 1725 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:23:12.194758 kubelet[1725]: W1101 00:23:12.194741 1725 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 1 00:23:12.195061 kubelet[1725]: W1101 00:23:12.195026 1725 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Nov 1 00:23:12.195181 kubelet[1725]: E1101 00:23:12.195162 1725 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:23:12.195636 kubelet[1725]: I1101 00:23:12.195604 1725 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:23:12.195703 kubelet[1725]: I1101 00:23:12.195643 1725 server.go:1287] "Started kubelet" Nov 1 00:23:12.195797 kubelet[1725]: I1101 00:23:12.195770 1725 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:23:12.195989 kubelet[1725]: I1101 00:23:12.195929 1725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:23:12.196215 kubelet[1725]: I1101 00:23:12.196191 1725 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:23:12.197542 kubelet[1725]: I1101 00:23:12.197522 1725 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:23:12.199021 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Nov 1 00:23:12.199226 kubelet[1725]: I1101 00:23:12.199157 1725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:23:12.199476 kubelet[1725]: I1101 00:23:12.199442 1725 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:23:12.199649 kubelet[1725]: E1101 00:23:12.199632 1725 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:23:12.200591 kubelet[1725]: E1101 00:23:12.200570 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:12.200701 kubelet[1725]: I1101 00:23:12.200690 1725 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:23:12.200932 kubelet[1725]: E1101 00:23:12.200691 1725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba2f737a58d9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:23:12.195623129 +0000 UTC m=+1.298918748,LastTimestamp:2025-11-01 00:23:12.195623129 +0000 UTC m=+1.298918748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:23:12.200932 kubelet[1725]: I1101 00:23:12.200903 1725 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:23:12.201141 kubelet[1725]: I1101 00:23:12.201125 1725 reconciler.go:26] "Reconciler: start to sync state" Nov 1 
00:23:12.201224 kubelet[1725]: I1101 00:23:12.201196 1725 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:23:12.201306 kubelet[1725]: I1101 00:23:12.201286 1725 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:23:12.201490 kubelet[1725]: W1101 00:23:12.201335 1725 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Nov 1 00:23:12.201490 kubelet[1725]: E1101 00:23:12.201383 1725 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:23:12.201892 kubelet[1725]: E1101 00:23:12.201867 1725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Nov 1 00:23:12.202315 kubelet[1725]: I1101 00:23:12.202293 1725 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:23:12.220459 kubelet[1725]: I1101 00:23:12.220427 1725 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:23:12.220459 kubelet[1725]: I1101 00:23:12.220447 1725 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:23:12.220459 kubelet[1725]: I1101 00:23:12.220464 1725 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:23:12.221198 kubelet[1725]: I1101 00:23:12.221163 1725 kubelet_network_linux.go:50] "Initialized 
iptables rules." protocol="IPv4" Nov 1 00:23:12.222272 kubelet[1725]: I1101 00:23:12.222251 1725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:23:12.222318 kubelet[1725]: I1101 00:23:12.222280 1725 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:23:12.222318 kubelet[1725]: I1101 00:23:12.222299 1725 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:23:12.222318 kubelet[1725]: I1101 00:23:12.222307 1725 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:23:12.222401 kubelet[1725]: E1101 00:23:12.222346 1725 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:23:12.223226 kubelet[1725]: W1101 00:23:12.222970 1725 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Nov 1 00:23:12.223274 kubelet[1725]: E1101 00:23:12.223244 1725 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:23:12.300415 kubelet[1725]: I1101 00:23:12.300370 1725 policy_none.go:49] "None policy: Start" Nov 1 00:23:12.300415 kubelet[1725]: I1101 00:23:12.300404 1725 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:23:12.300415 kubelet[1725]: I1101 00:23:12.300418 1725 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:23:12.301813 kubelet[1725]: E1101 00:23:12.301789 1725 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Nov 1 00:23:12.305691 kubelet[1725]: I1101 00:23:12.305639 1725 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:23:12.305796 kubelet[1725]: I1101 00:23:12.305779 1725 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:23:12.305836 kubelet[1725]: I1101 00:23:12.305795 1725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:23:12.306308 kubelet[1725]: I1101 00:23:12.306206 1725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:23:12.306750 kubelet[1725]: E1101 00:23:12.306731 1725 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:23:12.306814 kubelet[1725]: E1101 00:23:12.306783 1725 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:23:12.328052 kubelet[1725]: E1101 00:23:12.328022 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:12.330700 kubelet[1725]: E1101 00:23:12.330677 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:12.331731 kubelet[1725]: E1101 00:23:12.331707 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:12.404307 kubelet[1725]: I1101 00:23:12.402550 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:12.404307 kubelet[1725]: I1101 00:23:12.402599 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:12.404307 kubelet[1725]: I1101 00:23:12.402623 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:12.404307 kubelet[1725]: I1101 00:23:12.402639 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:12.404307 kubelet[1725]: I1101 00:23:12.402655 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:12.404508 kubelet[1725]: I1101 00:23:12.402670 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:12.404508 kubelet[1725]: I1101 00:23:12.402697 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:12.404508 kubelet[1725]: I1101 00:23:12.402718 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:12.404508 kubelet[1725]: I1101 00:23:12.402735 1725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:12.404508 kubelet[1725]: E1101 00:23:12.403229 1725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Nov 1 00:23:12.407545 kubelet[1725]: I1101 00:23:12.407502 1725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:23:12.408184 kubelet[1725]: E1101 00:23:12.408109 1725 kubelet_node_status.go:107] "Unable to register node with API server" err="Post 
\"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 1 00:23:12.609806 kubelet[1725]: I1101 00:23:12.609758 1725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:23:12.610433 kubelet[1725]: E1101 00:23:12.610157 1725 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 1 00:23:12.629527 kubelet[1725]: E1101 00:23:12.629501 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:12.630284 env[1322]: time="2025-11-01T00:23:12.630154028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbe77c5fbc4949b49d4e346fd3d9dc7a,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:12.631724 kubelet[1725]: E1101 00:23:12.631665 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:12.632242 kubelet[1725]: E1101 00:23:12.632202 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:12.632308 env[1322]: time="2025-11-01T00:23:12.632250480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:12.632560 env[1322]: time="2025-11-01T00:23:12.632493759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:12.804458 kubelet[1725]: E1101 00:23:12.804382 1725 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Nov 1 00:23:13.011738 kubelet[1725]: I1101 00:23:13.011659 1725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:23:13.012055 kubelet[1725]: E1101 00:23:13.012021 1725 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 1 00:23:13.059712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25408957.mount: Deactivated successfully. Nov 1 00:23:13.064651 env[1322]: time="2025-11-01T00:23:13.064610098Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.066808 env[1322]: time="2025-11-01T00:23:13.066779499Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.068318 env[1322]: time="2025-11-01T00:23:13.068290650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.069577 env[1322]: time="2025-11-01T00:23:13.069528168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.070772 env[1322]: time="2025-11-01T00:23:13.070748022Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.072139 env[1322]: 
time="2025-11-01T00:23:13.072109620Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.072816 env[1322]: time="2025-11-01T00:23:13.072776641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.073460 env[1322]: time="2025-11-01T00:23:13.073427201Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.076542 env[1322]: time="2025-11-01T00:23:13.076516830Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.078105 env[1322]: time="2025-11-01T00:23:13.078079207Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.078911 env[1322]: time="2025-11-01T00:23:13.078885528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.081865 env[1322]: time="2025-11-01T00:23:13.081836818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:13.103246 env[1322]: time="2025-11-01T00:23:13.103162511Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:13.103341 env[1322]: time="2025-11-01T00:23:13.103248301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:13.103341 env[1322]: time="2025-11-01T00:23:13.103274495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:13.103567 env[1322]: time="2025-11-01T00:23:13.103532308Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38f7688e36b0902478ce6ac897c112754e482066cf750852c090079b60cc3e81 pid=1766 runtime=io.containerd.runc.v2 Nov 1 00:23:13.107480 env[1322]: time="2025-11-01T00:23:13.107383760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:13.107480 env[1322]: time="2025-11-01T00:23:13.107445240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:13.107480 env[1322]: time="2025-11-01T00:23:13.107457175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:13.107721 env[1322]: time="2025-11-01T00:23:13.107653629Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb4037740c45983dc6d6418d84e136040a24649be5ff4ecd80fc92ab3bc43f4b pid=1788 runtime=io.containerd.runc.v2 Nov 1 00:23:13.109946 env[1322]: time="2025-11-01T00:23:13.109825753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:13.109946 env[1322]: time="2025-11-01T00:23:13.109885030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:13.109946 env[1322]: time="2025-11-01T00:23:13.109896124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:13.110408 env[1322]: time="2025-11-01T00:23:13.110344703Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f78469950de0a61ebf63033c498f671af995dee4885375853d4e64fbd39e8feb pid=1797 runtime=io.containerd.runc.v2 Nov 1 00:23:13.166850 env[1322]: time="2025-11-01T00:23:13.166808280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"38f7688e36b0902478ce6ac897c112754e482066cf750852c090079b60cc3e81\"" Nov 1 00:23:13.168041 kubelet[1725]: E1101 00:23:13.167971 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:13.170057 env[1322]: time="2025-11-01T00:23:13.170026034Z" level=info msg="CreateContainer within sandbox \"38f7688e36b0902478ce6ac897c112754e482066cf750852c090079b60cc3e81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:23:13.171285 env[1322]: time="2025-11-01T00:23:13.171259947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f78469950de0a61ebf63033c498f671af995dee4885375853d4e64fbd39e8feb\"" Nov 1 00:23:13.172065 kubelet[1725]: E1101 00:23:13.171891 1725 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:13.172131 env[1322]: time="2025-11-01T00:23:13.172056695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:fbe77c5fbc4949b49d4e346fd3d9dc7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb4037740c45983dc6d6418d84e136040a24649be5ff4ecd80fc92ab3bc43f4b\"" Nov 1 00:23:13.173352 kubelet[1725]: E1101 00:23:13.173242 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:13.174007 env[1322]: time="2025-11-01T00:23:13.173723367Z" level=info msg="CreateContainer within sandbox \"f78469950de0a61ebf63033c498f671af995dee4885375853d4e64fbd39e8feb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:23:13.174627 env[1322]: time="2025-11-01T00:23:13.174593811Z" level=info msg="CreateContainer within sandbox \"fb4037740c45983dc6d6418d84e136040a24649be5ff4ecd80fc92ab3bc43f4b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:23:13.184905 env[1322]: time="2025-11-01T00:23:13.184848250Z" level=info msg="CreateContainer within sandbox \"38f7688e36b0902478ce6ac897c112754e482066cf750852c090079b60cc3e81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a618b82a123d1d9de5d58980c922be0147e6736d005e37afce6a457633fa72be\"" Nov 1 00:23:13.185518 env[1322]: time="2025-11-01T00:23:13.185484111Z" level=info msg="StartContainer for \"a618b82a123d1d9de5d58980c922be0147e6736d005e37afce6a457633fa72be\"" Nov 1 00:23:13.194951 env[1322]: time="2025-11-01T00:23:13.194913885Z" level=info msg="CreateContainer within sandbox \"fb4037740c45983dc6d6418d84e136040a24649be5ff4ecd80fc92ab3bc43f4b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"498770163dba1fa586f68d5a0130cd49f3978f04461c2bc658778bdd69ebb207\"" Nov 1 00:23:13.195405 env[1322]: time="2025-11-01T00:23:13.195380087Z" level=info msg="StartContainer for \"498770163dba1fa586f68d5a0130cd49f3978f04461c2bc658778bdd69ebb207\"" Nov 1 00:23:13.196450 env[1322]: time="2025-11-01T00:23:13.196131377Z" level=info msg="CreateContainer within sandbox \"f78469950de0a61ebf63033c498f671af995dee4885375853d4e64fbd39e8feb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18e152d0a77ae03c564eeee4c6ba38524047e8501d7cff34f9e31f36fb96ce31\"" Nov 1 00:23:13.196536 env[1322]: time="2025-11-01T00:23:13.196511868Z" level=info msg="StartContainer for \"18e152d0a77ae03c564eeee4c6ba38524047e8501d7cff34f9e31f36fb96ce31\"" Nov 1 00:23:13.258489 env[1322]: time="2025-11-01T00:23:13.258438457Z" level=info msg="StartContainer for \"a618b82a123d1d9de5d58980c922be0147e6736d005e37afce6a457633fa72be\" returns successfully" Nov 1 00:23:13.278528 env[1322]: time="2025-11-01T00:23:13.278470840Z" level=info msg="StartContainer for \"18e152d0a77ae03c564eeee4c6ba38524047e8501d7cff34f9e31f36fb96ce31\" returns successfully" Nov 1 00:23:13.278663 env[1322]: time="2025-11-01T00:23:13.278536284Z" level=info msg="StartContainer for \"498770163dba1fa586f68d5a0130cd49f3978f04461c2bc658778bdd69ebb207\" returns successfully" Nov 1 00:23:13.319505 kubelet[1725]: W1101 00:23:13.319339 1725 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Nov 1 00:23:13.319505 kubelet[1725]: E1101 00:23:13.319404 1725 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: 
connect: connection refused" logger="UnhandledError" Nov 1 00:23:13.813678 kubelet[1725]: I1101 00:23:13.813644 1725 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:23:14.248577 kubelet[1725]: E1101 00:23:14.248543 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:14.248909 kubelet[1725]: E1101 00:23:14.248673 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:14.250590 kubelet[1725]: E1101 00:23:14.250564 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:14.250689 kubelet[1725]: E1101 00:23:14.250672 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:14.252290 kubelet[1725]: E1101 00:23:14.252260 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:14.252401 kubelet[1725]: E1101 00:23:14.252382 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:14.827273 kubelet[1725]: E1101 00:23:14.827223 1725 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:23:14.880989 kubelet[1725]: I1101 00:23:14.880943 1725 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:23:14.881158 kubelet[1725]: E1101 00:23:14.881145 1725 kubelet_node_status.go:548] "Error updating node 
status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 1 00:23:14.888587 kubelet[1725]: E1101 00:23:14.888562 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:14.989664 kubelet[1725]: E1101 00:23:14.989622 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.090318 kubelet[1725]: E1101 00:23:15.090207 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.191290 kubelet[1725]: E1101 00:23:15.191255 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.254318 kubelet[1725]: E1101 00:23:15.254289 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:15.254773 kubelet[1725]: E1101 00:23:15.254757 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:15.254842 kubelet[1725]: E1101 00:23:15.254816 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:15.254975 kubelet[1725]: E1101 00:23:15.254954 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:15.255240 kubelet[1725]: E1101 00:23:15.255210 1725 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:23:15.255427 kubelet[1725]: E1101 00:23:15.255411 1725 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:15.292409 kubelet[1725]: E1101 00:23:15.292345 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.394330 kubelet[1725]: E1101 00:23:15.393237 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.494158 kubelet[1725]: E1101 00:23:15.494069 1725 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:15.602055 kubelet[1725]: I1101 00:23:15.601997 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:15.609022 kubelet[1725]: E1101 00:23:15.608964 1725 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:15.609022 kubelet[1725]: I1101 00:23:15.609003 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:15.610537 kubelet[1725]: E1101 00:23:15.610508 1725 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:15.610537 kubelet[1725]: I1101 00:23:15.610531 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:15.612565 kubelet[1725]: E1101 00:23:15.612544 1725 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:16.182179 
kubelet[1725]: I1101 00:23:16.182143 1725 apiserver.go:52] "Watching apiserver" Nov 1 00:23:16.201074 kubelet[1725]: I1101 00:23:16.201023 1725 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:23:16.254694 kubelet[1725]: I1101 00:23:16.254665 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:16.255192 kubelet[1725]: I1101 00:23:16.254755 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:16.260269 kubelet[1725]: E1101 00:23:16.260230 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:16.261826 kubelet[1725]: E1101 00:23:16.261786 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:16.721165 kubelet[1725]: I1101 00:23:16.721128 1725 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:16.746588 kubelet[1725]: E1101 00:23:16.746513 1725 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:16.849485 systemd[1]: Reloading. 
Nov 1 00:23:16.917027 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2025-11-01T00:23:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Nov 1 00:23:16.917055 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2025-11-01T00:23:16Z" level=info msg="torcx already run" Nov 1 00:23:16.985131 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Nov 1 00:23:16.985152 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Nov 1 00:23:17.000751 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:23:17.072962 systemd[1]: Stopping kubelet.service... Nov 1 00:23:17.096412 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:23:17.096710 systemd[1]: Stopped kubelet.service. Nov 1 00:23:17.098582 systemd[1]: Starting kubelet.service... Nov 1 00:23:17.193814 systemd[1]: Started kubelet.service. Nov 1 00:23:17.247420 kubelet[2069]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:23:17.247420 kubelet[2069]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 1 00:23:17.247420 kubelet[2069]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:23:17.248070 kubelet[2069]: I1101 00:23:17.247971 2069 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:23:17.259858 kubelet[2069]: I1101 00:23:17.259454 2069 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:23:17.259858 kubelet[2069]: I1101 00:23:17.259489 2069 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:23:17.260424 kubelet[2069]: I1101 00:23:17.260292 2069 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:23:17.261557 kubelet[2069]: I1101 00:23:17.261540 2069 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 1 00:23:17.263959 kubelet[2069]: I1101 00:23:17.263938 2069 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:23:17.266818 kubelet[2069]: E1101 00:23:17.266795 2069 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:23:17.266818 kubelet[2069]: I1101 00:23:17.266817 2069 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 1 00:23:17.271470 kubelet[2069]: I1101 00:23:17.271446 2069 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 1 00:23:17.272048 kubelet[2069]: I1101 00:23:17.272013 2069 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:23:17.272302 kubelet[2069]: I1101 00:23:17.272119 2069 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Nov 1 00:23:17.272451 kubelet[2069]: I1101 00:23:17.272439 2069 topology_manager.go:138] "Creating topology manager with none policy" 
Nov 1 00:23:17.272508 kubelet[2069]: I1101 00:23:17.272499 2069 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:23:17.272598 kubelet[2069]: I1101 00:23:17.272588 2069 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:23:17.272780 kubelet[2069]: I1101 00:23:17.272768 2069 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:23:17.272850 kubelet[2069]: I1101 00:23:17.272839 2069 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:23:17.272926 kubelet[2069]: I1101 00:23:17.272917 2069 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:23:17.273005 kubelet[2069]: I1101 00:23:17.272994 2069 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:23:17.273847 kubelet[2069]: I1101 00:23:17.273826 2069 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Nov 1 00:23:17.274340 kubelet[2069]: I1101 00:23:17.274325 2069 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:23:17.275435 kubelet[2069]: I1101 00:23:17.275419 2069 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:23:17.275505 kubelet[2069]: I1101 00:23:17.275465 2069 server.go:1287] "Started kubelet" Nov 1 00:23:17.275818 kubelet[2069]: I1101 00:23:17.275759 2069 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:23:17.276121 kubelet[2069]: I1101 00:23:17.276106 2069 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:23:17.277469 kubelet[2069]: I1101 00:23:17.277404 2069 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:23:17.280016 kubelet[2069]: I1101 00:23:17.278181 2069 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:23:17.280679 kubelet[2069]: I1101 00:23:17.280635 2069 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:23:17.281645 kubelet[2069]: I1101 00:23:17.281624 2069 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:23:17.282845 kubelet[2069]: E1101 00:23:17.282827 2069 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:23:17.282948 kubelet[2069]: I1101 00:23:17.282938 2069 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:23:17.283489 kubelet[2069]: I1101 00:23:17.283469 2069 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:23:17.283709 kubelet[2069]: I1101 00:23:17.283697 2069 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:23:17.287951 kubelet[2069]: E1101 00:23:17.287930 2069 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:23:17.289640 kubelet[2069]: I1101 00:23:17.289432 2069 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:23:17.289640 kubelet[2069]: I1101 00:23:17.289560 2069 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:23:17.297874 kubelet[2069]: I1101 00:23:17.297831 2069 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:23:17.309880 kubelet[2069]: I1101 00:23:17.309825 2069 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 00:23:17.311031 kubelet[2069]: I1101 00:23:17.310833 2069 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 1 00:23:17.311031 kubelet[2069]: I1101 00:23:17.310859 2069 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:23:17.311031 kubelet[2069]: I1101 00:23:17.310904 2069 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:23:17.311031 kubelet[2069]: I1101 00:23:17.310917 2069 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:23:17.311031 kubelet[2069]: E1101 00:23:17.310987 2069 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:23:17.338838 kubelet[2069]: I1101 00:23:17.338812 2069 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:23:17.339111 kubelet[2069]: I1101 00:23:17.339096 2069 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:23:17.339191 kubelet[2069]: I1101 00:23:17.339182 2069 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:23:17.339426 kubelet[2069]: I1101 00:23:17.339412 2069 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:23:17.339518 kubelet[2069]: I1101 00:23:17.339493 2069 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:23:17.339576 kubelet[2069]: I1101 00:23:17.339568 2069 policy_none.go:49] "None policy: Start" Nov 1 00:23:17.339629 kubelet[2069]: I1101 00:23:17.339620 2069 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:23:17.339684 kubelet[2069]: I1101 00:23:17.339675 2069 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:23:17.339865 kubelet[2069]: I1101 00:23:17.339855 2069 state_mem.go:75] "Updated machine memory state" Nov 1 00:23:17.341150 kubelet[2069]: I1101 00:23:17.341132 2069 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:23:17.342234 kubelet[2069]: I1101 00:23:17.342219 
2069 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:23:17.342404 kubelet[2069]: I1101 00:23:17.342373 2069 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:23:17.343022 kubelet[2069]: I1101 00:23:17.343008 2069 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:23:17.343818 kubelet[2069]: E1101 00:23:17.343798 2069 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:23:17.412543 kubelet[2069]: I1101 00:23:17.412475 2069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.412913 kubelet[2069]: I1101 00:23:17.412894 2069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:17.413048 kubelet[2069]: I1101 00:23:17.413021 2069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:17.418123 kubelet[2069]: E1101 00:23:17.418093 2069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.420787 kubelet[2069]: E1101 00:23:17.420664 2069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:17.420910 kubelet[2069]: E1101 00:23:17.420895 2069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:17.446918 kubelet[2069]: I1101 00:23:17.446890 2069 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:23:17.453748 kubelet[2069]: I1101 00:23:17.453720 2069 kubelet_node_status.go:124] "Node was previously registered" 
node="localhost" Nov 1 00:23:17.453876 kubelet[2069]: I1101 00:23:17.453814 2069 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:23:17.485013 kubelet[2069]: I1101 00:23:17.484946 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.485013 kubelet[2069]: I1101 00:23:17.485000 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.485013 kubelet[2069]: I1101 00:23:17.485021 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.485236 kubelet[2069]: I1101 00:23:17.485041 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:17.485236 kubelet[2069]: I1101 00:23:17.485058 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:17.485236 kubelet[2069]: I1101 00:23:17.485075 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbe77c5fbc4949b49d4e346fd3d9dc7a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"fbe77c5fbc4949b49d4e346fd3d9dc7a\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:17.485236 kubelet[2069]: I1101 00:23:17.485093 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.485236 kubelet[2069]: I1101 00:23:17.485109 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:23:17.485347 kubelet[2069]: I1101 00:23:17.485127 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:23:17.719160 kubelet[2069]: E1101 00:23:17.719133 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:17.721266 
kubelet[2069]: E1101 00:23:17.721239 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:17.721486 kubelet[2069]: E1101 00:23:17.721468 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:17.836829 sudo[2105]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 1 00:23:17.837074 sudo[2105]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Nov 1 00:23:18.273880 kubelet[2069]: I1101 00:23:18.273836 2069 apiserver.go:52] "Watching apiserver" Nov 1 00:23:18.283683 kubelet[2069]: I1101 00:23:18.283662 2069 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:23:18.291142 sudo[2105]: pam_unix(sudo:session): session closed for user root Nov 1 00:23:18.321529 kubelet[2069]: I1101 00:23:18.321466 2069 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:18.322566 kubelet[2069]: E1101 00:23:18.322371 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:18.323053 kubelet[2069]: E1101 00:23:18.323034 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:18.412468 kubelet[2069]: E1101 00:23:18.412426 2069 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 1 00:23:18.412613 kubelet[2069]: E1101 00:23:18.412597 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:18.488273 kubelet[2069]: I1101 00:23:18.487870 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.487854342 podStartE2EDuration="2.487854342s" podCreationTimestamp="2025-11-01 00:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:18.472583343 +0000 UTC m=+1.273710590" watchObservedRunningTime="2025-11-01 00:23:18.487854342 +0000 UTC m=+1.288981509" Nov 1 00:23:18.496409 kubelet[2069]: I1101 00:23:18.496363 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.496346489 podStartE2EDuration="2.496346489s" podCreationTimestamp="2025-11-01 00:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:18.488768707 +0000 UTC m=+1.289895874" watchObservedRunningTime="2025-11-01 00:23:18.496346489 +0000 UTC m=+1.297473656" Nov 1 00:23:19.322941 kubelet[2069]: E1101 00:23:19.322914 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:19.323313 kubelet[2069]: E1101 00:23:19.323026 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:20.324315 kubelet[2069]: E1101 00:23:20.324281 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:20.361432 kubelet[2069]: E1101 
00:23:20.361389 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:20.488130 sudo[1443]: pam_unix(sudo:session): session closed for user root Nov 1 00:23:20.490667 sshd[1438]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:20.493398 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:53878.service: Deactivated successfully. Nov 1 00:23:20.494429 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:23:20.494759 systemd-logind[1305]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:23:20.495490 systemd-logind[1305]: Removed session 5. Nov 1 00:23:22.162284 kubelet[2069]: I1101 00:23:22.162253 2069 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:23:22.163029 env[1322]: time="2025-11-01T00:23:22.162954026Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 1 00:23:22.163494 kubelet[2069]: I1101 00:23:22.163479 2069 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:23:22.819169 kubelet[2069]: I1101 00:23:22.819106 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.819087631 podStartE2EDuration="6.819087631s" podCreationTimestamp="2025-11-01 00:23:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:18.496637441 +0000 UTC m=+1.297764608" watchObservedRunningTime="2025-11-01 00:23:22.819087631 +0000 UTC m=+5.620214798" Nov 1 00:23:22.822379 kubelet[2069]: I1101 00:23:22.822336 2069 status_manager.go:890] "Failed to get status for pod" podUID="6b8d0e39-37fc-4910-853c-3f23c783452d" pod="kube-system/kube-proxy-2s29r" err="pods \"kube-proxy-2s29r\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Nov 1 00:23:22.822621 kubelet[2069]: W1101 00:23:22.822379 2069 reflector.go:569] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Nov 1 00:23:22.822870 kubelet[2069]: E1101 00:23:22.822844 2069 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 1 00:23:22.822967 kubelet[2069]: W1101 
00:23:22.822807 2069 reflector.go:569] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Nov 1 00:23:22.823073 kubelet[2069]: E1101 00:23:22.823056 2069 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Nov 1 00:23:22.925853 kubelet[2069]: I1101 00:23:22.925815 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b8d0e39-37fc-4910-853c-3f23c783452d-xtables-lock\") pod \"kube-proxy-2s29r\" (UID: \"6b8d0e39-37fc-4910-853c-3f23c783452d\") " pod="kube-system/kube-proxy-2s29r" Nov 1 00:23:22.925853 kubelet[2069]: I1101 00:23:22.925857 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-clustermesh-secrets\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.925876 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-proxy\") pod \"kube-proxy-2s29r\" (UID: \"6b8d0e39-37fc-4910-853c-3f23c783452d\") " pod="kube-system/kube-proxy-2s29r" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.925892 2069 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-run\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.925932 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-net\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.925989 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-bpf-maps\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.926019 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-etc-cni-netd\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926062 kubelet[2069]: I1101 00:23:22.926042 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-kernel\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926229 kubelet[2069]: I1101 00:23:22.926062 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcrjn\" (UniqueName: 
\"kubernetes.io/projected/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-api-access-gcrjn\") pod \"kube-proxy-2s29r\" (UID: \"6b8d0e39-37fc-4910-853c-3f23c783452d\") " pod="kube-system/kube-proxy-2s29r" Nov 1 00:23:22.926229 kubelet[2069]: I1101 00:23:22.926086 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b8d0e39-37fc-4910-853c-3f23c783452d-lib-modules\") pod \"kube-proxy-2s29r\" (UID: \"6b8d0e39-37fc-4910-853c-3f23c783452d\") " pod="kube-system/kube-proxy-2s29r" Nov 1 00:23:22.926229 kubelet[2069]: I1101 00:23:22.926107 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hostproc\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926229 kubelet[2069]: I1101 00:23:22.926129 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f44d\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926229 kubelet[2069]: I1101 00:23:22.926144 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-xtables-lock\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926340 kubelet[2069]: I1101 00:23:22.926171 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-config-path\") pod \"cilium-fbkk2\" 
(UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926340 kubelet[2069]: I1101 00:23:22.926195 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cni-path\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926340 kubelet[2069]: I1101 00:23:22.926212 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-lib-modules\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926340 kubelet[2069]: I1101 00:23:22.926229 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hubble-tls\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:22.926340 kubelet[2069]: I1101 00:23:22.926243 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-cgroup\") pod \"cilium-fbkk2\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " pod="kube-system/cilium-fbkk2" Nov 1 00:23:23.027376 kubelet[2069]: I1101 00:23:23.027336 2069 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 1 00:23:23.328478 kubelet[2069]: I1101 00:23:23.328443 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2g2j\" (UniqueName: \"kubernetes.io/projected/71b0b001-fd1d-49e0-a0bb-46b1911fa452-kube-api-access-x2g2j\") pod \"cilium-operator-6c4d7847fc-g6849\" (UID: \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\") " pod="kube-system/cilium-operator-6c4d7847fc-g6849" Nov 1 00:23:23.328822 kubelet[2069]: I1101 00:23:23.328491 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71b0b001-fd1d-49e0-a0bb-46b1911fa452-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-g6849\" (UID: \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\") " pod="kube-system/cilium-operator-6c4d7847fc-g6849" Nov 1 00:23:24.027890 kubelet[2069]: E1101 00:23:24.027853 2069 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.028181 kubelet[2069]: E1101 00:23:24.028162 2069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-proxy podName:6b8d0e39-37fc-4910-853c-3f23c783452d nodeName:}" failed. No retries permitted until 2025-11-01 00:23:24.528138492 +0000 UTC m=+7.329265659 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-proxy") pod "kube-proxy-2s29r" (UID: "6b8d0e39-37fc-4910-853c-3f23c783452d") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040133 kubelet[2069]: E1101 00:23:24.040088 2069 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040133 kubelet[2069]: E1101 00:23:24.040134 2069 projected.go:194] Error preparing data for projected volume kube-api-access-5f44d for pod kube-system/cilium-fbkk2: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040289 kubelet[2069]: E1101 00:23:24.040202 2069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d podName:bf21b7b0-ffe4-4d30-86bd-6e21036bc37c nodeName:}" failed. No retries permitted until 2025-11-01 00:23:24.540183838 +0000 UTC m=+7.341311005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5f44d" (UniqueName: "kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d") pod "cilium-fbkk2" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040386 kubelet[2069]: E1101 00:23:24.040078 2069 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040479 kubelet[2069]: E1101 00:23:24.040464 2069 projected.go:194] Error preparing data for projected volume kube-api-access-gcrjn for pod kube-system/kube-proxy-2s29r: failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.040569 kubelet[2069]: E1101 00:23:24.040558 2069 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-api-access-gcrjn podName:6b8d0e39-37fc-4910-853c-3f23c783452d nodeName:}" failed. No retries permitted until 2025-11-01 00:23:24.54054528 +0000 UTC m=+7.341672407 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gcrjn" (UniqueName: "kubernetes.io/projected/6b8d0e39-37fc-4910-853c-3f23c783452d-kube-api-access-gcrjn") pod "kube-proxy-2s29r" (UID: "6b8d0e39-37fc-4910-853c-3f23c783452d") : failed to sync configmap cache: timed out waiting for the condition Nov 1 00:23:24.209569 kubelet[2069]: E1101 00:23:24.206861 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.210282 env[1322]: time="2025-11-01T00:23:24.209995778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g6849,Uid:71b0b001-fd1d-49e0-a0bb-46b1911fa452,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:24.245651 env[1322]: time="2025-11-01T00:23:24.245588151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.245651 env[1322]: time="2025-11-01T00:23:24.245624972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.245840 env[1322]: time="2025-11-01T00:23:24.245634977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.245933 env[1322]: time="2025-11-01T00:23:24.245899366Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725 pid=2164 runtime=io.containerd.runc.v2 Nov 1 00:23:24.258796 systemd[1]: run-containerd-runc-k8s.io-c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725-runc.VHYPST.mount: Deactivated successfully. 
Nov 1 00:23:24.294325 env[1322]: time="2025-11-01T00:23:24.294225070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-g6849,Uid:71b0b001-fd1d-49e0-a0bb-46b1911fa452,Namespace:kube-system,Attempt:0,} returns sandbox id \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\"" Nov 1 00:23:24.295663 kubelet[2069]: E1101 00:23:24.295613 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.296778 env[1322]: time="2025-11-01T00:23:24.296749523Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 1 00:23:24.923313 kubelet[2069]: E1101 00:23:24.923238 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.924154 env[1322]: time="2025-11-01T00:23:24.924046911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2s29r,Uid:6b8d0e39-37fc-4910-853c-3f23c783452d,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:24.928327 kubelet[2069]: E1101 00:23:24.928302 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.928831 env[1322]: time="2025-11-01T00:23:24.928797651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbkk2,Uid:bf21b7b0-ffe4-4d30-86bd-6e21036bc37c,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:24.942616 env[1322]: time="2025-11-01T00:23:24.942527661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.942616 env[1322]: time="2025-11-01T00:23:24.942569244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.942616 env[1322]: time="2025-11-01T00:23:24.942580090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.944342 env[1322]: time="2025-11-01T00:23:24.944131959Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/93ca4233d32c74be3ca2179bbb3f99b5bb3d3b8116c2e37afd74a8f08c337f23 pid=2208 runtime=io.containerd.runc.v2 Nov 1 00:23:24.945792 env[1322]: time="2025-11-01T00:23:24.945731175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.945890 env[1322]: time="2025-11-01T00:23:24.945815502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.945890 env[1322]: time="2025-11-01T00:23:24.945844678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.946069 env[1322]: time="2025-11-01T00:23:24.946034184Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb pid=2225 runtime=io.containerd.runc.v2 Nov 1 00:23:24.989658 env[1322]: time="2025-11-01T00:23:24.989618913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbkk2,Uid:bf21b7b0-ffe4-4d30-86bd-6e21036bc37c,Namespace:kube-system,Attempt:0,} returns sandbox id \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\"" Nov 1 00:23:24.989863 env[1322]: time="2025-11-01T00:23:24.989632081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2s29r,Uid:6b8d0e39-37fc-4910-853c-3f23c783452d,Namespace:kube-system,Attempt:0,} returns sandbox id \"93ca4233d32c74be3ca2179bbb3f99b5bb3d3b8116c2e37afd74a8f08c337f23\"" Nov 1 00:23:24.990578 kubelet[2069]: E1101 00:23:24.990549 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.991109 kubelet[2069]: E1101 00:23:24.991087 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.992718 env[1322]: time="2025-11-01T00:23:24.992685751Z" level=info msg="CreateContainer within sandbox \"93ca4233d32c74be3ca2179bbb3f99b5bb3d3b8116c2e37afd74a8f08c337f23\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:23:25.004905 env[1322]: time="2025-11-01T00:23:25.004777238Z" level=info msg="CreateContainer within sandbox \"93ca4233d32c74be3ca2179bbb3f99b5bb3d3b8116c2e37afd74a8f08c337f23\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"b32d422ff8fc46c61a1f975d80628b5e9fdd4bb4d1f6e1b987ee60e253fc3fbb\"" Nov 1 00:23:25.005679 env[1322]: time="2025-11-01T00:23:25.005418538Z" level=info msg="StartContainer for \"b32d422ff8fc46c61a1f975d80628b5e9fdd4bb4d1f6e1b987ee60e253fc3fbb\"" Nov 1 00:23:25.064504 env[1322]: time="2025-11-01T00:23:25.064461626Z" level=info msg="StartContainer for \"b32d422ff8fc46c61a1f975d80628b5e9fdd4bb4d1f6e1b987ee60e253fc3fbb\" returns successfully" Nov 1 00:23:25.335771 kubelet[2069]: E1101 00:23:25.335742 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.486138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355711406.mount: Deactivated successfully. Nov 1 00:23:26.051807 env[1322]: time="2025-11-01T00:23:26.051739133Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:26.052950 env[1322]: time="2025-11-01T00:23:26.052920926Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:26.054930 env[1322]: time="2025-11-01T00:23:26.054901040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:26.055516 env[1322]: time="2025-11-01T00:23:26.055478770Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference 
\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 1 00:23:26.057489 env[1322]: time="2025-11-01T00:23:26.057176381Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 1 00:23:26.058917 env[1322]: time="2025-11-01T00:23:26.058874713Z" level=info msg="CreateContainer within sandbox \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 1 00:23:26.068850 env[1322]: time="2025-11-01T00:23:26.068817542Z" level=info msg="CreateContainer within sandbox \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\"" Nov 1 00:23:26.069310 env[1322]: time="2025-11-01T00:23:26.069287657Z" level=info msg="StartContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\"" Nov 1 00:23:26.111995 env[1322]: time="2025-11-01T00:23:26.111944218Z" level=info msg="StartContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" returns successfully" Nov 1 00:23:26.337709 kubelet[2069]: E1101 00:23:26.337598 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:26.347320 kubelet[2069]: I1101 00:23:26.347265 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2s29r" podStartSLOduration=4.347247988 podStartE2EDuration="4.347247988s" podCreationTimestamp="2025-11-01 00:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:25.346260678 +0000 UTC m=+8.147387845" watchObservedRunningTime="2025-11-01 
00:23:26.347247988 +0000 UTC m=+9.148375155" Nov 1 00:23:26.347494 kubelet[2069]: I1101 00:23:26.347472 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-g6849" podStartSLOduration=1.587270294 podStartE2EDuration="3.347467658s" podCreationTimestamp="2025-11-01 00:23:23 +0000 UTC" firstStartedPulling="2025-11-01 00:23:24.29638696 +0000 UTC m=+7.097514127" lastFinishedPulling="2025-11-01 00:23:26.056584324 +0000 UTC m=+8.857711491" observedRunningTime="2025-11-01 00:23:26.346320843 +0000 UTC m=+9.147448010" watchObservedRunningTime="2025-11-01 00:23:26.347467658 +0000 UTC m=+9.148594825" Nov 1 00:23:26.919137 kubelet[2069]: E1101 00:23:26.919107 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:27.339642 kubelet[2069]: E1101 00:23:27.339297 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:27.339642 kubelet[2069]: E1101 00:23:27.339597 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:28.347538 kubelet[2069]: E1101 00:23:28.347196 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:30.128495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2813717773.mount: Deactivated successfully. 
Nov 1 00:23:30.276372 kubelet[2069]: E1101 00:23:30.276300 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:30.372383 kubelet[2069]: E1101 00:23:30.372330 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:32.368192 env[1322]: time="2025-11-01T00:23:32.368136387Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:32.369723 env[1322]: time="2025-11-01T00:23:32.369668627Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:32.371061 env[1322]: time="2025-11-01T00:23:32.371030165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Nov 1 00:23:32.372204 env[1322]: time="2025-11-01T00:23:32.372169262Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 1 00:23:32.374789 env[1322]: time="2025-11-01T00:23:32.374209728Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:23:32.383267 env[1322]: time="2025-11-01T00:23:32.383216862Z" level=info 
msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\"" Nov 1 00:23:32.384021 env[1322]: time="2025-11-01T00:23:32.383970138Z" level=info msg="StartContainer for \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\"" Nov 1 00:23:32.506264 env[1322]: time="2025-11-01T00:23:32.506212163Z" level=info msg="StartContainer for \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\" returns successfully" Nov 1 00:23:32.521776 env[1322]: time="2025-11-01T00:23:32.521721195Z" level=info msg="shim disconnected" id=d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76 Nov 1 00:23:32.521776 env[1322]: time="2025-11-01T00:23:32.521766091Z" level=warning msg="cleaning up after shim disconnected" id=d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76 namespace=k8s.io Nov 1 00:23:32.521776 env[1322]: time="2025-11-01T00:23:32.521777055Z" level=info msg="cleaning up dead shim" Nov 1 00:23:32.528069 env[1322]: time="2025-11-01T00:23:32.528033303Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2535 runtime=io.containerd.runc.v2\n" Nov 1 00:23:32.993248 update_engine[1311]: I1101 00:23:32.993203 1311 update_attempter.cc:509] Updating boot flags... 
Nov 1 00:23:33.357444 kubelet[2069]: E1101 00:23:33.357229 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:33.360890 env[1322]: time="2025-11-01T00:23:33.360846172Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 1 00:23:33.380687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76-rootfs.mount: Deactivated successfully.
Nov 1 00:23:33.382211 env[1322]: time="2025-11-01T00:23:33.382113367Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\""
Nov 1 00:23:33.384393 env[1322]: time="2025-11-01T00:23:33.383600564Z" level=info msg="StartContainer for \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\""
Nov 1 00:23:33.438449 env[1322]: time="2025-11-01T00:23:33.438392697Z" level=info msg="StartContainer for \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\" returns successfully"
Nov 1 00:23:33.445691 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:23:33.445997 systemd[1]: Stopped systemd-sysctl.service.
Nov 1 00:23:33.446169 systemd[1]: Stopping systemd-sysctl.service...
Nov 1 00:23:33.447686 systemd[1]: Starting systemd-sysctl.service...
Nov 1 00:23:33.450969 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 1 00:23:33.460632 systemd[1]: Finished systemd-sysctl.service.
Nov 1 00:23:33.467151 env[1322]: time="2025-11-01T00:23:33.467107643Z" level=info msg="shim disconnected" id=3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118
Nov 1 00:23:33.467452 env[1322]: time="2025-11-01T00:23:33.467431315Z" level=warning msg="cleaning up after shim disconnected" id=3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118 namespace=k8s.io
Nov 1 00:23:33.467532 env[1322]: time="2025-11-01T00:23:33.467517785Z" level=info msg="cleaning up dead shim"
Nov 1 00:23:33.474276 env[1322]: time="2025-11-01T00:23:33.474238962Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2613 runtime=io.containerd.runc.v2\n"
Nov 1 00:23:34.361359 kubelet[2069]: E1101 00:23:34.361321 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:34.363569 env[1322]: time="2025-11-01T00:23:34.363531812Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 1 00:23:34.381522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118-rootfs.mount: Deactivated successfully.
Nov 1 00:23:34.404499 env[1322]: time="2025-11-01T00:23:34.404460114Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\""
Nov 1 00:23:34.406055 env[1322]: time="2025-11-01T00:23:34.406019870Z" level=info msg="StartContainer for \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\""
Nov 1 00:23:34.467367 env[1322]: time="2025-11-01T00:23:34.467329196Z" level=info msg="StartContainer for \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\" returns successfully"
Nov 1 00:23:34.485561 env[1322]: time="2025-11-01T00:23:34.485501850Z" level=info msg="shim disconnected" id=981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe
Nov 1 00:23:34.485561 env[1322]: time="2025-11-01T00:23:34.485548625Z" level=warning msg="cleaning up after shim disconnected" id=981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe namespace=k8s.io
Nov 1 00:23:34.485561 env[1322]: time="2025-11-01T00:23:34.485558308Z" level=info msg="cleaning up dead shim"
Nov 1 00:23:34.492094 env[1322]: time="2025-11-01T00:23:34.492051977Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2672 runtime=io.containerd.runc.v2\n"
Nov 1 00:23:35.371262 kubelet[2069]: E1101 00:23:35.371141 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:35.375476 env[1322]: time="2025-11-01T00:23:35.375412979Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 1 00:23:35.380861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe-rootfs.mount: Deactivated successfully.
Nov 1 00:23:35.397501 env[1322]: time="2025-11-01T00:23:35.397459765Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\""
Nov 1 00:23:35.398146 env[1322]: time="2025-11-01T00:23:35.398122894Z" level=info msg="StartContainer for \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\""
Nov 1 00:23:35.452870 env[1322]: time="2025-11-01T00:23:35.452806004Z" level=info msg="StartContainer for \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\" returns successfully"
Nov 1 00:23:35.472723 env[1322]: time="2025-11-01T00:23:35.472682707Z" level=info msg="shim disconnected" id=b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212
Nov 1 00:23:35.473033 env[1322]: time="2025-11-01T00:23:35.473011210Z" level=warning msg="cleaning up after shim disconnected" id=b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212 namespace=k8s.io
Nov 1 00:23:35.473116 env[1322]: time="2025-11-01T00:23:35.473102919Z" level=info msg="cleaning up dead shim"
Nov 1 00:23:35.480040 env[1322]: time="2025-11-01T00:23:35.480004654Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2728 runtime=io.containerd.runc.v2\n"
Nov 1 00:23:36.377961 kubelet[2069]: E1101 00:23:36.377318 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:36.381909 env[1322]: time="2025-11-01T00:23:36.379573911Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 1 00:23:36.380841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212-rootfs.mount: Deactivated successfully.
Nov 1 00:23:36.405512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount763538704.mount: Deactivated successfully.
Nov 1 00:23:36.409286 env[1322]: time="2025-11-01T00:23:36.409237499Z" level=info msg="CreateContainer within sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\""
Nov 1 00:23:36.410197 env[1322]: time="2025-11-01T00:23:36.410167178Z" level=info msg="StartContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\""
Nov 1 00:23:36.466053 env[1322]: time="2025-11-01T00:23:36.466005344Z" level=info msg="StartContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" returns successfully"
Nov 1 00:23:36.559348 kubelet[2069]: I1101 00:23:36.559108 2069 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 1 00:23:36.652004 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Nov 1 00:23:36.718522 kubelet[2069]: I1101 00:23:36.718407 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/282fecde-c653-4c95-9cac-6bb7ca88090c-config-volume\") pod \"coredns-668d6bf9bc-vf7pb\" (UID: \"282fecde-c653-4c95-9cac-6bb7ca88090c\") " pod="kube-system/coredns-668d6bf9bc-vf7pb"
Nov 1 00:23:36.718522 kubelet[2069]: I1101 00:23:36.718508 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hqf9\" (UniqueName: \"kubernetes.io/projected/282fecde-c653-4c95-9cac-6bb7ca88090c-kube-api-access-7hqf9\") pod \"coredns-668d6bf9bc-vf7pb\" (UID: \"282fecde-c653-4c95-9cac-6bb7ca88090c\") " pod="kube-system/coredns-668d6bf9bc-vf7pb"
Nov 1 00:23:36.718522 kubelet[2069]: I1101 00:23:36.718543 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46rjg\" (UniqueName: \"kubernetes.io/projected/ed596e94-6c23-4470-9cbd-f661a6208f16-kube-api-access-46rjg\") pod \"coredns-668d6bf9bc-qbbgf\" (UID: \"ed596e94-6c23-4470-9cbd-f661a6208f16\") " pod="kube-system/coredns-668d6bf9bc-qbbgf"
Nov 1 00:23:36.718522 kubelet[2069]: I1101 00:23:36.718562 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed596e94-6c23-4470-9cbd-f661a6208f16-config-volume\") pod \"coredns-668d6bf9bc-qbbgf\" (UID: \"ed596e94-6c23-4470-9cbd-f661a6208f16\") " pod="kube-system/coredns-668d6bf9bc-qbbgf"
Nov 1 00:23:36.886808 kubelet[2069]: E1101 00:23:36.886758 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:36.887453 env[1322]: time="2025-11-01T00:23:36.887415402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf7pb,Uid:282fecde-c653-4c95-9cac-6bb7ca88090c,Namespace:kube-system,Attempt:0,}"
Nov 1 00:23:36.892629 kubelet[2069]: E1101 00:23:36.892588 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:36.893248 env[1322]: time="2025-11-01T00:23:36.893038611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qbbgf,Uid:ed596e94-6c23-4470-9cbd-f661a6208f16,Namespace:kube-system,Attempt:0,}"
Nov 1 00:23:36.905025 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Nov 1 00:23:37.383396 kubelet[2069]: E1101 00:23:37.383367 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:37.400556 kubelet[2069]: I1101 00:23:37.400496 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fbkk2" podStartSLOduration=8.019100109 podStartE2EDuration="15.400481984s" podCreationTimestamp="2025-11-01 00:23:22 +0000 UTC" firstStartedPulling="2025-11-01 00:23:24.991501608 +0000 UTC m=+7.792628775" lastFinishedPulling="2025-11-01 00:23:32.372883523 +0000 UTC m=+15.174010650" observedRunningTime="2025-11-01 00:23:37.399255153 +0000 UTC m=+20.200382320" watchObservedRunningTime="2025-11-01 00:23:37.400481984 +0000 UTC m=+20.201609151"
Nov 1 00:23:38.385050 kubelet[2069]: E1101 00:23:38.385020 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:38.524294 systemd-networkd[1099]: cilium_host: Link UP
Nov 1 00:23:38.526418 systemd-networkd[1099]: cilium_net: Link UP
Nov 1 00:23:38.526615 systemd-networkd[1099]: cilium_net: Gained carrier
Nov 1 00:23:38.526696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Nov 1 00:23:38.526736 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Nov 1 00:23:38.526753 systemd-networkd[1099]: cilium_host: Gained carrier
Nov 1 00:23:38.608106 systemd-networkd[1099]: cilium_vxlan: Link UP
Nov 1 00:23:38.608112 systemd-networkd[1099]: cilium_vxlan: Gained carrier
Nov 1 00:23:38.867013 kernel: NET: Registered PF_ALG protocol family
Nov 1 00:23:39.182103 systemd-networkd[1099]: cilium_net: Gained IPv6LL
Nov 1 00:23:39.386751 kubelet[2069]: E1101 00:23:39.386728 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:39.445947 systemd-networkd[1099]: lxc_health: Link UP
Nov 1 00:23:39.458477 systemd-networkd[1099]: lxc_health: Gained carrier
Nov 1 00:23:39.459006 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Nov 1 00:23:39.502102 systemd-networkd[1099]: cilium_host: Gained IPv6LL
Nov 1 00:23:39.941653 systemd-networkd[1099]: lxc7f896e8c7ce1: Link UP
Nov 1 00:23:39.949376 systemd-networkd[1099]: lxc94b9e9d2520c: Link UP
Nov 1 00:23:39.961027 kernel: eth0: renamed from tmp50900
Nov 1 00:23:39.970039 kernel: eth0: renamed from tmpca13c
Nov 1 00:23:39.978746 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc94b9e9d2520c: link becomes ready
Nov 1 00:23:39.978827 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Nov 1 00:23:39.978849 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7f896e8c7ce1: link becomes ready
Nov 1 00:23:39.980907 systemd-networkd[1099]: lxc94b9e9d2520c: Gained carrier
Nov 1 00:23:39.981108 systemd-networkd[1099]: lxc7f896e8c7ce1: Gained carrier
Nov 1 00:23:40.014414 systemd-networkd[1099]: cilium_vxlan: Gained IPv6LL
Nov 1 00:23:40.782356 systemd-networkd[1099]: lxc_health: Gained IPv6LL
Nov 1 00:23:40.930904 kubelet[2069]: E1101 00:23:40.930874 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:41.390411 kubelet[2069]: E1101 00:23:41.390374 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:41.742103 systemd-networkd[1099]: lxc94b9e9d2520c: Gained IPv6LL
Nov 1 00:23:41.806089 systemd-networkd[1099]: lxc7f896e8c7ce1: Gained IPv6LL
Nov 1 00:23:42.391833 kubelet[2069]: E1101 00:23:42.391801 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:43.494938 env[1322]: time="2025-11-01T00:23:43.494853327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:23:43.494938 env[1322]: time="2025-11-01T00:23:43.494843445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:23:43.494938 env[1322]: time="2025-11-01T00:23:43.494915221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:23:43.494938 env[1322]: time="2025-11-01T00:23:43.494926503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:23:43.495402 env[1322]: time="2025-11-01T00:23:43.494957390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:23:43.495402 env[1322]: time="2025-11-01T00:23:43.494998999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:23:43.495402 env[1322]: time="2025-11-01T00:23:43.495142351Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/50900b3086acc3dd9e9da4dbfdb59aa0eb10c5b3b5093b4fbd291f5955016b26 pid=3298 runtime=io.containerd.runc.v2
Nov 1 00:23:43.495402 env[1322]: time="2025-11-01T00:23:43.495250054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca13cb0b866c94e242398a6cd5056ca26c0711097f1e1ac4742a8078d4d362eb pid=3300 runtime=io.containerd.runc.v2
Nov 1 00:23:43.515208 systemd[1]: run-containerd-runc-k8s.io-ca13cb0b866c94e242398a6cd5056ca26c0711097f1e1ac4742a8078d4d362eb-runc.wRDfFR.mount: Deactivated successfully.
Nov 1 00:23:43.531696 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 1 00:23:43.538066 systemd-resolved[1238]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 1 00:23:43.552014 env[1322]: time="2025-11-01T00:23:43.551776782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vf7pb,Uid:282fecde-c653-4c95-9cac-6bb7ca88090c,Namespace:kube-system,Attempt:0,} returns sandbox id \"50900b3086acc3dd9e9da4dbfdb59aa0eb10c5b3b5093b4fbd291f5955016b26\""
Nov 1 00:23:43.553089 kubelet[2069]: E1101 00:23:43.552390 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:43.555855 env[1322]: time="2025-11-01T00:23:43.555802346Z" level=info msg="CreateContainer within sandbox \"50900b3086acc3dd9e9da4dbfdb59aa0eb10c5b3b5093b4fbd291f5955016b26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:23:43.561985 env[1322]: time="2025-11-01T00:23:43.561940453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qbbgf,Uid:ed596e94-6c23-4470-9cbd-f661a6208f16,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca13cb0b866c94e242398a6cd5056ca26c0711097f1e1ac4742a8078d4d362eb\""
Nov 1 00:23:43.562951 kubelet[2069]: E1101 00:23:43.562898 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:43.566733 env[1322]: time="2025-11-01T00:23:43.566688295Z" level=info msg="CreateContainer within sandbox \"ca13cb0b866c94e242398a6cd5056ca26c0711097f1e1ac4742a8078d4d362eb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 1 00:23:43.583768 env[1322]: time="2025-11-01T00:23:43.583708071Z" level=info msg="CreateContainer within sandbox \"50900b3086acc3dd9e9da4dbfdb59aa0eb10c5b3b5093b4fbd291f5955016b26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb7750d92f4baa873edc3c60c43ee22622ee77fd7a8b37a3973cd6828d3feb26\""
Nov 1 00:23:43.586734 env[1322]: time="2025-11-01T00:23:43.586441231Z" level=info msg="StartContainer for \"bb7750d92f4baa873edc3c60c43ee22622ee77fd7a8b37a3973cd6828d3feb26\""
Nov 1 00:23:43.589626 env[1322]: time="2025-11-01T00:23:43.589590522Z" level=info msg="CreateContainer within sandbox \"ca13cb0b866c94e242398a6cd5056ca26c0711097f1e1ac4742a8078d4d362eb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82905b2caa695b2a5ec119a15ca379d839bb3e3c620161166747f0feb37c8398\""
Nov 1 00:23:43.591054 env[1322]: time="2025-11-01T00:23:43.590359531Z" level=info msg="StartContainer for \"82905b2caa695b2a5ec119a15ca379d839bb3e3c620161166747f0feb37c8398\""
Nov 1 00:23:43.671075 env[1322]: time="2025-11-01T00:23:43.671016315Z" level=info msg="StartContainer for \"82905b2caa695b2a5ec119a15ca379d839bb3e3c620161166747f0feb37c8398\" returns successfully"
Nov 1 00:23:43.672286 env[1322]: time="2025-11-01T00:23:43.671598483Z" level=info msg="StartContainer for \"bb7750d92f4baa873edc3c60c43ee22622ee77fd7a8b37a3973cd6828d3feb26\" returns successfully"
Nov 1 00:23:44.284902 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:48984.service.
Nov 1 00:23:44.329174 sshd[3442]: Accepted publickey for core from 10.0.0.1 port 48984 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:23:44.330926 sshd[3442]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:23:44.335162 systemd-logind[1305]: New session 6 of user core.
Nov 1 00:23:44.335551 systemd[1]: Started session-6.scope.
Nov 1 00:23:44.399600 kubelet[2069]: E1101 00:23:44.399380 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:44.401354 kubelet[2069]: E1101 00:23:44.401325 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:44.410806 kubelet[2069]: I1101 00:23:44.410739 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qbbgf" podStartSLOduration=21.410725625 podStartE2EDuration="21.410725625s" podCreationTimestamp="2025-11-01 00:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:44.410495937 +0000 UTC m=+27.211623064" watchObservedRunningTime="2025-11-01 00:23:44.410725625 +0000 UTC m=+27.211852752"
Nov 1 00:23:44.491636 sshd[3442]: pam_unix(sshd:session): session closed for user core
Nov 1 00:23:44.494229 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:48984.service: Deactivated successfully.
Nov 1 00:23:44.495151 systemd-logind[1305]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:23:44.495216 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:23:44.495884 systemd-logind[1305]: Removed session 6.
Nov 1 00:23:45.403487 kubelet[2069]: E1101 00:23:45.403444 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:45.404136 kubelet[2069]: E1101 00:23:45.404034 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:46.405100 kubelet[2069]: E1101 00:23:46.405019 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:46.405100 kubelet[2069]: E1101 00:23:46.405099 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:23:49.495959 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:56252.service.
Nov 1 00:23:49.537450 sshd[3463]: Accepted publickey for core from 10.0.0.1 port 56252 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:23:49.539062 sshd[3463]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:23:49.543017 systemd-logind[1305]: New session 7 of user core.
Nov 1 00:23:49.543530 systemd[1]: Started session-7.scope.
Nov 1 00:23:49.653861 sshd[3463]: pam_unix(sshd:session): session closed for user core
Nov 1 00:23:49.656136 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:56252.service: Deactivated successfully.
Nov 1 00:23:49.657060 systemd-logind[1305]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:23:49.657128 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:23:49.658135 systemd-logind[1305]: Removed session 7.
Nov 1 00:23:54.657475 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:56256.service.
Nov 1 00:23:54.703736 sshd[3479]: Accepted publickey for core from 10.0.0.1 port 56256 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:23:54.705440 sshd[3479]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:23:54.708681 systemd-logind[1305]: New session 8 of user core.
Nov 1 00:23:54.709565 systemd[1]: Started session-8.scope.
Nov 1 00:23:54.817162 sshd[3479]: pam_unix(sshd:session): session closed for user core
Nov 1 00:23:54.819404 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:56256.service: Deactivated successfully.
Nov 1 00:23:54.820355 systemd-logind[1305]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:23:54.820408 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:23:54.821095 systemd-logind[1305]: Removed session 8.
Nov 1 00:23:59.820961 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:33048.service.
Nov 1 00:23:59.862700 sshd[3496]: Accepted publickey for core from 10.0.0.1 port 33048 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:23:59.863505 sshd[3496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:23:59.871018 systemd-logind[1305]: New session 9 of user core.
Nov 1 00:23:59.871287 systemd[1]: Started session-9.scope.
Nov 1 00:24:00.002162 sshd[3496]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:00.004375 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:33060.service.
Nov 1 00:24:00.007055 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:33048.service: Deactivated successfully.
Nov 1 00:24:00.008111 systemd-logind[1305]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:24:00.008167 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:24:00.009287 systemd-logind[1305]: Removed session 9.
Nov 1 00:24:00.051941 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 33060 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:00.053642 sshd[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:00.056943 systemd-logind[1305]: New session 10 of user core.
Nov 1 00:24:00.057805 systemd[1]: Started session-10.scope.
Nov 1 00:24:00.204166 sshd[3510]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:00.206551 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:33072.service.
Nov 1 00:24:00.214320 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:33060.service: Deactivated successfully.
Nov 1 00:24:00.215640 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 00:24:00.224127 systemd-logind[1305]: Session 10 logged out. Waiting for processes to exit.
Nov 1 00:24:00.227165 systemd-logind[1305]: Removed session 10.
Nov 1 00:24:00.253779 sshd[3522]: Accepted publickey for core from 10.0.0.1 port 33072 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:00.255164 sshd[3522]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:00.259013 systemd-logind[1305]: New session 11 of user core.
Nov 1 00:24:00.259483 systemd[1]: Started session-11.scope.
Nov 1 00:24:00.369236 sshd[3522]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:00.371747 systemd-logind[1305]: Session 11 logged out. Waiting for processes to exit.
Nov 1 00:24:00.371875 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:33072.service: Deactivated successfully.
Nov 1 00:24:00.372807 systemd[1]: session-11.scope: Deactivated successfully.
Nov 1 00:24:00.373254 systemd-logind[1305]: Removed session 11.
Nov 1 00:24:05.372890 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:33074.service.
Nov 1 00:24:05.414514 sshd[3540]: Accepted publickey for core from 10.0.0.1 port 33074 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:05.415815 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:05.419742 systemd-logind[1305]: New session 12 of user core.
Nov 1 00:24:05.420270 systemd[1]: Started session-12.scope.
Nov 1 00:24:05.532940 sshd[3540]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:05.535255 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:33074.service: Deactivated successfully.
Nov 1 00:24:05.536368 systemd-logind[1305]: Session 12 logged out. Waiting for processes to exit.
Nov 1 00:24:05.536372 systemd[1]: session-12.scope: Deactivated successfully.
Nov 1 00:24:05.537376 systemd-logind[1305]: Removed session 12.
Nov 1 00:24:10.536222 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:56892.service.
Nov 1 00:24:10.577601 sshd[3555]: Accepted publickey for core from 10.0.0.1 port 56892 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:10.579096 sshd[3555]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:10.583811 systemd[1]: Started session-13.scope.
Nov 1 00:24:10.584388 systemd-logind[1305]: New session 13 of user core.
Nov 1 00:24:10.710276 sshd[3555]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:10.713077 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:56908.service.
Nov 1 00:24:10.713628 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:56892.service: Deactivated successfully.
Nov 1 00:24:10.714701 systemd-logind[1305]: Session 13 logged out. Waiting for processes to exit.
Nov 1 00:24:10.714711 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 00:24:10.715966 systemd-logind[1305]: Removed session 13.
Nov 1 00:24:10.755403 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 56908 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:10.756580 sshd[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:10.760066 systemd-logind[1305]: New session 14 of user core.
Nov 1 00:24:10.761126 systemd[1]: Started session-14.scope.
Nov 1 00:24:10.940097 sshd[3568]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:10.941110 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:56914.service.
Nov 1 00:24:10.943536 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:56908.service: Deactivated successfully.
Nov 1 00:24:10.944642 systemd-logind[1305]: Session 14 logged out. Waiting for processes to exit.
Nov 1 00:24:10.944699 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 00:24:10.945354 systemd-logind[1305]: Removed session 14.
Nov 1 00:24:10.987484 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 56914 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:10.988744 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:10.993951 systemd-logind[1305]: New session 15 of user core.
Nov 1 00:24:10.994792 systemd[1]: Started session-15.scope.
Nov 1 00:24:11.530686 sshd[3579]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:11.532110 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:56916.service.
Nov 1 00:24:11.534294 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:56914.service: Deactivated successfully.
Nov 1 00:24:11.535315 systemd-logind[1305]: Session 15 logged out. Waiting for processes to exit.
Nov 1 00:24:11.535428 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 00:24:11.539450 systemd-logind[1305]: Removed session 15.
Nov 1 00:24:11.589858 sshd[3597]: Accepted publickey for core from 10.0.0.1 port 56916 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:11.591224 sshd[3597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:11.594775 systemd-logind[1305]: New session 16 of user core.
Nov 1 00:24:11.595582 systemd[1]: Started session-16.scope.
Nov 1 00:24:11.878198 sshd[3597]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:11.879293 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:56920.service.
Nov 1 00:24:11.881650 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:56916.service: Deactivated successfully.
Nov 1 00:24:11.882633 systemd-logind[1305]: Session 16 logged out. Waiting for processes to exit.
Nov 1 00:24:11.882688 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 00:24:11.883402 systemd-logind[1305]: Removed session 16.
Nov 1 00:24:11.922039 sshd[3612]: Accepted publickey for core from 10.0.0.1 port 56920 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:11.923355 sshd[3612]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:11.928383 systemd-logind[1305]: New session 17 of user core.
Nov 1 00:24:11.929378 systemd[1]: Started session-17.scope.
Nov 1 00:24:12.040728 sshd[3612]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:12.043255 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:56920.service: Deactivated successfully.
Nov 1 00:24:12.044255 systemd-logind[1305]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:24:12.044309 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:24:12.044934 systemd-logind[1305]: Removed session 17.
Nov 1 00:24:17.044718 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:56926.service.
Nov 1 00:24:17.087319 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 56926 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:17.088588 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:17.092673 systemd-logind[1305]: New session 18 of user core.
Nov 1 00:24:17.092911 systemd[1]: Started session-18.scope.
Nov 1 00:24:17.200702 sshd[3631]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:17.203187 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:56926.service: Deactivated successfully.
Nov 1 00:24:17.204117 systemd-logind[1305]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:24:17.204174 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:24:17.204882 systemd-logind[1305]: Removed session 18.
Nov 1 00:24:22.204028 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:57940.service.
Nov 1 00:24:22.246092 sshd[3648]: Accepted publickey for core from 10.0.0.1 port 57940 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:22.247396 sshd[3648]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:22.251346 systemd-logind[1305]: New session 19 of user core.
Nov 1 00:24:22.252253 systemd[1]: Started session-19.scope.
Nov 1 00:24:22.365597 sshd[3648]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:22.367858 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:57940.service: Deactivated successfully.
Nov 1 00:24:22.368855 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:24:22.369211 systemd-logind[1305]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:24:22.370021 systemd-logind[1305]: Removed session 19.
Nov 1 00:24:27.368601 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:57950.service.
Nov 1 00:24:27.417554 sshd[3664]: Accepted publickey for core from 10.0.0.1 port 57950 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:27.418884 sshd[3664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:27.422358 systemd-logind[1305]: New session 20 of user core.
Nov 1 00:24:27.423168 systemd[1]: Started session-20.scope.
Nov 1 00:24:27.557284 sshd[3664]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:27.559695 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:57964.service.
Nov 1 00:24:27.560232 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:57950.service: Deactivated successfully.
Nov 1 00:24:27.561235 systemd-logind[1305]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:24:27.561394 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:24:27.562114 systemd-logind[1305]: Removed session 20.
Nov 1 00:24:27.601259 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 57964 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4
Nov 1 00:24:27.602455 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Nov 1 00:24:27.605584 systemd-logind[1305]: New session 21 of user core.
Nov 1 00:24:27.606387 systemd[1]: Started session-21.scope.
Nov 1 00:24:30.314744 kubelet[2069]: I1101 00:24:30.314682 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vf7pb" podStartSLOduration=67.314664819 podStartE2EDuration="1m7.314664819s" podCreationTimestamp="2025-11-01 00:23:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:44.434025094 +0000 UTC m=+27.235152301" watchObservedRunningTime="2025-11-01 00:24:30.314664819 +0000 UTC m=+73.115791946"
Nov 1 00:24:30.324954 env[1322]: time="2025-11-01T00:24:30.324554381Z" level=info msg="StopContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" with timeout 30 (s)"
Nov 1 00:24:30.325994 env[1322]: time="2025-11-01T00:24:30.325956450Z" level=info msg="Stop container \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" with signal terminated"
Nov 1 00:24:30.345448 systemd[1]: run-containerd-runc-k8s.io-ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748-runc.GG8BF8.mount: Deactivated successfully.
Nov 1 00:24:30.366443 env[1322]: time="2025-11-01T00:24:30.366380880Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 00:24:30.370015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188-rootfs.mount: Deactivated successfully.
Nov 1 00:24:30.374501 env[1322]: time="2025-11-01T00:24:30.374469438Z" level=info msg="StopContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" with timeout 2 (s)" Nov 1 00:24:30.375033 env[1322]: time="2025-11-01T00:24:30.375009043Z" level=info msg="Stop container \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" with signal terminated" Nov 1 00:24:30.377428 env[1322]: time="2025-11-01T00:24:30.377388329Z" level=info msg="shim disconnected" id=c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188 Nov 1 00:24:30.377428 env[1322]: time="2025-11-01T00:24:30.377421847Z" level=warning msg="cleaning up after shim disconnected" id=c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188 namespace=k8s.io Nov 1 00:24:30.377428 env[1322]: time="2025-11-01T00:24:30.377431647Z" level=info msg="cleaning up dead shim" Nov 1 00:24:30.382686 systemd-networkd[1099]: lxc_health: Link DOWN Nov 1 00:24:30.382696 systemd-networkd[1099]: lxc_health: Lost carrier Nov 1 00:24:30.384773 env[1322]: time="2025-11-01T00:24:30.384715656Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3727 runtime=io.containerd.runc.v2\n" Nov 1 00:24:30.386867 env[1322]: time="2025-11-01T00:24:30.386830600Z" level=info msg="StopContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" returns successfully" Nov 1 00:24:30.387554 env[1322]: time="2025-11-01T00:24:30.387527235Z" level=info msg="StopPodSandbox for \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\"" Nov 1 00:24:30.387714 env[1322]: time="2025-11-01T00:24:30.387588391Z" level=info msg="Container to stop \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.389392 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725-shm.mount: Deactivated successfully. Nov 1 00:24:30.422771 env[1322]: time="2025-11-01T00:24:30.422710083Z" level=info msg="shim disconnected" id=c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725 Nov 1 00:24:30.422771 env[1322]: time="2025-11-01T00:24:30.422771159Z" level=warning msg="cleaning up after shim disconnected" id=c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725 namespace=k8s.io Nov 1 00:24:30.422966 env[1322]: time="2025-11-01T00:24:30.422782518Z" level=info msg="cleaning up dead shim" Nov 1 00:24:30.429733 env[1322]: time="2025-11-01T00:24:30.429686633Z" level=info msg="shim disconnected" id=ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748 Nov 1 00:24:30.430035 env[1322]: time="2025-11-01T00:24:30.430014851Z" level=warning msg="cleaning up after shim disconnected" id=ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748 namespace=k8s.io Nov 1 00:24:30.430127 env[1322]: time="2025-11-01T00:24:30.430110685Z" level=info msg="cleaning up dead shim" Nov 1 00:24:30.431626 env[1322]: time="2025-11-01T00:24:30.431596269Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3780 runtime=io.containerd.runc.v2\n" Nov 1 00:24:30.431932 env[1322]: time="2025-11-01T00:24:30.431908289Z" level=info msg="TearDown network for sandbox \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\" successfully" Nov 1 00:24:30.431990 env[1322]: time="2025-11-01T00:24:30.431933728Z" level=info msg="StopPodSandbox for \"c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725\" returns successfully" Nov 1 00:24:30.437038 env[1322]: time="2025-11-01T00:24:30.437010160Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3793 
runtime=io.containerd.runc.v2\n" Nov 1 00:24:30.439000 env[1322]: time="2025-11-01T00:24:30.438955994Z" level=info msg="StopContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" returns successfully" Nov 1 00:24:30.440815 env[1322]: time="2025-11-01T00:24:30.440789076Z" level=info msg="StopPodSandbox for \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\"" Nov 1 00:24:30.440957 env[1322]: time="2025-11-01T00:24:30.440933906Z" level=info msg="Container to stop \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.441063 env[1322]: time="2025-11-01T00:24:30.441044339Z" level=info msg="Container to stop \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.441129 env[1322]: time="2025-11-01T00:24:30.441113135Z" level=info msg="Container to stop \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.441192 env[1322]: time="2025-11-01T00:24:30.441175451Z" level=info msg="Container to stop \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.441265 env[1322]: time="2025-11-01T00:24:30.441248246Z" level=info msg="Container to stop \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:30.462267 env[1322]: time="2025-11-01T00:24:30.462101300Z" level=info msg="shim disconnected" id=30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb Nov 1 00:24:30.462538 env[1322]: time="2025-11-01T00:24:30.462516233Z" level=warning msg="cleaning up after shim disconnected" 
id=30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb namespace=k8s.io Nov 1 00:24:30.462604 env[1322]: time="2025-11-01T00:24:30.462591108Z" level=info msg="cleaning up dead shim" Nov 1 00:24:30.469331 env[1322]: time="2025-11-01T00:24:30.469298955Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3825 runtime=io.containerd.runc.v2\n" Nov 1 00:24:30.470087 env[1322]: time="2025-11-01T00:24:30.470040947Z" level=info msg="TearDown network for sandbox \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" successfully" Nov 1 00:24:30.470209 env[1322]: time="2025-11-01T00:24:30.470189257Z" level=info msg="StopPodSandbox for \"30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb\" returns successfully" Nov 1 00:24:30.495968 kubelet[2069]: I1101 00:24:30.495937 2069 scope.go:117] "RemoveContainer" containerID="c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188" Nov 1 00:24:30.497300 env[1322]: time="2025-11-01T00:24:30.497253790Z" level=info msg="RemoveContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\"" Nov 1 00:24:30.503189 env[1322]: time="2025-11-01T00:24:30.503155129Z" level=info msg="RemoveContainer for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" returns successfully" Nov 1 00:24:30.503400 kubelet[2069]: I1101 00:24:30.503377 2069 scope.go:117] "RemoveContainer" containerID="c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188" Nov 1 00:24:30.503655 env[1322]: time="2025-11-01T00:24:30.503584941Z" level=error msg="ContainerStatus for \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\": not found" Nov 1 00:24:30.503936 kubelet[2069]: E1101 00:24:30.503846 2069 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\": not found" containerID="c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188" Nov 1 00:24:30.506273 kubelet[2069]: I1101 00:24:30.506086 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188"} err="failed to get container status \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\": rpc error: code = NotFound desc = an error occurred when try to find container \"c59dfd1001242291f3b651c9a8a9ae1d531009199a6139dfa8d5e1e2d1176188\": not found" Nov 1 00:24:30.506273 kubelet[2069]: I1101 00:24:30.506186 2069 scope.go:117] "RemoveContainer" containerID="ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748" Nov 1 00:24:30.507383 env[1322]: time="2025-11-01T00:24:30.507351458Z" level=info msg="RemoveContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\"" Nov 1 00:24:30.509755 env[1322]: time="2025-11-01T00:24:30.509720345Z" level=info msg="RemoveContainer for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" returns successfully" Nov 1 00:24:30.509912 kubelet[2069]: I1101 00:24:30.509895 2069 scope.go:117] "RemoveContainer" containerID="b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212" Nov 1 00:24:30.510887 env[1322]: time="2025-11-01T00:24:30.510861191Z" level=info msg="RemoveContainer for \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\"" Nov 1 00:24:30.513389 env[1322]: time="2025-11-01T00:24:30.513346551Z" level=info msg="RemoveContainer for \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\" returns successfully" Nov 1 00:24:30.513519 kubelet[2069]: I1101 00:24:30.513499 2069 scope.go:117] "RemoveContainer" 
containerID="981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe" Nov 1 00:24:30.514385 env[1322]: time="2025-11-01T00:24:30.514360685Z" level=info msg="RemoveContainer for \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\"" Nov 1 00:24:30.516675 env[1322]: time="2025-11-01T00:24:30.516641258Z" level=info msg="RemoveContainer for \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\" returns successfully" Nov 1 00:24:30.516933 kubelet[2069]: I1101 00:24:30.516911 2069 scope.go:117] "RemoveContainer" containerID="3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118" Nov 1 00:24:30.517828 env[1322]: time="2025-11-01T00:24:30.517801543Z" level=info msg="RemoveContainer for \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\"" Nov 1 00:24:30.520258 env[1322]: time="2025-11-01T00:24:30.520224627Z" level=info msg="RemoveContainer for \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\" returns successfully" Nov 1 00:24:30.520398 kubelet[2069]: I1101 00:24:30.520382 2069 scope.go:117] "RemoveContainer" containerID="d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76" Nov 1 00:24:30.521253 env[1322]: time="2025-11-01T00:24:30.521228042Z" level=info msg="RemoveContainer for \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\"" Nov 1 00:24:30.527654 kubelet[2069]: I1101 00:24:30.527598 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-run\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527755 kubelet[2069]: I1101 00:24:30.527675 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-bpf-maps\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: 
\"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527755 kubelet[2069]: I1101 00:24:30.527702 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2g2j\" (UniqueName: \"kubernetes.io/projected/71b0b001-fd1d-49e0-a0bb-46b1911fa452-kube-api-access-x2g2j\") pod \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\" (UID: \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\") " Nov 1 00:24:30.527755 kubelet[2069]: I1101 00:24:30.527751 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71b0b001-fd1d-49e0-a0bb-46b1911fa452-cilium-config-path\") pod \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\" (UID: \"71b0b001-fd1d-49e0-a0bb-46b1911fa452\") " Nov 1 00:24:30.527836 kubelet[2069]: I1101 00:24:30.527775 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-net\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527836 kubelet[2069]: I1101 00:24:30.527791 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-kernel\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527836 kubelet[2069]: I1101 00:24:30.527806 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hostproc\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527836 kubelet[2069]: I1101 00:24:30.527820 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-cgroup\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527836 kubelet[2069]: I1101 00:24:30.527835 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-etc-cni-netd\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527945 kubelet[2069]: I1101 00:24:30.527852 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-config-path\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527945 kubelet[2069]: I1101 00:24:30.527872 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-clustermesh-secrets\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527945 kubelet[2069]: I1101 00:24:30.527889 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f44d\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527945 kubelet[2069]: I1101 00:24:30.527927 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-xtables-lock\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.527945 kubelet[2069]: I1101 
00:24:30.527942 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cni-path\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.528173 kubelet[2069]: I1101 00:24:30.527957 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-lib-modules\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.528173 kubelet[2069]: I1101 00:24:30.527973 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hubble-tls\") pod \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\" (UID: \"bf21b7b0-ffe4-4d30-86bd-6e21036bc37c\") " Nov 1 00:24:30.529530 env[1322]: time="2025-11-01T00:24:30.529493228Z" level=info msg="RemoveContainer for \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\" returns successfully" Nov 1 00:24:30.531832 kubelet[2069]: I1101 00:24:30.530014 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.531832 kubelet[2069]: I1101 00:24:30.530063 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.531832 kubelet[2069]: I1101 00:24:30.530071 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.531832 kubelet[2069]: I1101 00:24:30.530092 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.531832 kubelet[2069]: I1101 00:24:30.530098 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cni-path" (OuterVolumeSpecName: "cni-path") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532037 kubelet[2069]: I1101 00:24:30.530108 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hostproc" (OuterVolumeSpecName: "hostproc") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532037 kubelet[2069]: I1101 00:24:30.530116 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532037 kubelet[2069]: I1101 00:24:30.530141 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532037 kubelet[2069]: I1101 00:24:30.530156 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532037 kubelet[2069]: I1101 00:24:30.530196 2069 scope.go:117] "RemoveContainer" containerID="ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748" Nov 1 00:24:30.532155 kubelet[2069]: I1101 00:24:30.531058 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71b0b001-fd1d-49e0-a0bb-46b1911fa452-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "71b0b001-fd1d-49e0-a0bb-46b1911fa452" (UID: "71b0b001-fd1d-49e0-a0bb-46b1911fa452"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:30.532155 kubelet[2069]: I1101 00:24:30.531155 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:30.532155 kubelet[2069]: I1101 00:24:30.531767 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:30.532317 kubelet[2069]: I1101 00:24:30.532286 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:30.532559 kubelet[2069]: I1101 00:24:30.532533 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d" (OuterVolumeSpecName: "kube-api-access-5f44d") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "kube-api-access-5f44d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:30.532868 env[1322]: time="2025-11-01T00:24:30.532809934Z" level=error msg="ContainerStatus for \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\": not found" Nov 1 00:24:30.533062 kubelet[2069]: E1101 00:24:30.533037 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\": not found" containerID="ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748" Nov 1 00:24:30.533136 kubelet[2069]: I1101 00:24:30.533064 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748"} err="failed to get container status \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\": rpc error: code = NotFound desc = an error occurred when try to find container \"ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748\": not found" Nov 1 00:24:30.533136 kubelet[2069]: I1101 00:24:30.533091 2069 scope.go:117] "RemoveContainer" containerID="b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212" Nov 1 00:24:30.533431 env[1322]: time="2025-11-01T00:24:30.533381857Z" level=error msg="ContainerStatus for \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\": not found" Nov 1 00:24:30.534012 kubelet[2069]: E1101 00:24:30.533972 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\": not found" containerID="b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212" Nov 1 00:24:30.534085 kubelet[2069]: I1101 00:24:30.534013 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212"} err="failed to get container status \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\": rpc error: code = NotFound desc = an error occurred when try to find container \"b8771bfca81337b6a8c6f99a400e5bb9d6e954448183854446273f61b566d212\": not found" Nov 1 00:24:30.534085 kubelet[2069]: I1101 00:24:30.534031 2069 scope.go:117] "RemoveContainer" containerID="981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe" Nov 1 00:24:30.534266 env[1322]: time="2025-11-01T00:24:30.534209724Z" level=error msg="ContainerStatus for \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\": not found" Nov 1 00:24:30.534325 kubelet[2069]: I1101 00:24:30.534240 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" (UID: "bf21b7b0-ffe4-4d30-86bd-6e21036bc37c"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:30.534585 kubelet[2069]: E1101 00:24:30.534360 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\": not found" containerID="981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe" Nov 1 00:24:30.534698 kubelet[2069]: I1101 00:24:30.534589 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe"} err="failed to get container status \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"981138e25b71f4b3f4229179f83c9d9e2aacde4ecc1436249157f2dc2918f9fe\": not found" Nov 1 00:24:30.534698 kubelet[2069]: I1101 00:24:30.534608 2069 scope.go:117] "RemoveContainer" containerID="3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118" Nov 1 00:24:30.534766 kubelet[2069]: I1101 00:24:30.534511 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71b0b001-fd1d-49e0-a0bb-46b1911fa452-kube-api-access-x2g2j" (OuterVolumeSpecName: "kube-api-access-x2g2j") pod "71b0b001-fd1d-49e0-a0bb-46b1911fa452" (UID: "71b0b001-fd1d-49e0-a0bb-46b1911fa452"). InnerVolumeSpecName "kube-api-access-x2g2j". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:30.534950 env[1322]: time="2025-11-01T00:24:30.534903199Z" level=error msg="ContainerStatus for \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\": not found" Nov 1 00:24:30.535131 kubelet[2069]: E1101 00:24:30.535109 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\": not found" containerID="3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118" Nov 1 00:24:30.535198 kubelet[2069]: I1101 00:24:30.535136 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118"} err="failed to get container status \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\": rpc error: code = NotFound desc = an error occurred when try to find container \"3d963f417c679f78ce731199664c30f49a912fb93923f0e2e046bfd29aaf9118\": not found" Nov 1 00:24:30.535198 kubelet[2069]: I1101 00:24:30.535154 2069 scope.go:117] "RemoveContainer" containerID="d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76" Nov 1 00:24:30.535363 env[1322]: time="2025-11-01T00:24:30.535316932Z" level=error msg="ContainerStatus for \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\": not found" Nov 1 00:24:30.535460 kubelet[2069]: E1101 00:24:30.535431 2069 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\": not found" containerID="d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76" Nov 1 00:24:30.535543 kubelet[2069]: I1101 00:24:30.535460 2069 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76"} err="failed to get container status \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\": rpc error: code = NotFound desc = an error occurred when try to find container \"d571eaa407d537839e40091926fcd96553ec162e939e3bb3015859a040b30a76\": not found" Nov 1 00:24:30.629111 kubelet[2069]: I1101 00:24:30.629009 2069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629268 kubelet[2069]: I1101 00:24:30.629256 2069 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629335 kubelet[2069]: I1101 00:24:30.629324 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629391 kubelet[2069]: I1101 00:24:30.629381 2069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629455 kubelet[2069]: I1101 00:24:30.629446 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-config-path\") on node 
\"localhost\" DevicePath \"\"" Nov 1 00:24:30.629522 kubelet[2069]: I1101 00:24:30.629512 2069 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629578 kubelet[2069]: I1101 00:24:30.629568 2069 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629634 kubelet[2069]: I1101 00:24:30.629622 2069 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5f44d\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-kube-api-access-5f44d\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629687 kubelet[2069]: I1101 00:24:30.629678 2069 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629760 kubelet[2069]: I1101 00:24:30.629748 2069 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629834 kubelet[2069]: I1101 00:24:30.629819 2069 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629894 kubelet[2069]: I1101 00:24:30.629884 2069 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.629948 kubelet[2069]: I1101 00:24:30.629939 2069 reconciler_common.go:299] 
"Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.630028 kubelet[2069]: I1101 00:24:30.630017 2069 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x2g2j\" (UniqueName: \"kubernetes.io/projected/71b0b001-fd1d-49e0-a0bb-46b1911fa452-kube-api-access-x2g2j\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.630096 kubelet[2069]: I1101 00:24:30.630086 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/71b0b001-fd1d-49e0-a0bb-46b1911fa452-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:30.630156 kubelet[2069]: I1101 00:24:30.630146 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:31.313398 kubelet[2069]: I1101 00:24:31.313366 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71b0b001-fd1d-49e0-a0bb-46b1911fa452" path="/var/lib/kubelet/pods/71b0b001-fd1d-49e0-a0bb-46b1911fa452/volumes" Nov 1 00:24:31.313947 kubelet[2069]: I1101 00:24:31.313929 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" path="/var/lib/kubelet/pods/bf21b7b0-ffe4-4d30-86bd-6e21036bc37c/volumes" Nov 1 00:24:31.338377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce6820ebc694000193f0f9befa563a5ebb5fa910357e4f7021580b6054745748-rootfs.mount: Deactivated successfully. Nov 1 00:24:31.338519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb-rootfs.mount: Deactivated successfully. 
Nov 1 00:24:31.338602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30a2551a781ab6cd9ef6468236e252a408ddf28876c5eea3feaca3c54f0887cb-shm.mount: Deactivated successfully. Nov 1 00:24:31.338691 systemd[1]: var-lib-kubelet-pods-bf21b7b0\x2dffe4\x2d4d30\x2d86bd\x2d6e21036bc37c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5f44d.mount: Deactivated successfully. Nov 1 00:24:31.338778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c26d772c7149e470b1f5b8f5490e9f34ae86d8c673aa07549f612c1b9e6b7725-rootfs.mount: Deactivated successfully. Nov 1 00:24:31.338851 systemd[1]: var-lib-kubelet-pods-71b0b001\x2dfd1d\x2d49e0\x2da0bb\x2d46b1911fa452-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx2g2j.mount: Deactivated successfully. Nov 1 00:24:31.338926 systemd[1]: var-lib-kubelet-pods-bf21b7b0\x2dffe4\x2d4d30\x2d86bd\x2d6e21036bc37c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:24:31.339016 systemd[1]: var-lib-kubelet-pods-bf21b7b0\x2dffe4\x2d4d30\x2d86bd\x2d6e21036bc37c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:24:32.277311 sshd[3677]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:32.279611 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:57484.service. Nov 1 00:24:32.280166 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:57964.service: Deactivated successfully. Nov 1 00:24:32.281012 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:24:32.282585 systemd-logind[1305]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:24:32.283844 systemd-logind[1305]: Removed session 21. 
Nov 1 00:24:32.326380 sshd[3841]: Accepted publickey for core from 10.0.0.1 port 57484 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:32.327694 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:32.331748 systemd-logind[1305]: New session 22 of user core. Nov 1 00:24:32.332345 systemd[1]: Started session-22.scope. Nov 1 00:24:32.365588 kubelet[2069]: E1101 00:24:32.365550 2069 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:24:33.599568 kubelet[2069]: I1101 00:24:33.599510 2069 memory_manager.go:355] "RemoveStaleState removing state" podUID="71b0b001-fd1d-49e0-a0bb-46b1911fa452" containerName="cilium-operator" Nov 1 00:24:33.599568 kubelet[2069]: I1101 00:24:33.599549 2069 memory_manager.go:355] "RemoveStaleState removing state" podUID="bf21b7b0-ffe4-4d30-86bd-6e21036bc37c" containerName="cilium-agent" Nov 1 00:24:33.604066 sshd[3841]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:33.606570 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:57494.service. Nov 1 00:24:33.611356 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:57484.service: Deactivated successfully. Nov 1 00:24:33.615267 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:24:33.616941 systemd-logind[1305]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:24:33.623478 systemd-logind[1305]: Removed session 22. 
Nov 1 00:24:33.648023 kubelet[2069]: I1101 00:24:33.647974 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-bpf-maps\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648023 kubelet[2069]: I1101 00:24:33.648026 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-xtables-lock\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648050 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/953b1464-7010-466a-9eca-33271fc5120a-cilium-config-path\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648067 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-run\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648084 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cni-path\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648102 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-net\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648119 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tpf\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-kube-api-access-77tpf\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648264 kubelet[2069]: I1101 00:24:33.648136 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-etc-cni-netd\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648153 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-cilium-ipsec-secrets\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648169 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-cgroup\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648184 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-lib-modules\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648198 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-hostproc\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648214 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-kernel\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648395 kubelet[2069]: I1101 00:24:33.648230 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-clustermesh-secrets\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.648552 kubelet[2069]: I1101 00:24:33.648246 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-hubble-tls\") pod \"cilium-99mgd\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " pod="kube-system/cilium-99mgd" Nov 1 00:24:33.663609 sshd[3854]: Accepted publickey for core from 10.0.0.1 port 57494 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:33.665026 sshd[3854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:33.670580 systemd-logind[1305]: New session 23 of user 
core. Nov 1 00:24:33.670935 systemd[1]: Started session-23.scope. Nov 1 00:24:33.819993 sshd[3854]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:33.820591 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:57500.service. Nov 1 00:24:33.833544 kubelet[2069]: E1101 00:24:33.828254 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:33.833748 env[1322]: time="2025-11-01T00:24:33.832417927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99mgd,Uid:953b1464-7010-466a-9eca-33271fc5120a,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:33.828763 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:57494.service: Deactivated successfully. Nov 1 00:24:33.829741 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:24:33.836213 systemd-logind[1305]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:24:33.838895 systemd-logind[1305]: Removed session 23. Nov 1 00:24:33.849268 env[1322]: time="2025-11-01T00:24:33.848826252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:33.849268 env[1322]: time="2025-11-01T00:24:33.848868810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:33.849268 env[1322]: time="2025-11-01T00:24:33.848879969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:33.849268 env[1322]: time="2025-11-01T00:24:33.849096318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7 pid=3883 runtime=io.containerd.runc.v2 Nov 1 00:24:33.874747 sshd[3872]: Accepted publickey for core from 10.0.0.1 port 57500 ssh2: RSA SHA256:kb3suJ2QTjzwtL4e8dR0lwVlSg216vRjckn55cG0Sc4 Nov 1 00:24:33.875577 sshd[3872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:24:33.880794 systemd-logind[1305]: New session 24 of user core. Nov 1 00:24:33.881371 systemd[1]: Started session-24.scope. Nov 1 00:24:33.900553 env[1322]: time="2025-11-01T00:24:33.900498017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-99mgd,Uid:953b1464-7010-466a-9eca-33271fc5120a,Namespace:kube-system,Attempt:0,} returns sandbox id \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\"" Nov 1 00:24:33.901614 kubelet[2069]: E1101 00:24:33.901592 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:33.908651 env[1322]: time="2025-11-01T00:24:33.908605425Z" level=info msg="CreateContainer within sandbox \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:24:33.918222 env[1322]: time="2025-11-01T00:24:33.918153596Z" level=info msg="CreateContainer within sandbox \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\"" Nov 1 00:24:33.919205 env[1322]: time="2025-11-01T00:24:33.919161942Z" level=info msg="StartContainer for 
\"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\"" Nov 1 00:24:33.980647 env[1322]: time="2025-11-01T00:24:33.980596587Z" level=info msg="StartContainer for \"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\" returns successfully" Nov 1 00:24:34.018435 env[1322]: time="2025-11-01T00:24:34.018386034Z" level=info msg="shim disconnected" id=1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9 Nov 1 00:24:34.018718 env[1322]: time="2025-11-01T00:24:34.018695258Z" level=warning msg="cleaning up after shim disconnected" id=1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9 namespace=k8s.io Nov 1 00:24:34.018786 env[1322]: time="2025-11-01T00:24:34.018772254Z" level=info msg="cleaning up dead shim" Nov 1 00:24:34.025402 env[1322]: time="2025-11-01T00:24:34.025360206Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3977 runtime=io.containerd.runc.v2\n" Nov 1 00:24:34.510675 env[1322]: time="2025-11-01T00:24:34.510183186Z" level=info msg="StopPodSandbox for \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\"" Nov 1 00:24:34.510675 env[1322]: time="2025-11-01T00:24:34.510279021Z" level=info msg="Container to stop \"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 1 00:24:34.539645 env[1322]: time="2025-11-01T00:24:34.539593122Z" level=info msg="shim disconnected" id=08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7 Nov 1 00:24:34.539645 env[1322]: time="2025-11-01T00:24:34.539642839Z" level=warning msg="cleaning up after shim disconnected" id=08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7 namespace=k8s.io Nov 1 00:24:34.539645 env[1322]: time="2025-11-01T00:24:34.539653559Z" level=info msg="cleaning up dead shim" Nov 1 00:24:34.546837 env[1322]: time="2025-11-01T00:24:34.546777044Z" 
level=warning msg="cleanup warnings time=\"2025-11-01T00:24:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4009 runtime=io.containerd.runc.v2\n" Nov 1 00:24:34.547209 env[1322]: time="2025-11-01T00:24:34.547177384Z" level=info msg="TearDown network for sandbox \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\" successfully" Nov 1 00:24:34.547253 env[1322]: time="2025-11-01T00:24:34.547210023Z" level=info msg="StopPodSandbox for \"08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7\" returns successfully" Nov 1 00:24:34.654492 kubelet[2069]: I1101 00:24:34.654434 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-lib-modules\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654492 kubelet[2069]: I1101 00:24:34.654492 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/953b1464-7010-466a-9eca-33271fc5120a-cilium-config-path\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654914 kubelet[2069]: I1101 00:24:34.654512 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-net\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654914 kubelet[2069]: I1101 00:24:34.654532 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77tpf\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-kube-api-access-77tpf\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 
00:24:34.654914 kubelet[2069]: I1101 00:24:34.654551 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-bpf-maps\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654914 kubelet[2069]: I1101 00:24:34.654565 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-run\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654914 kubelet[2069]: I1101 00:24:34.654586 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-etc-cni-netd\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.654914 kubelet[2069]: I1101 00:24:34.654600 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-cgroup\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654619 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-clustermesh-secrets\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654636 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-hubble-tls\") pod 
\"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654653 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-xtables-lock\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654682 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cni-path\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654696 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-kernel\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655113 kubelet[2069]: I1101 00:24:34.654715 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-cilium-ipsec-secrets\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655247 kubelet[2069]: I1101 00:24:34.654731 2069 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-hostproc\") pod \"953b1464-7010-466a-9eca-33271fc5120a\" (UID: \"953b1464-7010-466a-9eca-33271fc5120a\") " Nov 1 00:24:34.655247 kubelet[2069]: I1101 00:24:34.654801 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-hostproc" (OuterVolumeSpecName: "hostproc") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.655247 kubelet[2069]: I1101 00:24:34.654826 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.655247 kubelet[2069]: I1101 00:24:34.655050 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.655247 kubelet[2069]: I1101 00:24:34.655097 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.656776 kubelet[2069]: I1101 00:24:34.655806 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.656776 kubelet[2069]: I1101 00:24:34.655855 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cni-path" (OuterVolumeSpecName: "cni-path") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.656776 kubelet[2069]: I1101 00:24:34.655876 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.656776 kubelet[2069]: I1101 00:24:34.656113 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.656776 kubelet[2069]: I1101 00:24:34.655368 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.657015 kubelet[2069]: I1101 00:24:34.656241 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 1 00:24:34.657015 kubelet[2069]: I1101 00:24:34.656728 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/953b1464-7010-466a-9eca-33271fc5120a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:24:34.657920 kubelet[2069]: I1101 00:24:34.657886 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:34.658770 kubelet[2069]: I1101 00:24:34.658738 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:34.659451 kubelet[2069]: I1101 00:24:34.659409 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:24:34.659906 kubelet[2069]: I1101 00:24:34.659877 2069 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-kube-api-access-77tpf" (OuterVolumeSpecName: "kube-api-access-77tpf") pod "953b1464-7010-466a-9eca-33271fc5120a" (UID: "953b1464-7010-466a-9eca-33271fc5120a"). InnerVolumeSpecName "kube-api-access-77tpf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:24:34.754438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7-rootfs.mount: Deactivated successfully. Nov 1 00:24:34.754621 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-08315f8d4b1a24b9de2c94a6dc668da2ad60baa50faf5d43c561697ed32481b7-shm.mount: Deactivated successfully. Nov 1 00:24:34.754711 systemd[1]: var-lib-kubelet-pods-953b1464\x2d7010\x2d466a\x2d9eca\x2d33271fc5120a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77tpf.mount: Deactivated successfully. Nov 1 00:24:34.754792 systemd[1]: var-lib-kubelet-pods-953b1464\x2d7010\x2d466a\x2d9eca\x2d33271fc5120a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 1 00:24:34.754873 systemd[1]: var-lib-kubelet-pods-953b1464\x2d7010\x2d466a\x2d9eca\x2d33271fc5120a-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Nov 1 00:24:34.754949 systemd[1]: var-lib-kubelet-pods-953b1464\x2d7010\x2d466a\x2d9eca\x2d33271fc5120a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 1 00:24:34.755258 kubelet[2069]: I1101 00:24:34.755233 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755355 kubelet[2069]: I1101 00:24:34.755346 2069 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755418 kubelet[2069]: I1101 00:24:34.755409 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755504 kubelet[2069]: I1101 00:24:34.755490 2069 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755581 kubelet[2069]: I1101 00:24:34.755572 2069 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755647 kubelet[2069]: I1101 00:24:34.755638 2069 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755715 kubelet[2069]: I1101 00:24:34.755706 2069 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755782 kubelet[2069]: I1101 00:24:34.755773 2069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755847 kubelet[2069]: I1101 00:24:34.755838 2069 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755913 kubelet[2069]: I1101 00:24:34.755904 2069 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.755986 kubelet[2069]: I1101 00:24:34.755968 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/953b1464-7010-466a-9eca-33271fc5120a-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.756054 kubelet[2069]: I1101 00:24:34.756045 2069 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.756155 kubelet[2069]: I1101 00:24:34.756103 2069 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/953b1464-7010-466a-9eca-33271fc5120a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:34.756229 kubelet[2069]: I1101 00:24:34.756216 2069 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/953b1464-7010-466a-9eca-33271fc5120a-host-proc-sys-net\") on node \"localhost\" 
DevicePath \"\"" Nov 1 00:24:34.756293 kubelet[2069]: I1101 00:24:34.756283 2069 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-77tpf\" (UniqueName: \"kubernetes.io/projected/953b1464-7010-466a-9eca-33271fc5120a-kube-api-access-77tpf\") on node \"localhost\" DevicePath \"\"" Nov 1 00:24:35.512496 kubelet[2069]: I1101 00:24:35.512466 2069 scope.go:117] "RemoveContainer" containerID="1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9" Nov 1 00:24:35.514667 env[1322]: time="2025-11-01T00:24:35.514626763Z" level=info msg="RemoveContainer for \"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\"" Nov 1 00:24:35.518024 env[1322]: time="2025-11-01T00:24:35.517962728Z" level=info msg="RemoveContainer for \"1c675824bcc304636677cde32eaa39e759b65b6ee0f9563b47963890476386b9\" returns successfully" Nov 1 00:24:35.546715 kubelet[2069]: I1101 00:24:35.546669 2069 memory_manager.go:355] "RemoveStaleState removing state" podUID="953b1464-7010-466a-9eca-33271fc5120a" containerName="mount-cgroup" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662680 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-cilium-run\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662746 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-host-proc-sys-kernel\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662769 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-lib-modules\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662788 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bbfb6880-e3ab-4c2d-819b-4106a1459b15-clustermesh-secrets\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662805 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-host-proc-sys-net\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.662811 kubelet[2069]: I1101 00:24:35.662822 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-etc-cni-netd\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662838 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-xtables-lock\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662853 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p575b\" (UniqueName: \"kubernetes.io/projected/bbfb6880-e3ab-4c2d-819b-4106a1459b15-kube-api-access-p575b\") pod \"cilium-rz55d\" (UID: 
\"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662871 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-hostproc\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662887 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-cilium-cgroup\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662902 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bbfb6880-e3ab-4c2d-819b-4106a1459b15-cilium-config-path\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663351 kubelet[2069]: I1101 00:24:35.662920 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-bpf-maps\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663497 kubelet[2069]: I1101 00:24:35.662937 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bbfb6880-e3ab-4c2d-819b-4106a1459b15-cni-path\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663497 kubelet[2069]: I1101 00:24:35.662952 2069 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bbfb6880-e3ab-4c2d-819b-4106a1459b15-cilium-ipsec-secrets\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.663497 kubelet[2069]: I1101 00:24:35.662971 2069 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bbfb6880-e3ab-4c2d-819b-4106a1459b15-hubble-tls\") pod \"cilium-rz55d\" (UID: \"bbfb6880-e3ab-4c2d-819b-4106a1459b15\") " pod="kube-system/cilium-rz55d" Nov 1 00:24:35.852385 kubelet[2069]: E1101 00:24:35.852265 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:35.853122 env[1322]: time="2025-11-01T00:24:35.852777999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz55d,Uid:bbfb6880-e3ab-4c2d-819b-4106a1459b15,Namespace:kube-system,Attempt:0,}" Nov 1 00:24:35.866159 env[1322]: time="2025-11-01T00:24:35.866081542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:24:35.866312 env[1322]: time="2025-11-01T00:24:35.866124500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:24:35.866312 env[1322]: time="2025-11-01T00:24:35.866137659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:24:35.866513 env[1322]: time="2025-11-01T00:24:35.866478444Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a pid=4038 runtime=io.containerd.runc.v2 Nov 1 00:24:35.903538 env[1322]: time="2025-11-01T00:24:35.903492567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rz55d,Uid:bbfb6880-e3ab-4c2d-819b-4106a1459b15,Namespace:kube-system,Attempt:0,} returns sandbox id \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\"" Nov 1 00:24:35.904508 kubelet[2069]: E1101 00:24:35.904484 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:35.907689 env[1322]: time="2025-11-01T00:24:35.907645174Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 1 00:24:35.917323 env[1322]: time="2025-11-01T00:24:35.917261008Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0736cfdbfd988f93dff589a56cde3f6d7c8faac74fefd8cfe395657ea9262d7f\"" Nov 1 00:24:35.917998 env[1322]: time="2025-11-01T00:24:35.917953656Z" level=info msg="StartContainer for \"0736cfdbfd988f93dff589a56cde3f6d7c8faac74fefd8cfe395657ea9262d7f\"" Nov 1 00:24:35.992499 env[1322]: time="2025-11-01T00:24:35.992449321Z" level=info msg="StartContainer for \"0736cfdbfd988f93dff589a56cde3f6d7c8faac74fefd8cfe395657ea9262d7f\" returns successfully" Nov 1 00:24:36.037906 env[1322]: time="2025-11-01T00:24:36.037856657Z" level=info msg="shim disconnected" 
id=0736cfdbfd988f93dff589a56cde3f6d7c8faac74fefd8cfe395657ea9262d7f Nov 1 00:24:36.037906 env[1322]: time="2025-11-01T00:24:36.037904654Z" level=warning msg="cleaning up after shim disconnected" id=0736cfdbfd988f93dff589a56cde3f6d7c8faac74fefd8cfe395657ea9262d7f namespace=k8s.io Nov 1 00:24:36.037906 env[1322]: time="2025-11-01T00:24:36.037914214Z" level=info msg="cleaning up dead shim" Nov 1 00:24:36.045130 env[1322]: time="2025-11-01T00:24:36.045081625Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" Nov 1 00:24:36.515608 kubelet[2069]: E1101 00:24:36.515558 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:36.520430 env[1322]: time="2025-11-01T00:24:36.520378670Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 1 00:24:36.533463 env[1322]: time="2025-11-01T00:24:36.533416628Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"61b9f0483bd2a860e55dd821a126e36663af05393f735413463d71e233bd94c6\"" Nov 1 00:24:36.535443 env[1322]: time="2025-11-01T00:24:36.535404223Z" level=info msg="StartContainer for \"61b9f0483bd2a860e55dd821a126e36663af05393f735413463d71e233bd94c6\"" Nov 1 00:24:36.582610 env[1322]: time="2025-11-01T00:24:36.582563511Z" level=info msg="StartContainer for \"61b9f0483bd2a860e55dd821a126e36663af05393f735413463d71e233bd94c6\" returns successfully" Nov 1 00:24:36.605513 env[1322]: time="2025-11-01T00:24:36.605467764Z" level=info msg="shim disconnected" 
id=61b9f0483bd2a860e55dd821a126e36663af05393f735413463d71e233bd94c6 Nov 1 00:24:36.605513 env[1322]: time="2025-11-01T00:24:36.605511963Z" level=warning msg="cleaning up after shim disconnected" id=61b9f0483bd2a860e55dd821a126e36663af05393f735413463d71e233bd94c6 namespace=k8s.io Nov 1 00:24:36.605513 env[1322]: time="2025-11-01T00:24:36.605520882Z" level=info msg="cleaning up dead shim" Nov 1 00:24:36.612705 env[1322]: time="2025-11-01T00:24:36.612657575Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4183 runtime=io.containerd.runc.v2\n" Nov 1 00:24:37.312461 kubelet[2069]: E1101 00:24:37.312424 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:37.314337 kubelet[2069]: I1101 00:24:37.314308 2069 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="953b1464-7010-466a-9eca-33271fc5120a" path="/var/lib/kubelet/pods/953b1464-7010-466a-9eca-33271fc5120a/volumes" Nov 1 00:24:37.366990 kubelet[2069]: E1101 00:24:37.366941 2069 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 1 00:24:37.519825 kubelet[2069]: E1101 00:24:37.519794 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:37.525118 env[1322]: time="2025-11-01T00:24:37.525079250Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 1 00:24:37.536646 env[1322]: time="2025-11-01T00:24:37.536594390Z" level=info msg="CreateContainer within sandbox 
\"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c\"" Nov 1 00:24:37.538428 env[1322]: time="2025-11-01T00:24:37.538393159Z" level=info msg="StartContainer for \"17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c\"" Nov 1 00:24:37.593742 env[1322]: time="2025-11-01T00:24:37.593263530Z" level=info msg="StartContainer for \"17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c\" returns successfully" Nov 1 00:24:37.617195 env[1322]: time="2025-11-01T00:24:37.617149058Z" level=info msg="shim disconnected" id=17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c Nov 1 00:24:37.617460 env[1322]: time="2025-11-01T00:24:37.617439886Z" level=warning msg="cleaning up after shim disconnected" id=17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c namespace=k8s.io Nov 1 00:24:37.617527 env[1322]: time="2025-11-01T00:24:37.617513363Z" level=info msg="cleaning up dead shim" Nov 1 00:24:37.624434 env[1322]: time="2025-11-01T00:24:37.624384169Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4241 runtime=io.containerd.runc.v2\n" Nov 1 00:24:37.768821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17597b72b34276a6327a19e66f9811a921b15cc24a4bed67a83fba4f7dd2b85c-rootfs.mount: Deactivated successfully. 
Nov 1 00:24:38.523408 kubelet[2069]: E1101 00:24:38.523365 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:38.526325 env[1322]: time="2025-11-01T00:24:38.526275134Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 1 00:24:38.538397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount250575204.mount: Deactivated successfully. Nov 1 00:24:38.541779 env[1322]: time="2025-11-01T00:24:38.541726926Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207\"" Nov 1 00:24:38.544975 env[1322]: time="2025-11-01T00:24:38.544922169Z" level=info msg="StartContainer for \"a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207\"" Nov 1 00:24:38.596195 env[1322]: time="2025-11-01T00:24:38.596146325Z" level=info msg="StartContainer for \"a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207\" returns successfully" Nov 1 00:24:38.619058 env[1322]: time="2025-11-01T00:24:38.619011604Z" level=info msg="shim disconnected" id=a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207 Nov 1 00:24:38.619309 env[1322]: time="2025-11-01T00:24:38.619279314Z" level=warning msg="cleaning up after shim disconnected" id=a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207 namespace=k8s.io Nov 1 00:24:38.619401 env[1322]: time="2025-11-01T00:24:38.619379030Z" level=info msg="cleaning up dead shim" Nov 1 00:24:38.626735 env[1322]: time="2025-11-01T00:24:38.626694241Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:24:38Z\" level=info msg=\"starting signal 
loop\" namespace=k8s.io pid=4295 runtime=io.containerd.runc.v2\n" Nov 1 00:24:38.768886 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a082829265129a8e355aac7355858a826ea0c37ca0d81aa2eef5d703a06f9207-rootfs.mount: Deactivated successfully. Nov 1 00:24:39.080273 kubelet[2069]: I1101 00:24:39.080018 2069 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-01T00:24:39Z","lastTransitionTime":"2025-11-01T00:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 1 00:24:39.527634 kubelet[2069]: E1101 00:24:39.527603 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:39.530789 env[1322]: time="2025-11-01T00:24:39.530749615Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 1 00:24:39.550004 env[1322]: time="2025-11-01T00:24:39.548172427Z" level=info msg="CreateContainer within sandbox \"1ffa8359fbffb5915123970a4c363bbf4c65c1314f5f4e06faf2e3a2d48c479a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c1c6316563f289aa281e80c8cc524133a065c928416724d08814dbbdbb2d7e98\"" Nov 1 00:24:39.550295 env[1322]: time="2025-11-01T00:24:39.550260156Z" level=info msg="StartContainer for \"c1c6316563f289aa281e80c8cc524133a065c928416724d08814dbbdbb2d7e98\"" Nov 1 00:24:39.603507 env[1322]: time="2025-11-01T00:24:39.603457760Z" level=info msg="StartContainer for \"c1c6316563f289aa281e80c8cc524133a065c928416724d08814dbbdbb2d7e98\" returns successfully" Nov 1 00:24:39.836004 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Nov 1 
00:24:40.532530 kubelet[2069]: E1101 00:24:40.532499 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:40.549855 kubelet[2069]: I1101 00:24:40.549167 2069 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rz55d" podStartSLOduration=5.549148173 podStartE2EDuration="5.549148173s" podCreationTimestamp="2025-11-01 00:24:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:24:40.548039087 +0000 UTC m=+83.349166254" watchObservedRunningTime="2025-11-01 00:24:40.549148173 +0000 UTC m=+83.350275300" Nov 1 00:24:41.853092 kubelet[2069]: E1101 00:24:41.853055 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:42.568295 systemd-networkd[1099]: lxc_health: Link UP Nov 1 00:24:42.576937 systemd-networkd[1099]: lxc_health: Gained carrier Nov 1 00:24:42.577098 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Nov 1 00:24:43.694512 systemd-networkd[1099]: lxc_health: Gained IPv6LL Nov 1 00:24:43.853823 kubelet[2069]: E1101 00:24:43.853781 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:44.262714 systemd[1]: run-containerd-runc-k8s.io-c1c6316563f289aa281e80c8cc524133a065c928416724d08814dbbdbb2d7e98-runc.oTa9VJ.mount: Deactivated successfully. 
Nov 1 00:24:44.539740 kubelet[2069]: E1101 00:24:44.539626 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:45.313714 kubelet[2069]: E1101 00:24:45.313679 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:45.540791 kubelet[2069]: E1101 00:24:45.540742 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:46.312517 kubelet[2069]: E1101 00:24:46.312472 2069 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:48.608639 systemd[1]: run-containerd-runc-k8s.io-c1c6316563f289aa281e80c8cc524133a065c928416724d08814dbbdbb2d7e98-runc.zYIsMx.mount: Deactivated successfully. Nov 1 00:24:48.674571 sshd[3872]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:48.677166 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:57500.service: Deactivated successfully. Nov 1 00:24:48.678160 systemd-logind[1305]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:24:48.678248 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:24:48.678966 systemd-logind[1305]: Removed session 24.