Jan 29 11:08:29.893156 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 11:08:29.893177 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 29 11:08:29.893187 kernel: KASLR enabled
Jan 29 11:08:29.893193 kernel: efi: EFI v2.7 by EDK II
Jan 29 11:08:29.893198 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 29 11:08:29.893204 kernel: random: crng init done
Jan 29 11:08:29.893211 kernel: secureboot: Secure boot disabled
Jan 29 11:08:29.893217 kernel: ACPI: Early table checksum verification disabled
Jan 29 11:08:29.893223 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 29 11:08:29.893231 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 29 11:08:29.893237 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893264 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893271 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893277 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893284 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893293 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893299 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893306 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893312 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 29 11:08:29.893318 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 11:08:29.893324 kernel: NUMA: Failed to initialise from firmware
Jan 29 11:08:29.893331 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:08:29.893337 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 11:08:29.893343 kernel: Zone ranges:
Jan 29 11:08:29.893354 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:08:29.893361 kernel: DMA32 empty
Jan 29 11:08:29.893367 kernel: Normal empty
Jan 29 11:08:29.893373 kernel: Movable zone start for each node
Jan 29 11:08:29.893379 kernel: Early memory node ranges
Jan 29 11:08:29.893385 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 29 11:08:29.893391 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 29 11:08:29.893398 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 29 11:08:29.893404 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 11:08:29.893410 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 11:08:29.893416 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 11:08:29.893422 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 11:08:29.893428 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 11:08:29.893435 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 11:08:29.893442 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 11:08:29.893448 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 11:08:29.893457 kernel: psci: probing for conduit method from ACPI.
Jan 29 11:08:29.893463 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 11:08:29.893470 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 11:08:29.893478 kernel: psci: Trusted OS migration not required
Jan 29 11:08:29.893485 kernel: psci: SMC Calling Convention v1.1
Jan 29 11:08:29.893491 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 11:08:29.893498 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 11:08:29.893505 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 11:08:29.893512 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 29 11:08:29.893518 kernel: Detected PIPT I-cache on CPU0
Jan 29 11:08:29.893525 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 11:08:29.893531 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 11:08:29.893538 kernel: CPU features: detected: Spectre-v4
Jan 29 11:08:29.893546 kernel: CPU features: detected: Spectre-BHB
Jan 29 11:08:29.893552 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 11:08:29.893559 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 11:08:29.893565 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 11:08:29.893572 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 11:08:29.893578 kernel: alternatives: applying boot alternatives
Jan 29 11:08:29.893586 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 11:08:29.893593 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 11:08:29.893599 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 11:08:29.893606 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 11:08:29.893612 kernel: Fallback order for Node 0: 0
Jan 29 11:08:29.893620 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 29 11:08:29.893627 kernel: Policy zone: DMA
Jan 29 11:08:29.893633 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 11:08:29.893640 kernel: software IO TLB: area num 4.
Jan 29 11:08:29.893647 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 11:08:29.893653 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Jan 29 11:08:29.893660 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 11:08:29.893667 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 11:08:29.893674 kernel: rcu: RCU event tracing is enabled.
Jan 29 11:08:29.893681 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 11:08:29.893687 kernel: Trampoline variant of Tasks RCU enabled.
Jan 29 11:08:29.893694 kernel: Tracing variant of Tasks RCU enabled.
Jan 29 11:08:29.893703 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 11:08:29.893709 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 11:08:29.893716 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 11:08:29.893722 kernel: GICv3: 256 SPIs implemented
Jan 29 11:08:29.893729 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 11:08:29.893735 kernel: Root IRQ handler: gic_handle_irq
Jan 29 11:08:29.893742 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 11:08:29.893748 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 11:08:29.893755 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 11:08:29.893761 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 11:08:29.893768 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 11:08:29.893776 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 11:08:29.893783 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 11:08:29.893790 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 11:08:29.893796 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:29.893803 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 11:08:29.893809 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 11:08:29.893816 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 11:08:29.893823 kernel: arm-pv: using stolen time PV
Jan 29 11:08:29.893830 kernel: Console: colour dummy device 80x25
Jan 29 11:08:29.893836 kernel: ACPI: Core revision 20230628
Jan 29 11:08:29.893843 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 11:08:29.893852 kernel: pid_max: default: 32768 minimum: 301
Jan 29 11:08:29.893859 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 11:08:29.893865 kernel: landlock: Up and running.
Jan 29 11:08:29.893872 kernel: SELinux: Initializing.
Jan 29 11:08:29.893879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:29.893885 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 11:08:29.893892 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:08:29.893899 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 11:08:29.893906 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 11:08:29.893914 kernel: rcu: Max phase no-delay instances is 400.
Jan 29 11:08:29.893921 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 11:08:29.893928 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 11:08:29.893934 kernel: Remapping and enabling EFI services.
Jan 29 11:08:29.893941 kernel: smp: Bringing up secondary CPUs ...
Jan 29 11:08:29.893948 kernel: Detected PIPT I-cache on CPU1
Jan 29 11:08:29.893954 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 11:08:29.893961 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 11:08:29.893968 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:29.893976 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 11:08:29.893983 kernel: Detected PIPT I-cache on CPU2
Jan 29 11:08:29.893995 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 11:08:29.894004 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 11:08:29.894022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:29.894029 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 11:08:29.894036 kernel: Detected PIPT I-cache on CPU3
Jan 29 11:08:29.894043 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 11:08:29.894050 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 11:08:29.894059 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 11:08:29.894066 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 11:08:29.894073 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 11:08:29.894086 kernel: SMP: Total of 4 processors activated.
Jan 29 11:08:29.894094 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 11:08:29.894102 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 11:08:29.894108 kernel: CPU features: detected: Common not Private translations
Jan 29 11:08:29.894115 kernel: CPU features: detected: CRC32 instructions
Jan 29 11:08:29.894124 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 11:08:29.894132 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 11:08:29.894139 kernel: CPU features: detected: LSE atomic instructions
Jan 29 11:08:29.894146 kernel: CPU features: detected: Privileged Access Never
Jan 29 11:08:29.894153 kernel: CPU features: detected: RAS Extension Support
Jan 29 11:08:29.894160 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 11:08:29.894167 kernel: CPU: All CPU(s) started at EL1
Jan 29 11:08:29.894174 kernel: alternatives: applying system-wide alternatives
Jan 29 11:08:29.894181 kernel: devtmpfs: initialized
Jan 29 11:08:29.894190 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 11:08:29.894197 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 11:08:29.894204 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 11:08:29.894211 kernel: SMBIOS 3.0.0 present.
Jan 29 11:08:29.894218 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 29 11:08:29.894225 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 11:08:29.894232 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 11:08:29.894239 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 11:08:29.894265 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 11:08:29.894274 kernel: audit: initializing netlink subsys (disabled)
Jan 29 11:08:29.894281 kernel: audit: type=2000 audit(0.017:1): state=initialized audit_enabled=0 res=1
Jan 29 11:08:29.894288 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 11:08:29.894295 kernel: cpuidle: using governor menu
Jan 29 11:08:29.894302 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 11:08:29.894309 kernel: ASID allocator initialised with 32768 entries
Jan 29 11:08:29.894316 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 11:08:29.894323 kernel: Serial: AMBA PL011 UART driver
Jan 29 11:08:29.894330 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 11:08:29.894339 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 11:08:29.894346 kernel: Modules: 508880 pages in range for PLT usage
Jan 29 11:08:29.894353 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 11:08:29.894360 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 11:08:29.894367 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 11:08:29.894373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 11:08:29.894380 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 11:08:29.894388 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 11:08:29.894394 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 11:08:29.894403 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 11:08:29.894410 kernel: ACPI: Added _OSI(Module Device)
Jan 29 11:08:29.894417 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 11:08:29.894424 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 11:08:29.894431 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 11:08:29.894438 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 11:08:29.894444 kernel: ACPI: Interpreter enabled
Jan 29 11:08:29.894451 kernel: ACPI: Using GIC for interrupt routing
Jan 29 11:08:29.894458 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 11:08:29.894465 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 11:08:29.894474 kernel: printk: console [ttyAMA0] enabled
Jan 29 11:08:29.894481 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 11:08:29.894621 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 11:08:29.894696 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 11:08:29.894765 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 11:08:29.894831 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 11:08:29.894896 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 11:08:29.894908 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 29 11:08:29.894915 kernel: PCI host bridge to bus 0000:00
Jan 29 11:08:29.894987 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 11:08:29.895048 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 29 11:08:29.895124 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 11:08:29.895186 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 11:08:29.895289 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 11:08:29.895373 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 11:08:29.895443 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 29 11:08:29.895510 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 11:08:29.895578 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:08:29.895645 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 11:08:29.895712 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 11:08:29.895784 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 29 11:08:29.895861 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 11:08:29.895922 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 29 11:08:29.895981 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 11:08:29.895990 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 11:08:29.895997 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 11:08:29.896004 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 11:08:29.896011 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 11:08:29.896021 kernel: iommu: Default domain type: Translated
Jan 29 11:08:29.896029 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 11:08:29.896036 kernel: efivars: Registered efivars operations
Jan 29 11:08:29.896042 kernel: vgaarb: loaded
Jan 29 11:08:29.896049 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 11:08:29.896056 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 11:08:29.896064 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 11:08:29.896070 kernel: pnp: PnP ACPI init
Jan 29 11:08:29.896157 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 11:08:29.896170 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 11:08:29.896178 kernel: NET: Registered PF_INET protocol family
Jan 29 11:08:29.896185 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 11:08:29.896192 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 11:08:29.896199 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 11:08:29.896206 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 11:08:29.896214 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 11:08:29.896221 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 11:08:29.896230 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:29.896237 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 11:08:29.896334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 11:08:29.896343 kernel: PCI: CLS 0 bytes, default 64
Jan 29 11:08:29.896350 kernel: kvm [1]: HYP mode not available
Jan 29 11:08:29.896357 kernel: Initialise system trusted keyrings
Jan 29 11:08:29.896363 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 11:08:29.896370 kernel: Key type asymmetric registered
Jan 29 11:08:29.896377 kernel: Asymmetric key parser 'x509' registered
Jan 29 11:08:29.896388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 11:08:29.896395 kernel: io scheduler mq-deadline registered
Jan 29 11:08:29.896402 kernel: io scheduler kyber registered
Jan 29 11:08:29.896409 kernel: io scheduler bfq registered
Jan 29 11:08:29.896416 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 11:08:29.896423 kernel: ACPI: button: Power Button [PWRB]
Jan 29 11:08:29.896430 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 11:08:29.896508 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 11:08:29.896518 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 11:08:29.896528 kernel: thunder_xcv, ver 1.0
Jan 29 11:08:29.896535 kernel: thunder_bgx, ver 1.0
Jan 29 11:08:29.896542 kernel: nicpf, ver 1.0
Jan 29 11:08:29.896549 kernel: nicvf, ver 1.0
Jan 29 11:08:29.896626 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 11:08:29.896692 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T11:08:29 UTC (1738148909)
Jan 29 11:08:29.896702 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 11:08:29.896709 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 11:08:29.896719 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 11:08:29.896726 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 11:08:29.896733 kernel: NET: Registered PF_INET6 protocol family
Jan 29 11:08:29.896741 kernel: Segment Routing with IPv6
Jan 29 11:08:29.896748 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 11:08:29.896755 kernel: NET: Registered PF_PACKET protocol family
Jan 29 11:08:29.896762 kernel: Key type dns_resolver registered
Jan 29 11:08:29.896769 kernel: registered taskstats version 1
Jan 29 11:08:29.896776 kernel: Loading compiled-in X.509 certificates
Jan 29 11:08:29.896783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 29 11:08:29.896792 kernel: Key type .fscrypt registered
Jan 29 11:08:29.896799 kernel: Key type fscrypt-provisioning registered
Jan 29 11:08:29.896807 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 11:08:29.896814 kernel: ima: Allocated hash algorithm: sha1
Jan 29 11:08:29.896821 kernel: ima: No architecture policies found
Jan 29 11:08:29.896828 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 11:08:29.896835 kernel: clk: Disabling unused clocks
Jan 29 11:08:29.896842 kernel: Freeing unused kernel memory: 39936K
Jan 29 11:08:29.896851 kernel: Run /init as init process
Jan 29 11:08:29.896858 kernel: with arguments:
Jan 29 11:08:29.896865 kernel: /init
Jan 29 11:08:29.896871 kernel: with environment:
Jan 29 11:08:29.896878 kernel: HOME=/
Jan 29 11:08:29.896885 kernel: TERM=linux
Jan 29 11:08:29.896892 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 11:08:29.896901 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 11:08:29.896912 systemd[1]: Detected virtualization kvm.
Jan 29 11:08:29.896920 systemd[1]: Detected architecture arm64.
Jan 29 11:08:29.896927 systemd[1]: Running in initrd.
Jan 29 11:08:29.896935 systemd[1]: No hostname configured, using default hostname.
Jan 29 11:08:29.896943 systemd[1]: Hostname set to .
Jan 29 11:08:29.896951 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 11:08:29.896959 systemd[1]: Queued start job for default target initrd.target.
Jan 29 11:08:29.896966 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:08:29.896976 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:08:29.896984 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 11:08:29.896992 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 11:08:29.897000 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 11:08:29.897008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 11:08:29.897017 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 11:08:29.897025 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 11:08:29.897034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:08:29.897042 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:08:29.897050 systemd[1]: Reached target paths.target - Path Units.
Jan 29 11:08:29.897057 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 11:08:29.897065 systemd[1]: Reached target swap.target - Swaps.
Jan 29 11:08:29.897073 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 11:08:29.897091 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:08:29.897100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:08:29.897107 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 11:08:29.897117 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 11:08:29.897125 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 11:08:29.897132 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 11:08:29.897140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 11:08:29.897148 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 11:08:29.897156 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 11:08:29.897163 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 11:08:29.897171 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 11:08:29.897180 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 11:08:29.897188 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 11:08:29.897195 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 11:08:29.897203 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:29.897211 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 11:08:29.897218 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 11:08:29.897226 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 11:08:29.897236 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 11:08:29.897298 systemd-journald[238]: Collecting audit messages is disabled.
Jan 29 11:08:29.897321 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 11:08:29.897330 systemd-journald[238]: Journal started
Jan 29 11:08:29.897353 systemd-journald[238]: Runtime Journal (/run/log/journal/7b47da13fb284c92a66d60914a33c338) is 5.9M, max 47.3M, 41.4M free.
Jan 29 11:08:29.888389 systemd-modules-load[239]: Inserted module 'overlay'
Jan 29 11:08:29.899293 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 11:08:29.900353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:29.904263 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 11:08:29.905279 kernel: Bridge firewalling registered
Jan 29 11:08:29.905258 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 29 11:08:29.912375 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:29.914153 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 11:08:29.915670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 11:08:29.917074 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 11:08:29.921860 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 11:08:29.923926 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 11:08:29.927004 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 11:08:29.934679 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 11:08:29.936504 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:29.946419 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 11:08:29.948360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 11:08:29.958977 dracut-cmdline[276]: dracut-dracut-053
Jan 29 11:08:29.961529 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 29 11:08:29.980610 systemd-resolved[277]: Positive Trust Anchors:
Jan 29 11:08:29.980631 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 11:08:29.980662 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 11:08:29.985424 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 29 11:08:29.986458 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 11:08:29.988102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:08:30.038272 kernel: SCSI subsystem initialized
Jan 29 11:08:30.043257 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 11:08:30.050262 kernel: iscsi: registered transport (tcp)
Jan 29 11:08:30.064286 kernel: iscsi: registered transport (qla4xxx)
Jan 29 11:08:30.064306 kernel: QLogic iSCSI HBA Driver
Jan 29 11:08:30.115625 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 11:08:30.130416 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 11:08:30.145329 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 11:08:30.145372 kernel: device-mapper: uevent: version 1.0.3
Jan 29 11:08:30.146264 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 11:08:30.193280 kernel: raid6: neonx8 gen() 15771 MB/s
Jan 29 11:08:30.210259 kernel: raid6: neonx4 gen() 15745 MB/s
Jan 29 11:08:30.227258 kernel: raid6: neonx2 gen() 13208 MB/s
Jan 29 11:08:30.244264 kernel: raid6: neonx1 gen() 10419 MB/s
Jan 29 11:08:30.261259 kernel: raid6: int64x8 gen() 6776 MB/s
Jan 29 11:08:30.278269 kernel: raid6: int64x4 gen() 7340 MB/s
Jan 29 11:08:30.295264 kernel: raid6: int64x2 gen() 6109 MB/s
Jan 29 11:08:30.312268 kernel: raid6: int64x1 gen() 5056 MB/s
Jan 29 11:08:30.312294 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Jan 29 11:08:30.329271 kernel: raid6: .... xor() 11970 MB/s, rmw enabled
Jan 29 11:08:30.329298 kernel: raid6: using neon recovery algorithm
Jan 29 11:08:30.334616 kernel: xor: measuring software checksum speed
Jan 29 11:08:30.334632 kernel: 8regs : 21613 MB/sec
Jan 29 11:08:30.334647 kernel: 32regs : 21080 MB/sec
Jan 29 11:08:30.335559 kernel: arm64_neon : 27946 MB/sec
Jan 29 11:08:30.335584 kernel: xor: using function: arm64_neon (27946 MB/sec)
Jan 29 11:08:30.386280 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 11:08:30.396814 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 11:08:30.408430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 11:08:30.423880 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jan 29 11:08:30.427042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 11:08:30.430111 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 11:08:30.444588 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Jan 29 11:08:30.471525 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 11:08:30.482405 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 11:08:30.521157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:08:30.528432 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 11:08:30.542606 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:08:30.543719 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:08:30.545259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:08:30.547035 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 11:08:30.554653 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 11:08:30.564576 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:08:30.575474 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 11:08:30.579604 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 11:08:30.579705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 11:08:30.579723 kernel: GPT:9289727 != 19775487
Jan 29 11:08:30.579732 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 11:08:30.579741 kernel: GPT:9289727 != 19775487
Jan 29 11:08:30.579752 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 11:08:30.579761 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:08:30.578896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 11:08:30.579010 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:30.580907 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:30.581816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 11:08:30.581936 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:30.583579 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:30.601279 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (523)
Jan 29 11:08:30.602504 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 11:08:30.605259 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (522)
Jan 29 11:08:30.616376 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 11:08:30.618469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 11:08:30.626003 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 11:08:30.629639 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 11:08:30.630557 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 11:08:30.635944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 11:08:30.648447 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 11:08:30.650169 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 11:08:30.668101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 11:08:30.782555 disk-uuid[552]: Primary Header is updated.
Jan 29 11:08:30.782555 disk-uuid[552]: Secondary Entries is updated.
Jan 29 11:08:30.782555 disk-uuid[552]: Secondary Header is updated.
Jan 29 11:08:30.786270 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:08:31.796266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 11:08:31.797002 disk-uuid[561]: The operation has completed successfully.
Jan 29 11:08:31.817027 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 11:08:31.817135 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 11:08:31.840402 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 11:08:31.844338 sh[574]: Success
Jan 29 11:08:31.864268 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 11:08:31.892600 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 11:08:31.900589 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 11:08:31.902139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 11:08:31.912842 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 29 11:08:31.912876 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:31.912888 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 11:08:31.914299 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 11:08:31.914315 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 11:08:31.918697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 11:08:31.919927 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 11:08:31.931399 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 11:08:31.932830 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 11:08:31.943919 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:08:31.943956 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:31.943967 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:08:31.946305 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:08:31.954629 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 11:08:31.956297 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:08:31.965271 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 11:08:31.972404 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 11:08:32.033479 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 11:08:32.045449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 11:08:32.069765 systemd-networkd[767]: lo: Link UP
Jan 29 11:08:32.069779 systemd-networkd[767]: lo: Gained carrier
Jan 29 11:08:32.070678 systemd-networkd[767]: Enumeration completed
Jan 29 11:08:32.070991 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 11:08:32.072587 ignition[677]: Ignition 2.20.0
Jan 29 11:08:32.071211 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:32.072593 ignition[677]: Stage: fetch-offline
Jan 29 11:08:32.071214 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 11:08:32.072624 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:32.071910 systemd-networkd[767]: eth0: Link UP
Jan 29 11:08:32.072632 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:08:32.071913 systemd-networkd[767]: eth0: Gained carrier
Jan 29 11:08:32.072781 ignition[677]: parsed url from cmdline: ""
Jan 29 11:08:32.071920 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 11:08:32.072784 ignition[677]: no config URL provided
Jan 29 11:08:32.072394 systemd[1]: Reached target network.target - Network.
Jan 29 11:08:32.072789 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 11:08:32.072796 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Jan 29 11:08:32.072822 ignition[677]: op(1): [started] loading QEMU firmware config module
Jan 29 11:08:32.072826 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 11:08:32.081415 ignition[677]: op(1): [finished] loading QEMU firmware config module
Jan 29 11:08:32.088335 systemd-networkd[767]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 11:08:32.092399 ignition[677]: parsing config with SHA512: c5ff7fe1c7c321048abe3ab020f2739cb29f0adafb1d4d5b0248c5ae3b1f652cc55aa18a4485e43aa800e1b03c343783a850e0371c593abe2dde9886e9e1a0ab
Jan 29 11:08:32.095749 unknown[677]: fetched base config from "system"
Jan 29 11:08:32.095760 unknown[677]: fetched user config from "qemu"
Jan 29 11:08:32.096029 ignition[677]: fetch-offline: fetch-offline passed
Jan 29 11:08:32.096787 ignition[677]: Ignition finished successfully
Jan 29 11:08:32.097865 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:08:32.099284 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 11:08:32.105488 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 11:08:32.116014 ignition[773]: Ignition 2.20.0
Jan 29 11:08:32.116024 ignition[773]: Stage: kargs
Jan 29 11:08:32.116192 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:32.116211 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:08:32.116909 ignition[773]: kargs: kargs passed
Jan 29 11:08:32.116951 ignition[773]: Ignition finished successfully
Jan 29 11:08:32.120342 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 11:08:32.129456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 11:08:32.140193 ignition[782]: Ignition 2.20.0
Jan 29 11:08:32.140208 ignition[782]: Stage: disks
Jan 29 11:08:32.140409 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:32.142991 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 11:08:32.140419 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:08:32.144042 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 11:08:32.141138 ignition[782]: disks: disks passed
Jan 29 11:08:32.145317 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 11:08:32.141184 ignition[782]: Ignition finished successfully
Jan 29 11:08:32.146918 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 11:08:32.148381 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 11:08:32.149582 systemd[1]: Reached target basic.target - Basic System.
Jan 29 11:08:32.160389 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 11:08:32.171019 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 11:08:32.174872 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 11:08:32.177903 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 11:08:32.222166 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 11:08:32.223392 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 29 11:08:32.223298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 11:08:32.233346 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:08:32.234866 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 11:08:32.236048 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 11:08:32.236095 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 11:08:32.236118 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:08:32.242264 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Jan 29 11:08:32.241768 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 11:08:32.246693 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:08:32.246712 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:32.246722 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:08:32.246732 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:08:32.245953 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 11:08:32.248763 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:08:32.285875 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 11:08:32.289858 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Jan 29 11:08:32.292837 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 11:08:32.296324 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 11:08:32.361796 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 11:08:32.375396 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 11:08:32.377636 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 11:08:32.382257 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:08:32.398009 ignition[915]: INFO : Ignition 2.20.0
Jan 29 11:08:32.398009 ignition[915]: INFO : Stage: mount
Jan 29 11:08:32.398009 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:32.398009 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:08:32.401365 ignition[915]: INFO : mount: mount passed
Jan 29 11:08:32.401365 ignition[915]: INFO : Ignition finished successfully
Jan 29 11:08:32.399724 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 11:08:32.400836 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 11:08:32.411335 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 11:08:32.912381 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 11:08:32.924428 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 11:08:32.931259 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Jan 29 11:08:32.933414 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 29 11:08:32.933431 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 11:08:32.933441 kernel: BTRFS info (device vda6): using free space tree
Jan 29 11:08:32.935262 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 11:08:32.936480 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 11:08:32.957679 ignition[946]: INFO : Ignition 2.20.0
Jan 29 11:08:32.959040 ignition[946]: INFO : Stage: files
Jan 29 11:08:32.959040 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 11:08:32.959040 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 11:08:32.961373 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 11:08:32.961373 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 11:08:32.961373 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 11:08:32.964615 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 11:08:32.964615 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 11:08:32.964615 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 11:08:32.963957 unknown[946]: wrote ssh authorized keys file for user: core
Jan 29 11:08:32.968496 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 11:08:32.968496 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 11:08:32.968496 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:08:32.972334 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 11:08:32.972334 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:08:32.972334 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:08:32.972334 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:08:32.972334 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 29 11:08:33.246060 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 11:08:33.262115 systemd-networkd[767]: eth0: Gained IPv6LL
Jan 29 11:08:33.440546 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 29 11:08:33.440546 ignition[946]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 29 11:08:33.443268 ignition[946]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:08:33.443268 ignition[946]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 11:08:33.443268 ignition[946]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 11:08:33.443268 ignition[946]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:08:33.463979 ignition[946]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:08:33.467201 ignition[946]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 11:08:33.469142 ignition[946]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 11:08:33.469142 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:08:33.469142 ignition[946]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 11:08:33.469142 ignition[946]: INFO : files: files passed
Jan 29 11:08:33.469142 ignition[946]: INFO : Ignition finished successfully
Jan 29 11:08:33.469890 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 11:08:33.481525 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 11:08:33.483111 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 11:08:33.485879 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 11:08:33.485976 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 11:08:33.490225 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 11:08:33.493317 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:08:33.493317 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:08:33.495494 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 11:08:33.496222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:08:33.497512 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 11:08:33.509392 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 11:08:33.527150 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 11:08:33.527301 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 11:08:33.528864 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 11:08:33.530150 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 11:08:33.531437 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 11:08:33.532137 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 11:08:33.546120 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:08:33.548179 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 11:08:33.558643 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 11:08:33.560264 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 11:08:33.561145 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 11:08:33.562425 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 11:08:33.562540 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 11:08:33.564328 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 11:08:33.565741 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 11:08:33.566948 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 11:08:33.568163 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 11:08:33.569572 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 11:08:33.570970 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 11:08:33.572275 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 11:08:33.573688 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 11:08:33.575038 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 11:08:33.576291 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 11:08:33.577398 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 11:08:33.577514 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 11:08:33.579213 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 11:08:33.580596 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 11:08:33.582062 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 11:08:33.582168 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 11:08:33.583604 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 11:08:33.583712 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 11:08:33.585728 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 11:08:33.585841 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 11:08:33.587167 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 11:08:33.588267 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 11:08:33.589303 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 11:08:33.590492 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 11:08:33.591577 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 11:08:33.592831 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 11:08:33.592916 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 11:08:33.594376 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 11:08:33.594454 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 11:08:33.595546 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 11:08:33.595647 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 11:08:33.596902 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 11:08:33.596995 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 11:08:33.609392 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 11:08:33.611348 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 11:08:33.611966 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 11:08:33.612073 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 11:08:33.613373 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 11:08:33.613458 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:08:33.617762 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:08:33.617854 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:08:33.622686 ignition[1000]: INFO : Ignition 2.20.0 Jan 29 11:08:33.622686 ignition[1000]: INFO : Stage: umount Jan 29 11:08:33.624716 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:08:33.624716 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 29 11:08:33.624716 ignition[1000]: INFO : umount: umount passed Jan 29 11:08:33.624716 ignition[1000]: INFO : Ignition finished successfully Jan 29 11:08:33.625059 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:08:33.625572 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:08:33.625671 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:08:33.628607 systemd[1]: Stopped target network.target - Network. Jan 29 11:08:33.629684 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:08:33.629755 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:08:33.630845 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:08:33.630883 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:08:33.632046 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:08:33.632096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:08:33.633286 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:08:33.633329 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:08:33.634748 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:08:33.635964 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:08:33.641303 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 29 11:08:33.643194 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:08:33.643334 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:08:33.646403 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:08:33.646435 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:08:33.654456 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:08:33.655098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:08:33.655151 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:08:33.657613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:33.658700 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:08:33.658794 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:08:33.662429 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:08:33.662516 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:33.663344 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:08:33.663385 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:33.664608 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Jan 29 11:08:33.664645 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:08:33.666986 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:08:33.667107 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:08:33.668937 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:08:33.669093 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:33.671378 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:08:33.671441 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:33.672827 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:08:33.672860 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:08:33.674048 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:08:33.674099 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:08:33.676148 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:08:33.676187 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:08:33.678093 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:08:33.678137 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:08:33.680878 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:08:33.682133 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:08:33.682181 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:33.683717 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:08:33.683758 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:08:33.685126 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:08:33.685162 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:33.686766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:08:33.686801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:33.688477 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:08:33.688589 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:08:33.689787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:08:33.689861 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:08:33.691304 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:08:33.692195 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:08:33.694282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:08:33.696026 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:08:33.705183 systemd[1]: Switching root. Jan 29 11:08:33.744104 systemd-journald[238]: Journal stopped Jan 29 11:08:34.419400 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 29 11:08:34.419459 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:08:34.419471 kernel: SELinux: policy capability open_perms=1 Jan 29 11:08:34.419481 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:08:34.419495 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:08:34.419507 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:08:34.419517 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:08:34.419527 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:08:34.419536 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:08:34.419546 kernel: audit: type=1403 audit(1738148913.871:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:08:34.419557 systemd[1]: Successfully loaded SELinux policy in 31.857ms. Jan 29 11:08:34.419574 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.178ms. Jan 29 11:08:34.419586 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:08:34.419597 systemd[1]: Detected virtualization kvm. Jan 29 11:08:34.419612 systemd[1]: Detected architecture arm64. Jan 29 11:08:34.419622 systemd[1]: Detected first boot. Jan 29 11:08:34.419633 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:08:34.419645 zram_generator::config[1044]: No configuration found. Jan 29 11:08:34.419657 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:08:34.419669 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:08:34.419679 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:08:34.419693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:08:34.419704 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:08:34.419714 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:08:34.419725 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:08:34.419736 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:08:34.419747 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:08:34.419759 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:08:34.419770 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:08:34.419780 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:08:34.419791 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:08:34.419802 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:08:34.419812 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:08:34.419823 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:08:34.419834 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 29 11:08:34.419846 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:08:34.419858 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 29 11:08:34.419868 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:08:34.419883 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:08:34.419893 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:08:34.419904 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:08:34.419914 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:08:34.419925 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:08:34.419936 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:08:34.419948 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:08:34.419958 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:08:34.419970 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:08:34.419981 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:08:34.419991 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:08:34.420002 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:08:34.420013 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:08:34.420023 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:08:34.420034 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:08:34.420046 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:08:34.420057 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:08:34.420067 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:08:34.420079 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:08:34.420094 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:08:34.420106 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:08:34.420118 systemd[1]: Reached target machines.target - Containers. Jan 29 11:08:34.420128 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:08:34.420139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:34.420151 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:08:34.420162 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:08:34.420173 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:34.420183 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:08:34.420194 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:34.420204 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:08:34.420215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 11:08:34.420226 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:08:34.420238 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:08:34.420325 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:08:34.420337 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:08:34.420347 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:08:34.420358 kernel: fuse: init (API version 7.39) Jan 29 11:08:34.420368 kernel: loop: module loaded Jan 29 11:08:34.420377 kernel: ACPI: bus type drm_connector registered Jan 29 11:08:34.420387 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:08:34.420398 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:08:34.420411 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:08:34.420422 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:08:34.420433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:08:34.420462 systemd-journald[1115]: Collecting audit messages is disabled. Jan 29 11:08:34.420487 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:08:34.420498 systemd[1]: Stopped verity-setup.service. Jan 29 11:08:34.420509 systemd-journald[1115]: Journal started Jan 29 11:08:34.420535 systemd-journald[1115]: Runtime Journal (/run/log/journal/7b47da13fb284c92a66d60914a33c338) is 5.9M, max 47.3M, 41.4M free. Jan 29 11:08:34.243387 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:08:34.260187 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:08:34.260532 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:08:34.422811 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:08:34.423460 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:08:34.424341 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:08:34.425334 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:08:34.426147 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:08:34.427156 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:08:34.428126 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:08:34.429139 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:08:34.430306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:08:34.431428 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:08:34.431567 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:08:34.432694 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:34.432836 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:34.433903 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:08:34.434048 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:08:34.435144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:34.435306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 11:08:34.436482 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:08:34.436620 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:08:34.437687 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:34.437856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:34.438924 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:08:34.440309 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:08:34.442463 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:08:34.454548 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:08:34.462401 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:08:34.464293 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:08:34.465080 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:08:34.465127 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:08:34.466937 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:08:34.468979 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:08:34.470889 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:08:34.471814 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:34.473153 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:08:34.474932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:08:34.475845 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:08:34.476785 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:08:34.477711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:08:34.481428 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:08:34.484619 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:08:34.486950 systemd-journald[1115]: Time spent on flushing to /var/log/journal/7b47da13fb284c92a66d60914a33c338 is 19.848ms for 842 entries. Jan 29 11:08:34.486950 systemd-journald[1115]: System Journal (/var/log/journal/7b47da13fb284c92a66d60914a33c338) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:08:34.519314 systemd-journald[1115]: Received client request to flush runtime journal. Jan 29 11:08:34.519769 kernel: loop0: detected capacity change from 0 to 194096 Jan 29 11:08:34.489762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:08:34.494030 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:08:34.497291 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:08:34.498231 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
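The entries above come from systemd-journald, which at this point is keeping a runtime journal under /run/log/journal and has just been asked to flush it to persistent storage. As a side note, a minimal sketch of how entries like the ones in this log can be pulled back out programmatically, assuming a host where journalctl is available (the unit name below is only an example):

import json
import subprocess

def unit_messages(unit: str):
    """Yield (realtime timestamp, MESSAGE) pairs for one unit from the current boot."""
    # journalctl -o json prints one JSON object per journal entry, one per line.
    proc = subprocess.run(
        ["journalctl", "-b", "-u", unit, "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    )
    for line in proc.stdout.splitlines():
        entry = json.loads(line)
        # __REALTIME_TIMESTAMP is microseconds since the epoch, as a string.
        # MESSAGE may be a byte array for non-UTF-8 payloads; ignored here.
        yield entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE")

if __name__ == "__main__":
    for ts, msg in unit_messages("systemd-journald.service"):
        print(ts, msg)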
Jan 29 11:08:34.499657 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:08:34.500980 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:08:34.504812 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:08:34.516510 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:08:34.522468 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:08:34.523917 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:08:34.525508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:34.525783 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Jan 29 11:08:34.525794 systemd-tmpfiles[1156]: ACLs are not supported, ignoring. Jan 29 11:08:34.534901 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:08:34.539257 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:08:34.547609 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:08:34.548791 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:08:34.552837 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:08:34.554308 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:08:34.575419 kernel: loop1: detected capacity change from 0 to 116784 Jan 29 11:08:34.579184 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:08:34.589421 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:08:34.600274 kernel: loop2: detected capacity change from 0 to 113552 Jan 29 11:08:34.602236 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 29 11:08:34.602361 systemd-tmpfiles[1179]: ACLs are not supported, ignoring. Jan 29 11:08:34.607356 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:08:34.643317 kernel: loop3: detected capacity change from 0 to 194096 Jan 29 11:08:34.651271 kernel: loop4: detected capacity change from 0 to 116784 Jan 29 11:08:34.664271 kernel: loop5: detected capacity change from 0 to 113552 Jan 29 11:08:34.667487 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 29 11:08:34.667880 (sd-merge)[1183]: Merged extensions into '/usr'. Jan 29 11:08:34.671040 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:08:34.671166 systemd[1]: Reloading... Jan 29 11:08:34.722265 zram_generator::config[1209]: No configuration found. Jan 29 11:08:34.794233 ldconfig[1150]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:08:34.837906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:08:34.881566 systemd[1]: Reloading finished in 209 ms. Jan 29 11:08:34.910789 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
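The (sd-merge) lines above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr before the reload. The canonical way to inspect this is `systemd-sysext status`; as a rough illustration only, and assuming the standard sysext layout in which each merged image ships an extension-release file, the merged names can also be inferred like this:

import os

RELEASE_DIR = "/usr/lib/extension-release.d"  # sysext convention inside merged images

def merged_extensions():
    """Return extension names inferred from extension-release.<NAME> files."""
    if not os.path.isdir(RELEASE_DIR):
        return []
    prefix = "extension-release."
    return sorted(
        name[len(prefix):]
        for name in os.listdir(RELEASE_DIR)
        if name.startswith(prefix)
    )

if __name__ == "__main__":
    # On a host like the one in this log, the result would typically include
    # names such as the three extensions mentioned above.
    print(merged_extensions())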
Jan 29 11:08:34.913405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:08:34.924444 systemd[1]: Starting ensure-sysext.service... Jan 29 11:08:34.926304 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:08:34.938844 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:08:34.938860 systemd[1]: Reloading... Jan 29 11:08:34.950870 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:08:34.951083 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:08:34.952344 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:08:34.952567 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jan 29 11:08:34.952616 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Jan 29 11:08:34.955037 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:08:34.955050 systemd-tmpfiles[1246]: Skipping /boot Jan 29 11:08:34.966901 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:08:34.966922 systemd-tmpfiles[1246]: Skipping /boot Jan 29 11:08:34.981267 zram_generator::config[1273]: No configuration found. Jan 29 11:08:35.070065 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:08:35.113449 systemd[1]: Reloading finished in 174 ms. Jan 29 11:08:35.129024 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:08:35.140289 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:08:35.147273 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:08:35.149299 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:08:35.153177 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:08:35.155723 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:08:35.170477 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:08:35.173425 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:08:35.179057 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:35.180559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:35.187029 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:35.191543 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:08:35.193864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:35.201638 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:08:35.207278 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:08:35.208945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 29 11:08:35.209107 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:35.211777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:35.211925 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:35.213611 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:08:35.215267 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:35.215400 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:35.216489 systemd-udevd[1315]: Using default interface naming scheme 'v255'. Jan 29 11:08:35.223698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:35.233999 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:35.240949 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:35.248957 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:08:35.249965 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:35.253600 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:08:35.254971 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:08:35.257323 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:08:35.260310 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:35.260469 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:35.263770 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:08:35.267035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:35.268316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:35.269864 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:35.271346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:35.280823 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:08:35.293274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1349) Jan 29 11:08:35.293263 systemd[1]: Finished ensure-sysext.service. Jan 29 11:08:35.303319 augenrules[1378]: No rules Jan 29 11:08:35.304847 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:08:35.305082 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:08:35.310155 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 29 11:08:35.318156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:08:35.322455 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:08:35.327145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:08:35.332411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:08:35.336721 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 29 11:08:35.338802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:08:35.341173 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:08:35.344761 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:08:35.346101 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:08:35.347356 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:08:35.351142 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:08:35.351351 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:08:35.352553 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:08:35.352695 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:08:35.353922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:08:35.354054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:08:35.355502 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:08:35.355634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:08:35.364506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:08:35.370053 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:08:35.371077 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:08:35.371178 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:08:35.400365 systemd-resolved[1313]: Positive Trust Anchors: Jan 29 11:08:35.400660 systemd-resolved[1313]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:08:35.401196 systemd-resolved[1313]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:08:35.402522 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:08:35.408378 systemd-resolved[1313]: Defaulting to hostname 'linux'. Jan 29 11:08:35.413417 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:08:35.417012 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:08:35.430515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:08:35.432026 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:08:35.433556 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Jan 29 11:08:35.436364 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:08:35.438711 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:08:35.442378 systemd-networkd[1391]: lo: Link UP Jan 29 11:08:35.442385 systemd-networkd[1391]: lo: Gained carrier Jan 29 11:08:35.445315 systemd-networkd[1391]: Enumeration completed Jan 29 11:08:35.447805 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:08:35.448864 systemd[1]: Reached target network.target - Network. Jan 29 11:08:35.450881 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:08:35.454175 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:35.454186 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:08:35.454928 systemd-networkd[1391]: eth0: Link UP Jan 29 11:08:35.454935 systemd-networkd[1391]: eth0: Gained carrier Jan 29 11:08:35.454948 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:08:35.461227 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:08:35.478297 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 29 11:08:35.478975 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection. Jan 29 11:08:35.479723 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 29 11:08:35.479779 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-01-29 11:08:35.625077 UTC. Jan 29 11:08:35.489553 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:08:35.495642 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:08:35.496765 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:08:35.497801 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:08:35.498660 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:08:35.499578 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:08:35.500616 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:08:35.501488 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:08:35.502367 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:08:35.503208 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:08:35.503239 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:08:35.503867 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:08:35.505467 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:08:35.507612 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:08:35.516122 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:08:35.518171 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
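Above, systemd-timesyncd reports contacting the DHCP-provided time server at 10.0.0.1:123 and stepping the clock. timesyncd implements full NTP; purely as a protocol-level illustration of that exchange, here is a minimal SNTP probe (RFC 4330 style), with the server address taken from this log's environment as a placeholder for any reachable NTP server:

import socket
import struct
import time

NTP_SERVER = "10.0.0.1"        # placeholder: the server seen in the log above
NTP_PORT = 123
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = NTP_SERVER, timeout: float = 2.0) -> float:
    """Ask an NTP server for the current time with a minimal SNTP request."""
    # 48-byte request: first byte 0x1b -> LI=0, VN=3, Mode=3 (client).
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, NTP_PORT))
        data, _ = sock.recvfrom(512)
    # Transmit Timestamp seconds live at byte offset 40 of the reply.
    seconds = struct.unpack("!I", data[40:44])[0]
    return seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    t = sntp_time()
    print(time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(t)))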
Jan 29 11:08:35.519466 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:08:35.520336 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:08:35.521007 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:08:35.521769 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:08:35.521799 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:08:35.522687 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:08:35.524410 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:08:35.527418 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:08:35.528366 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:08:35.530028 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:08:35.530880 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:08:35.533507 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:08:35.537416 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:08:35.539899 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:08:35.546123 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:08:35.546448 extend-filesystems[1424]: Found loop3 Jan 29 11:08:35.546448 extend-filesystems[1424]: Found loop4 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found loop5 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda1 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda2 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda3 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found usr Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda4 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda6 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda7 Jan 29 11:08:35.548884 extend-filesystems[1424]: Found vda9 Jan 29 11:08:35.548884 extend-filesystems[1424]: Checking size of /dev/vda9 Jan 29 11:08:35.551326 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:08:35.564511 jq[1423]: false Jan 29 11:08:35.551718 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:08:35.554257 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:08:35.565386 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:08:35.566876 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:08:35.570657 dbus-daemon[1422]: [system] SELinux support is enabled Jan 29 11:08:35.572569 extend-filesystems[1424]: Resized partition /dev/vda9 Jan 29 11:08:35.579703 jq[1437]: true Jan 29 11:08:35.573567 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:08:35.576605 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 29 11:08:35.578314 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:08:35.578612 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:08:35.578754 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:08:35.579940 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:08:35.580086 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:08:35.584377 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:08:35.595652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1347) Jan 29 11:08:35.595710 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 29 11:08:35.595151 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:08:35.595177 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:08:35.596911 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:08:35.596936 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:08:35.607314 jq[1446]: true Jan 29 11:08:35.606882 (ntainerd)[1454]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:08:35.620273 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 29 11:08:35.634100 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:08:35.634100 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 29 11:08:35.634100 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 29 11:08:35.639199 extend-filesystems[1424]: Resized filesystem in /dev/vda9 Jan 29 11:08:35.634738 systemd-logind[1429]: Watching system buttons on /dev/input/event0 (Power Button) Jan 29 11:08:35.642157 update_engine[1432]: I20250129 11:08:35.641088 1432 main.cc:92] Flatcar Update Engine starting Jan 29 11:08:35.634928 systemd-logind[1429]: New seat seat0. Jan 29 11:08:35.638121 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:08:35.638330 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:08:35.640626 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:08:35.649994 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:08:35.652338 update_engine[1432]: I20250129 11:08:35.650960 1432 update_check_scheduler.cc:74] Next update check in 2m46s Jan 29 11:08:35.660648 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:08:35.698000 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:08:35.699311 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:08:35.701968 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
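The EXT4-fs and extend-filesystems messages above record an online resize of /dev/vda9 from 553472 to 1864699 blocks at the reported 4k block size. A quick back-of-the-envelope conversion of those figures, using the numbers exactly as logged:

BLOCK_SIZE = 4096          # "(4k) blocks" as reported by EXT4-fs above
OLD_BLOCKS = 553_472       # filesystem size before the resize
NEW_BLOCKS = 1_864_699     # filesystem size after the resize

def to_gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {to_gib(OLD_BLOCKS):.2f} GiB")              # ~2.11 GiB
print(f"after:  {to_gib(NEW_BLOCKS):.2f} GiB")              # ~7.11 GiB
print(f"grown by {to_gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")  # ~5.00 GiB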
Jan 29 11:08:35.706078 locksmithd[1469]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:08:35.791687 containerd[1454]: time="2025-01-29T11:08:35.791599240Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:08:35.816446 containerd[1454]: time="2025-01-29T11:08:35.816398080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818112 containerd[1454]: time="2025-01-29T11:08:35.818065920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818140 containerd[1454]: time="2025-01-29T11:08:35.818111600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:08:35.818140 containerd[1454]: time="2025-01-29T11:08:35.818130240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:08:35.818339 containerd[1454]: time="2025-01-29T11:08:35.818314800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:08:35.818370 containerd[1454]: time="2025-01-29T11:08:35.818345040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818422 containerd[1454]: time="2025-01-29T11:08:35.818404520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818445 containerd[1454]: time="2025-01-29T11:08:35.818420200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818619 containerd[1454]: time="2025-01-29T11:08:35.818598240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818657 containerd[1454]: time="2025-01-29T11:08:35.818618120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818657 containerd[1454]: time="2025-01-29T11:08:35.818631920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818657 containerd[1454]: time="2025-01-29T11:08:35.818640760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818726 containerd[1454]: time="2025-01-29T11:08:35.818708240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.818923 containerd[1454]: time="2025-01-29T11:08:35.818903040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:08:35.819016 containerd[1454]: time="2025-01-29T11:08:35.818998320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:08:35.819043 containerd[1454]: time="2025-01-29T11:08:35.819016040Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:08:35.820278 containerd[1454]: time="2025-01-29T11:08:35.819084400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:08:35.820278 containerd[1454]: time="2025-01-29T11:08:35.819142080Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:08:35.829442 containerd[1454]: time="2025-01-29T11:08:35.829365680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:08:35.829442 containerd[1454]: time="2025-01-29T11:08:35.829427680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:08:35.829442 containerd[1454]: time="2025-01-29T11:08:35.829443280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:08:35.829529 containerd[1454]: time="2025-01-29T11:08:35.829458760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:08:35.829529 containerd[1454]: time="2025-01-29T11:08:35.829481200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:08:35.829674 containerd[1454]: time="2025-01-29T11:08:35.829649280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:08:35.829981 containerd[1454]: time="2025-01-29T11:08:35.829949800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:08:35.830136 containerd[1454]: time="2025-01-29T11:08:35.830115640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:08:35.830163 containerd[1454]: time="2025-01-29T11:08:35.830140200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:08:35.830163 containerd[1454]: time="2025-01-29T11:08:35.830155360Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:08:35.830208 containerd[1454]: time="2025-01-29T11:08:35.830169600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830208 containerd[1454]: time="2025-01-29T11:08:35.830183560Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830208 containerd[1454]: time="2025-01-29T11:08:35.830195960Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830281 containerd[1454]: time="2025-01-29T11:08:35.830209520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830281 containerd[1454]: time="2025-01-29T11:08:35.830230440Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 11:08:35.830281 containerd[1454]: time="2025-01-29T11:08:35.830264720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830281 containerd[1454]: time="2025-01-29T11:08:35.830280280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830350 containerd[1454]: time="2025-01-29T11:08:35.830293800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:08:35.830350 containerd[1454]: time="2025-01-29T11:08:35.830314200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830350 containerd[1454]: time="2025-01-29T11:08:35.830327400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830350 containerd[1454]: time="2025-01-29T11:08:35.830339160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830352080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830364520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830379280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830391000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830402800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830429 containerd[1454]: time="2025-01-29T11:08:35.830415840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830430960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830443360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830454920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830466720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830481960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830502720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830516200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 11:08:35.830531 containerd[1454]: time="2025-01-29T11:08:35.830526560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:08:35.830831 containerd[1454]: time="2025-01-29T11:08:35.830812840Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:08:35.830857 containerd[1454]: time="2025-01-29T11:08:35.830837120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:08:35.830857 containerd[1454]: time="2025-01-29T11:08:35.830849000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:08:35.830894 containerd[1454]: time="2025-01-29T11:08:35.830860920Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:08:35.830894 containerd[1454]: time="2025-01-29T11:08:35.830870040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.830894 containerd[1454]: time="2025-01-29T11:08:35.830882440Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:08:35.830894 containerd[1454]: time="2025-01-29T11:08:35.830892720Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:08:35.830977 containerd[1454]: time="2025-01-29T11:08:35.830904160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:08:35.831255 containerd[1454]: time="2025-01-29T11:08:35.831198640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:08:35.831386 containerd[1454]: time="2025-01-29T11:08:35.831264560Z" level=info msg="Connect containerd service" Jan 29 11:08:35.831386 containerd[1454]: time="2025-01-29T11:08:35.831300160Z" level=info msg="using legacy CRI server" Jan 29 11:08:35.831386 containerd[1454]: time="2025-01-29T11:08:35.831306920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:08:35.831991 containerd[1454]: time="2025-01-29T11:08:35.831553960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:08:35.832228 containerd[1454]: time="2025-01-29T11:08:35.832202040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:08:35.832486 containerd[1454]: time="2025-01-29T11:08:35.832454840Z" level=info msg="Start subscribing containerd event" Jan 29 11:08:35.832517 containerd[1454]: time="2025-01-29T11:08:35.832506920Z" level=info msg="Start recovering state" Jan 29 11:08:35.832584 containerd[1454]: time="2025-01-29T11:08:35.832570000Z" level=info msg="Start event monitor" Jan 29 11:08:35.832610 containerd[1454]: time="2025-01-29T11:08:35.832585600Z" level=info msg="Start snapshots syncer" Jan 29 11:08:35.832610 containerd[1454]: time="2025-01-29T11:08:35.832597400Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:08:35.832610 containerd[1454]: time="2025-01-29T11:08:35.832605200Z" level=info msg="Start streaming server" Jan 29 11:08:35.832857 containerd[1454]: time="2025-01-29T11:08:35.832838200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:08:35.832920 containerd[1454]: time="2025-01-29T11:08:35.832907680Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:08:35.832974 containerd[1454]: time="2025-01-29T11:08:35.832962600Z" level=info msg="containerd successfully booted in 0.042429s" Jan 29 11:08:35.833137 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:08:36.137484 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:08:36.156546 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:08:36.165638 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:08:36.171328 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:08:36.173335 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:08:36.175834 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Jan 29 11:08:36.189334 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:08:36.199559 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:08:36.201573 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 29 11:08:36.202625 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:08:37.101378 systemd-networkd[1391]: eth0: Gained IPv6LL Jan 29 11:08:37.104036 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:08:37.105537 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:08:37.116501 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:08:37.118664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:37.120498 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:08:37.136063 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:08:37.137463 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:08:37.138814 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:08:37.144307 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:08:37.632948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:37.634186 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:08:37.636809 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:08:37.638340 systemd[1]: Startup finished in 538ms (kernel) + 4.165s (initrd) + 3.801s (userspace) = 8.505s. Jan 29 11:08:37.653749 agetty[1502]: failed to open credentials directory Jan 29 11:08:37.653765 agetty[1503]: failed to open credentials directory Jan 29 11:08:38.124053 kubelet[1526]: E0129 11:08:38.123891 1526 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:08:38.126439 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:08:38.126593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:08:42.309638 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:08:42.311071 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:37840.service - OpenSSH per-connection server daemon (10.0.0.1:37840). Jan 29 11:08:42.410979 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 37840 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:42.413099 sshd-session[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:42.425400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:08:42.433699 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:08:42.436021 systemd-logind[1429]: New session 1 of user core. Jan 29 11:08:42.444152 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:08:42.455649 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 29 11:08:42.458375 (systemd)[1544]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:08:42.530320 systemd[1544]: Queued start job for default target default.target. Jan 29 11:08:42.541202 systemd[1544]: Created slice app.slice - User Application Slice. Jan 29 11:08:42.541396 systemd[1544]: Reached target paths.target - Paths. Jan 29 11:08:42.541422 systemd[1544]: Reached target timers.target - Timers. Jan 29 11:08:42.542777 systemd[1544]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:08:42.553204 systemd[1544]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:08:42.553292 systemd[1544]: Reached target sockets.target - Sockets. Jan 29 11:08:42.553306 systemd[1544]: Reached target basic.target - Basic System. Jan 29 11:08:42.553350 systemd[1544]: Reached target default.target - Main User Target. Jan 29 11:08:42.553383 systemd[1544]: Startup finished in 89ms. Jan 29 11:08:42.554029 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:08:42.557102 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:08:42.622529 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:35738.service - OpenSSH per-connection server daemon (10.0.0.1:35738). Jan 29 11:08:42.663453 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 35738 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:42.664668 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:42.669818 systemd-logind[1429]: New session 2 of user core. Jan 29 11:08:42.680389 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:08:42.732193 sshd[1557]: Connection closed by 10.0.0.1 port 35738 Jan 29 11:08:42.732826 sshd-session[1555]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:42.742662 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:35738.service: Deactivated successfully. Jan 29 11:08:42.744196 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:08:42.746391 systemd-logind[1429]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:08:42.758635 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742). Jan 29 11:08:42.760830 systemd-logind[1429]: Removed session 2. Jan 29 11:08:42.797220 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:42.798794 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:42.803085 systemd-logind[1429]: New session 3 of user core. Jan 29 11:08:42.810433 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:08:42.860109 sshd[1564]: Connection closed by 10.0.0.1 port 35742 Jan 29 11:08:42.860462 sshd-session[1562]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:42.866598 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:35742.service: Deactivated successfully. Jan 29 11:08:42.868059 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:08:42.869520 systemd-logind[1429]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:08:42.884544 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:35750.service - OpenSSH per-connection server daemon (10.0.0.1:35750). Jan 29 11:08:42.885735 systemd-logind[1429]: Removed session 3. 
Jan 29 11:08:42.922888 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 35750 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:42.924094 sshd-session[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:42.928313 systemd-logind[1429]: New session 4 of user core. Jan 29 11:08:42.935475 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:08:42.987264 sshd[1571]: Connection closed by 10.0.0.1 port 35750 Jan 29 11:08:42.987763 sshd-session[1569]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:43.000708 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:35750.service: Deactivated successfully. Jan 29 11:08:43.002275 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:08:43.004500 systemd-logind[1429]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:08:43.005772 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:35762.service - OpenSSH per-connection server daemon (10.0.0.1:35762). Jan 29 11:08:43.006496 systemd-logind[1429]: Removed session 4. Jan 29 11:08:43.047995 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 35762 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:43.049296 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:43.052952 systemd-logind[1429]: New session 5 of user core. Jan 29 11:08:43.059407 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:08:43.130460 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:08:43.130757 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:08:43.145228 sudo[1579]: pam_unix(sudo:session): session closed for user root Jan 29 11:08:43.147708 sshd[1578]: Connection closed by 10.0.0.1 port 35762 Jan 29 11:08:43.148319 sshd-session[1576]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:43.163682 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:35762.service: Deactivated successfully. Jan 29 11:08:43.165206 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:08:43.167446 systemd-logind[1429]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:08:43.177615 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:35768.service - OpenSSH per-connection server daemon (10.0.0.1:35768). Jan 29 11:08:43.179974 systemd-logind[1429]: Removed session 5. Jan 29 11:08:43.216717 sshd[1584]: Accepted publickey for core from 10.0.0.1 port 35768 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:43.218100 sshd-session[1584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:43.222462 systemd-logind[1429]: New session 6 of user core. Jan 29 11:08:43.235440 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 29 11:08:43.286365 sudo[1588]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:08:43.286652 sudo[1588]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:08:43.289779 sudo[1588]: pam_unix(sudo:session): session closed for user root Jan 29 11:08:43.294477 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:08:43.294741 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:08:43.309686 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:08:43.332995 augenrules[1610]: No rules Jan 29 11:08:43.333601 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:08:43.333793 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:08:43.335838 sudo[1587]: pam_unix(sudo:session): session closed for user root Jan 29 11:08:43.336997 sshd[1586]: Connection closed by 10.0.0.1 port 35768 Jan 29 11:08:43.337384 sshd-session[1584]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:43.349725 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:35768.service: Deactivated successfully. Jan 29 11:08:43.351133 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:08:43.352317 systemd-logind[1429]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:08:43.359516 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:35780.service - OpenSSH per-connection server daemon (10.0.0.1:35780). Jan 29 11:08:43.360397 systemd-logind[1429]: Removed session 6. Jan 29 11:08:43.405560 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 35780 ssh2: RSA SHA256:sI8AI+xrXTR4mEEiShqASd95E8XwBKuT8obfVDWOoDw Jan 29 11:08:43.406899 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:08:43.411333 systemd-logind[1429]: New session 7 of user core. Jan 29 11:08:43.425428 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:08:43.478564 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:08:43.478838 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:08:43.499568 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 29 11:08:43.518498 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:08:43.518717 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 29 11:08:44.061787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:44.076508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:44.095590 systemd[1]: Reloading requested from client PID 1670 ('systemctl') (unit session-7.scope)... Jan 29 11:08:44.095606 systemd[1]: Reloading... Jan 29 11:08:44.180279 zram_generator::config[1711]: No configuration found. Jan 29 11:08:44.364513 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:08:44.424539 systemd[1]: Reloading finished in 328 ms. Jan 29 11:08:44.464059 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:44.466459 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 29 11:08:44.466649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:44.468102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:08:44.582068 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:08:44.585900 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:08:44.622270 kubelet[1755]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:08:44.622270 kubelet[1755]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:08:44.622270 kubelet[1755]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:08:44.623131 kubelet[1755]: I0129 11:08:44.623072 1755 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:08:45.252615 kubelet[1755]: I0129 11:08:45.252571 1755 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 11:08:45.252615 kubelet[1755]: I0129 11:08:45.252603 1755 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:08:45.252830 kubelet[1755]: I0129 11:08:45.252815 1755 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 11:08:45.286766 kubelet[1755]: I0129 11:08:45.286727 1755 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:08:45.299135 kubelet[1755]: I0129 11:08:45.299096 1755 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:08:45.300345 kubelet[1755]: I0129 11:08:45.300296 1755 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:08:45.300547 kubelet[1755]: I0129 11:08:45.300344 1755 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.133","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 11:08:45.300667 kubelet[1755]: I0129 11:08:45.300599 1755 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:08:45.300667 kubelet[1755]: I0129 11:08:45.300609 1755 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 11:08:45.300898 kubelet[1755]: I0129 11:08:45.300860 1755 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:08:45.301721 kubelet[1755]: I0129 11:08:45.301684 1755 kubelet.go:400] "Attempting to sync node with API server" Jan 29 11:08:45.301721 kubelet[1755]: I0129 11:08:45.301708 1755 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:08:45.302004 kubelet[1755]: I0129 11:08:45.301982 1755 kubelet.go:312] "Adding apiserver pod source" Jan 29 11:08:45.302149 kubelet[1755]: I0129 11:08:45.302133 1755 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:08:45.302949 kubelet[1755]: E0129 11:08:45.302438 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:45.302949 kubelet[1755]: E0129 11:08:45.302922 1755 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:45.305185 kubelet[1755]: I0129 11:08:45.305163 1755 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:08:45.305565 kubelet[1755]: I0129 11:08:45.305551 1755 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:08:45.305667 kubelet[1755]: W0129 11:08:45.305654 1755 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:08:45.306598 kubelet[1755]: I0129 11:08:45.306494 1755 server.go:1264] "Started kubelet" Jan 29 11:08:45.307319 kubelet[1755]: I0129 11:08:45.307147 1755 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:08:45.307541 kubelet[1755]: I0129 11:08:45.307465 1755 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:08:45.307541 kubelet[1755]: I0129 11:08:45.307513 1755 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:08:45.308998 kubelet[1755]: I0129 11:08:45.307914 1755 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:08:45.308998 kubelet[1755]: I0129 11:08:45.308754 1755 server.go:455] "Adding debug handlers to kubelet server" Jan 29 11:08:45.308998 kubelet[1755]: I0129 11:08:45.308811 1755 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 11:08:45.308998 kubelet[1755]: I0129 11:08:45.308910 1755 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:08:45.309970 kubelet[1755]: I0129 11:08:45.309954 1755 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:08:45.311330 kubelet[1755]: E0129 11:08:45.310999 1755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.133.181f253d0b9e6d1e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.133,UID:10.0.0.133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.133,},FirstTimestamp:2025-01-29 11:08:45.306465566 +0000 UTC m=+0.717527401,LastTimestamp:2025-01-29 11:08:45.306465566 +0000 UTC m=+0.717527401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.133,}" Jan 29 11:08:45.311553 kubelet[1755]: W0129 11:08:45.311528 1755 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.133" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 11:08:45.311644 kubelet[1755]: E0129 11:08:45.311630 1755 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.133" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 11:08:45.311813 kubelet[1755]: W0129 11:08:45.311796 1755 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 11:08:45.311898 kubelet[1755]: E0129 11:08:45.311886 1755 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 11:08:45.314069 kubelet[1755]: E0129 11:08:45.314037 1755 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.133\" is forbidden: User \"system:anonymous\" cannot get 
resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 29 11:08:45.314290 kubelet[1755]: W0129 11:08:45.314265 1755 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 11:08:45.314352 kubelet[1755]: E0129 11:08:45.314294 1755 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 29 11:08:45.315295 kubelet[1755]: I0129 11:08:45.315274 1755 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:08:45.316223 kubelet[1755]: I0129 11:08:45.315617 1755 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:08:45.316223 kubelet[1755]: E0129 11:08:45.315856 1755 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:08:45.316845 kubelet[1755]: E0129 11:08:45.316747 1755 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.133.181f253d0c2d8c3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.133,UID:10.0.0.133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.133,},FirstTimestamp:2025-01-29 11:08:45.315845182 +0000 UTC m=+0.726906977,LastTimestamp:2025-01-29 11:08:45.315845182 +0000 UTC m=+0.726906977,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.133,}" Jan 29 11:08:45.317871 kubelet[1755]: I0129 11:08:45.317822 1755 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:08:45.331656 kubelet[1755]: I0129 11:08:45.331621 1755 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:08:45.331656 kubelet[1755]: I0129 11:08:45.331646 1755 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:08:45.331656 kubelet[1755]: I0129 11:08:45.331663 1755 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:08:45.396547 kubelet[1755]: I0129 11:08:45.396510 1755 policy_none.go:49] "None policy: Start" Jan 29 11:08:45.397815 kubelet[1755]: I0129 11:08:45.397373 1755 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:08:45.397815 kubelet[1755]: I0129 11:08:45.397517 1755 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:08:45.404495 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 29 11:08:45.410940 kubelet[1755]: I0129 11:08:45.410254 1755 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.133" Jan 29 11:08:45.415080 kubelet[1755]: I0129 11:08:45.415042 1755 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.133" Jan 29 11:08:45.417119 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:08:45.419986 kubelet[1755]: I0129 11:08:45.419842 1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:08:45.421291 kubelet[1755]: I0129 11:08:45.421122 1755 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:08:45.421291 kubelet[1755]: I0129 11:08:45.421220 1755 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:08:45.421291 kubelet[1755]: I0129 11:08:45.421235 1755 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 11:08:45.421416 kubelet[1755]: E0129 11:08:45.421330 1755 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:08:45.426889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:08:45.427943 kubelet[1755]: I0129 11:08:45.427912 1755 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:08:45.428343 kubelet[1755]: I0129 11:08:45.428132 1755 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:08:45.428343 kubelet[1755]: I0129 11:08:45.428282 1755 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:08:45.428809 kubelet[1755]: E0129 11:08:45.428785 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:45.430284 kubelet[1755]: E0129 11:08:45.430259 1755 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.133\" not found" Jan 29 11:08:45.529925 kubelet[1755]: E0129 11:08:45.529810 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:45.602392 sudo[1621]: pam_unix(sudo:session): session closed for user root Jan 29 11:08:45.603524 sshd[1620]: Connection closed by 10.0.0.1 port 35780 Jan 29 11:08:45.603901 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Jan 29 11:08:45.606478 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:35780.service: Deactivated successfully. Jan 29 11:08:45.608718 systemd-logind[1429]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:08:45.608833 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:08:45.609686 systemd-logind[1429]: Removed session 7. 
Jan 29 11:08:45.630698 kubelet[1755]: E0129 11:08:45.630656 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:45.731445 kubelet[1755]: E0129 11:08:45.731402 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:45.832564 kubelet[1755]: E0129 11:08:45.832458 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:45.933548 kubelet[1755]: E0129 11:08:45.933509 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.034317 kubelet[1755]: E0129 11:08:46.034282 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.135367 kubelet[1755]: E0129 11:08:46.135278 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.236228 kubelet[1755]: E0129 11:08:46.236190 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.255553 kubelet[1755]: I0129 11:08:46.255514 1755 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:08:46.255698 kubelet[1755]: W0129 11:08:46.255674 1755 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:08:46.255698 kubelet[1755]: W0129 11:08:46.255677 1755 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:08:46.303139 kubelet[1755]: E0129 11:08:46.303108 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:46.336731 kubelet[1755]: E0129 11:08:46.336692 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.437114 kubelet[1755]: E0129 11:08:46.436993 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.537794 kubelet[1755]: E0129 11:08:46.537758 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.638509 kubelet[1755]: E0129 11:08:46.638475 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.739407 kubelet[1755]: E0129 11:08:46.739315 1755 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.133\" not found" Jan 29 11:08:46.840277 kubelet[1755]: I0129 11:08:46.840212 1755 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:08:46.840599 containerd[1454]: time="2025-01-29T11:08:46.840565783Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:08:46.841143 kubelet[1755]: I0129 11:08:46.840759 1755 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:08:47.304460 kubelet[1755]: I0129 11:08:47.304417 1755 apiserver.go:52] "Watching apiserver" Jan 29 11:08:47.304574 kubelet[1755]: E0129 11:08:47.304422 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:47.319546 kubelet[1755]: I0129 11:08:47.319494 1755 topology_manager.go:215] "Topology Admit Handler" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" podNamespace="kube-system" podName="cilium-xszqq" Jan 29 11:08:47.319762 kubelet[1755]: I0129 11:08:47.319730 1755 topology_manager.go:215] "Topology Admit Handler" podUID="ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7" podNamespace="kube-system" podName="kube-proxy-ddjfq" Jan 29 11:08:47.328977 systemd[1]: Created slice kubepods-burstable-pod76364d0f_3115_4fc7_9bd3_a3a1937d0465.slice - libcontainer container kubepods-burstable-pod76364d0f_3115_4fc7_9bd3_a3a1937d0465.slice. Jan 29 11:08:47.338927 systemd[1]: Created slice kubepods-besteffort-podee7fb9bd_94fd_43c0_bcca_1aba09e4dfa7.slice - libcontainer container kubepods-besteffort-podee7fb9bd_94fd_43c0_bcca_1aba09e4dfa7.slice. Jan 29 11:08:47.409440 kubelet[1755]: I0129 11:08:47.409408 1755 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:08:47.423457 kubelet[1755]: I0129 11:08:47.423431 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhq9t\" (UniqueName: \"kubernetes.io/projected/ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7-kube-api-access-fhq9t\") pod \"kube-proxy-ddjfq\" (UID: \"ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7\") " pod="kube-system/kube-proxy-ddjfq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423470 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-cgroup\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423501 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-etc-cni-netd\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423520 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-lib-modules\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423535 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-xtables-lock\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423550 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-kernel\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423640 kubelet[1755]: I0129 11:08:47.423566 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7-kube-proxy\") pod \"kube-proxy-ddjfq\" (UID: \"ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7\") " pod="kube-system/kube-proxy-ddjfq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423581 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7-lib-modules\") pod \"kube-proxy-ddjfq\" (UID: \"ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7\") " pod="kube-system/kube-proxy-ddjfq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423605 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-run\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423627 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-net\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423642 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hubble-tls\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423658 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7-xtables-lock\") pod \"kube-proxy-ddjfq\" (UID: \"ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7\") " pod="kube-system/kube-proxy-ddjfq" Jan 29 11:08:47.423776 kubelet[1755]: I0129 11:08:47.423674 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cni-path\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423901 kubelet[1755]: I0129 11:08:47.423689 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8fn\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-kube-api-access-4t8fn\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423901 kubelet[1755]: I0129 11:08:47.423704 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-bpf-maps\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 
29 11:08:47.423901 kubelet[1755]: I0129 11:08:47.423717 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hostproc\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423901 kubelet[1755]: I0129 11:08:47.423733 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76364d0f-3115-4fc7-9bd3-a3a1937d0465-clustermesh-secrets\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.423901 kubelet[1755]: I0129 11:08:47.423747 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-config-path\") pod \"cilium-xszqq\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " pod="kube-system/cilium-xszqq" Jan 29 11:08:47.637157 kubelet[1755]: E0129 11:08:47.637008 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:47.639569 containerd[1454]: time="2025-01-29T11:08:47.639522980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xszqq,Uid:76364d0f-3115-4fc7-9bd3-a3a1937d0465,Namespace:kube-system,Attempt:0,}" Jan 29 11:08:47.648975 kubelet[1755]: E0129 11:08:47.648941 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:47.649482 containerd[1454]: time="2025-01-29T11:08:47.649436129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddjfq,Uid:ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7,Namespace:kube-system,Attempt:0,}" Jan 29 11:08:48.241557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount645664879.mount: Deactivated successfully. 
Jan 29 11:08:48.247183 containerd[1454]: time="2025-01-29T11:08:48.246914125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:08:48.248349 containerd[1454]: time="2025-01-29T11:08:48.248300791Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 29 11:08:48.249873 containerd[1454]: time="2025-01-29T11:08:48.249835917Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:08:48.250680 containerd[1454]: time="2025-01-29T11:08:48.250655135Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:08:48.250906 containerd[1454]: time="2025-01-29T11:08:48.250874450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:08:48.253085 containerd[1454]: time="2025-01-29T11:08:48.253029522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:08:48.255443 containerd[1454]: time="2025-01-29T11:08:48.255368562Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 615.74763ms" Jan 29 11:08:48.256112 containerd[1454]: time="2025-01-29T11:08:48.256085715Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 606.567824ms" Jan 29 11:08:48.304825 kubelet[1755]: E0129 11:08:48.304773 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:48.379078 containerd[1454]: time="2025-01-29T11:08:48.378762356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:08:48.379078 containerd[1454]: time="2025-01-29T11:08:48.378833774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:08:48.379078 containerd[1454]: time="2025-01-29T11:08:48.378848917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:08:48.379078 containerd[1454]: time="2025-01-29T11:08:48.378926681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:08:48.379362 containerd[1454]: time="2025-01-29T11:08:48.379114545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:08:48.379362 containerd[1454]: time="2025-01-29T11:08:48.379173110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:08:48.379362 containerd[1454]: time="2025-01-29T11:08:48.379189136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:08:48.379362 containerd[1454]: time="2025-01-29T11:08:48.379296143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:08:48.492445 systemd[1]: Started cri-containerd-aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38.scope - libcontainer container aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38. Jan 29 11:08:48.493888 systemd[1]: Started cri-containerd-f9eca8b5c253ba913a0c7466191e32a461a019ee791b0cf260d9943be8d67ebf.scope - libcontainer container f9eca8b5c253ba913a0c7466191e32a461a019ee791b0cf260d9943be8d67ebf. Jan 29 11:08:48.512605 containerd[1454]: time="2025-01-29T11:08:48.512561165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xszqq,Uid:76364d0f-3115-4fc7-9bd3-a3a1937d0465,Namespace:kube-system,Attempt:0,} returns sandbox id \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\"" Jan 29 11:08:48.513812 kubelet[1755]: E0129 11:08:48.513788 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:48.515099 containerd[1454]: time="2025-01-29T11:08:48.515068588Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 29 11:08:48.519106 containerd[1454]: time="2025-01-29T11:08:48.518887482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ddjfq,Uid:ee7fb9bd-94fd-43c0-bcca-1aba09e4dfa7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f9eca8b5c253ba913a0c7466191e32a461a019ee791b0cf260d9943be8d67ebf\"" Jan 29 11:08:48.519763 kubelet[1755]: E0129 11:08:48.519690 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:49.305340 kubelet[1755]: E0129 11:08:49.305302 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:50.305968 kubelet[1755]: E0129 11:08:50.305865 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:50.906303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794439972.mount: Deactivated successfully. 
Jan 29 11:08:51.306528 kubelet[1755]: E0129 11:08:51.306403 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:52.166738 containerd[1454]: time="2025-01-29T11:08:52.166686224Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:08:52.167724 containerd[1454]: time="2025-01-29T11:08:52.167479544Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 29 11:08:52.168604 containerd[1454]: time="2025-01-29T11:08:52.168530374Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:08:52.170230 containerd[1454]: time="2025-01-29T11:08:52.170197232Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 3.655086961s" Jan 29 11:08:52.170230 containerd[1454]: time="2025-01-29T11:08:52.170233228Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 29 11:08:52.172229 containerd[1454]: time="2025-01-29T11:08:52.172133840Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 11:08:52.173307 containerd[1454]: time="2025-01-29T11:08:52.173272273Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:08:52.185721 containerd[1454]: time="2025-01-29T11:08:52.185640336Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\"" Jan 29 11:08:52.186624 containerd[1454]: time="2025-01-29T11:08:52.186567006Z" level=info msg="StartContainer for \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\"" Jan 29 11:08:52.218438 systemd[1]: Started cri-containerd-65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf.scope - libcontainer container 65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf. Jan 29 11:08:52.238571 containerd[1454]: time="2025-01-29T11:08:52.238530375Z" level=info msg="StartContainer for \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\" returns successfully" Jan 29 11:08:52.298301 systemd[1]: cri-containerd-65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf.scope: Deactivated successfully. 
Jan 29 11:08:52.307160 kubelet[1755]: E0129 11:08:52.307098 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:52.315783 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf-rootfs.mount: Deactivated successfully. Jan 29 11:08:52.394308 containerd[1454]: time="2025-01-29T11:08:52.394205229Z" level=info msg="shim disconnected" id=65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf namespace=k8s.io Jan 29 11:08:52.394308 containerd[1454]: time="2025-01-29T11:08:52.394277502Z" level=warning msg="cleaning up after shim disconnected" id=65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf namespace=k8s.io Jan 29 11:08:52.394308 containerd[1454]: time="2025-01-29T11:08:52.394287615Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:08:52.442833 kubelet[1755]: E0129 11:08:52.442590 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:52.445141 containerd[1454]: time="2025-01-29T11:08:52.445062068Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:08:52.455351 containerd[1454]: time="2025-01-29T11:08:52.455313342Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\"" Jan 29 11:08:52.455729 containerd[1454]: time="2025-01-29T11:08:52.455689636Z" level=info msg="StartContainer for \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\"" Jan 29 11:08:52.479409 systemd[1]: Started cri-containerd-c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c.scope - libcontainer container c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c. Jan 29 11:08:52.499297 containerd[1454]: time="2025-01-29T11:08:52.499229910Z" level=info msg="StartContainer for \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\" returns successfully" Jan 29 11:08:52.516336 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:08:52.516826 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:08:52.516931 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:08:52.524684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:08:52.524891 systemd[1]: cri-containerd-c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c.scope: Deactivated successfully. Jan 29 11:08:52.535869 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 29 11:08:52.539656 containerd[1454]: time="2025-01-29T11:08:52.539601561Z" level=info msg="shim disconnected" id=c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c namespace=k8s.io Jan 29 11:08:52.539656 containerd[1454]: time="2025-01-29T11:08:52.539657221Z" level=warning msg="cleaning up after shim disconnected" id=c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c namespace=k8s.io Jan 29 11:08:52.539789 containerd[1454]: time="2025-01-29T11:08:52.539665447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:08:53.308024 kubelet[1755]: E0129 11:08:53.307946 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:53.445749 kubelet[1755]: E0129 11:08:53.445568 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:53.448791 containerd[1454]: time="2025-01-29T11:08:53.448742665Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:08:53.470695 containerd[1454]: time="2025-01-29T11:08:53.470624945Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\"" Jan 29 11:08:53.471381 containerd[1454]: time="2025-01-29T11:08:53.471043010Z" level=info msg="StartContainer for \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\"" Jan 29 11:08:53.475225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593706634.mount: Deactivated successfully. Jan 29 11:08:53.498469 systemd[1]: Started cri-containerd-16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f.scope - libcontainer container 16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f. Jan 29 11:08:53.529584 containerd[1454]: time="2025-01-29T11:08:53.529532757Z" level=info msg="StartContainer for \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\" returns successfully" Jan 29 11:08:53.545060 systemd[1]: cri-containerd-16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f.scope: Deactivated successfully. 
Jan 29 11:08:53.648425 containerd[1454]: time="2025-01-29T11:08:53.648116384Z" level=info msg="shim disconnected" id=16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f namespace=k8s.io Jan 29 11:08:53.648425 containerd[1454]: time="2025-01-29T11:08:53.648177409Z" level=warning msg="cleaning up after shim disconnected" id=16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f namespace=k8s.io Jan 29 11:08:53.648425 containerd[1454]: time="2025-01-29T11:08:53.648186476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:08:53.746595 containerd[1454]: time="2025-01-29T11:08:53.746539099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:08:53.747359 containerd[1454]: time="2025-01-29T11:08:53.747310071Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 29 11:08:53.748001 containerd[1454]: time="2025-01-29T11:08:53.747961281Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:08:53.750083 containerd[1454]: time="2025-01-29T11:08:53.750047312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:08:53.750955 containerd[1454]: time="2025-01-29T11:08:53.750918307Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.578754251s" Jan 29 11:08:53.750997 containerd[1454]: time="2025-01-29T11:08:53.750952932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 29 11:08:53.753089 containerd[1454]: time="2025-01-29T11:08:53.753060267Z" level=info msg="CreateContainer within sandbox \"f9eca8b5c253ba913a0c7466191e32a461a019ee791b0cf260d9943be8d67ebf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:08:53.764625 containerd[1454]: time="2025-01-29T11:08:53.764575063Z" level=info msg="CreateContainer within sandbox \"f9eca8b5c253ba913a0c7466191e32a461a019ee791b0cf260d9943be8d67ebf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b2c4c7a17af64c27504f9bc449a351a62fe8fb679fd11c775c289e228e4c1bf\"" Jan 29 11:08:53.765275 containerd[1454]: time="2025-01-29T11:08:53.765157785Z" level=info msg="StartContainer for \"0b2c4c7a17af64c27504f9bc449a351a62fe8fb679fd11c775c289e228e4c1bf\"" Jan 29 11:08:53.791442 systemd[1]: Started cri-containerd-0b2c4c7a17af64c27504f9bc449a351a62fe8fb679fd11c775c289e228e4c1bf.scope - libcontainer container 0b2c4c7a17af64c27504f9bc449a351a62fe8fb679fd11c775c289e228e4c1bf. 
Jan 29 11:08:53.826944 containerd[1454]: time="2025-01-29T11:08:53.826895679Z" level=info msg="StartContainer for \"0b2c4c7a17af64c27504f9bc449a351a62fe8fb679fd11c775c289e228e4c1bf\" returns successfully" Jan 29 11:08:54.308817 kubelet[1755]: E0129 11:08:54.308758 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:54.449458 kubelet[1755]: E0129 11:08:54.449391 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:54.451722 kubelet[1755]: E0129 11:08:54.451702 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:54.454464 containerd[1454]: time="2025-01-29T11:08:54.454422432Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:08:54.458540 kubelet[1755]: I0129 11:08:54.458481 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ddjfq" podStartSLOduration=4.226901716 podStartE2EDuration="9.458468671s" podCreationTimestamp="2025-01-29 11:08:45 +0000 UTC" firstStartedPulling="2025-01-29 11:08:48.520165736 +0000 UTC m=+3.931227531" lastFinishedPulling="2025-01-29 11:08:53.751732691 +0000 UTC m=+9.162794486" observedRunningTime="2025-01-29 11:08:54.458274119 +0000 UTC m=+9.869335914" watchObservedRunningTime="2025-01-29 11:08:54.458468671 +0000 UTC m=+9.869530426" Jan 29 11:08:54.468148 containerd[1454]: time="2025-01-29T11:08:54.468069907Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\"" Jan 29 11:08:54.468896 containerd[1454]: time="2025-01-29T11:08:54.468590463Z" level=info msg="StartContainer for \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\"" Jan 29 11:08:54.498404 systemd[1]: Started cri-containerd-3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657.scope - libcontainer container 3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657. Jan 29 11:08:54.515913 systemd[1]: cri-containerd-3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657.scope: Deactivated successfully. 
Jan 29 11:08:54.518025 containerd[1454]: time="2025-01-29T11:08:54.517981291Z" level=info msg="StartContainer for \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\" returns successfully" Jan 29 11:08:54.571362 containerd[1454]: time="2025-01-29T11:08:54.571193881Z" level=info msg="shim disconnected" id=3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657 namespace=k8s.io Jan 29 11:08:54.571362 containerd[1454]: time="2025-01-29T11:08:54.571277197Z" level=warning msg="cleaning up after shim disconnected" id=3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657 namespace=k8s.io Jan 29 11:08:54.571362 containerd[1454]: time="2025-01-29T11:08:54.571287346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:08:55.182887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657-rootfs.mount: Deactivated successfully. Jan 29 11:08:55.309340 kubelet[1755]: E0129 11:08:55.309279 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:55.458878 kubelet[1755]: E0129 11:08:55.458471 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:55.458878 kubelet[1755]: E0129 11:08:55.458515 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:55.462769 containerd[1454]: time="2025-01-29T11:08:55.460760231Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:08:55.482050 containerd[1454]: time="2025-01-29T11:08:55.481980194Z" level=info msg="CreateContainer within sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\"" Jan 29 11:08:55.482952 containerd[1454]: time="2025-01-29T11:08:55.482461715Z" level=info msg="StartContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\"" Jan 29 11:08:55.512447 systemd[1]: Started cri-containerd-e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f.scope - libcontainer container e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f. 
Jan 29 11:08:55.534507 containerd[1454]: time="2025-01-29T11:08:55.534392806Z" level=info msg="StartContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" returns successfully" Jan 29 11:08:55.610563 kubelet[1755]: I0129 11:08:55.610524 1755 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 11:08:56.082295 kernel: Initializing XFRM netlink socket Jan 29 11:08:56.310019 kubelet[1755]: E0129 11:08:56.309966 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:56.464065 kubelet[1755]: E0129 11:08:56.464022 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:56.536112 kubelet[1755]: I0129 11:08:56.536053 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xszqq" podStartSLOduration=7.87911558 podStartE2EDuration="11.536034744s" podCreationTimestamp="2025-01-29 11:08:45 +0000 UTC" firstStartedPulling="2025-01-29 11:08:48.514579065 +0000 UTC m=+3.925640820" lastFinishedPulling="2025-01-29 11:08:52.171498189 +0000 UTC m=+7.582559984" observedRunningTime="2025-01-29 11:08:56.47991622 +0000 UTC m=+11.890978015" watchObservedRunningTime="2025-01-29 11:08:56.536034744 +0000 UTC m=+11.947096539" Jan 29 11:08:56.536357 kubelet[1755]: I0129 11:08:56.536337 1755 topology_manager.go:215] "Topology Admit Handler" podUID="06eb9f58-9220-4600-90bc-60fb2369956f" podNamespace="default" podName="nginx-deployment-85f456d6dd-tkrqz" Jan 29 11:08:56.542229 systemd[1]: Created slice kubepods-besteffort-pod06eb9f58_9220_4600_90bc_60fb2369956f.slice - libcontainer container kubepods-besteffort-pod06eb9f58_9220_4600_90bc_60fb2369956f.slice. 
Jan 29 11:08:56.584926 kubelet[1755]: I0129 11:08:56.584181 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4xlb\" (UniqueName: \"kubernetes.io/projected/06eb9f58-9220-4600-90bc-60fb2369956f-kube-api-access-j4xlb\") pod \"nginx-deployment-85f456d6dd-tkrqz\" (UID: \"06eb9f58-9220-4600-90bc-60fb2369956f\") " pod="default/nginx-deployment-85f456d6dd-tkrqz" Jan 29 11:08:56.845455 containerd[1454]: time="2025-01-29T11:08:56.845334439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-tkrqz,Uid:06eb9f58-9220-4600-90bc-60fb2369956f,Namespace:default,Attempt:0,}" Jan 29 11:08:57.310661 kubelet[1755]: E0129 11:08:57.310547 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:57.465591 kubelet[1755]: E0129 11:08:57.465556 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:57.708101 systemd-networkd[1391]: cilium_host: Link UP Jan 29 11:08:57.708222 systemd-networkd[1391]: cilium_net: Link UP Jan 29 11:08:57.708383 systemd-networkd[1391]: cilium_net: Gained carrier Jan 29 11:08:57.708522 systemd-networkd[1391]: cilium_host: Gained carrier Jan 29 11:08:57.784603 systemd-networkd[1391]: cilium_vxlan: Link UP Jan 29 11:08:57.784611 systemd-networkd[1391]: cilium_vxlan: Gained carrier Jan 29 11:08:57.836381 systemd-networkd[1391]: cilium_net: Gained IPv6LL Jan 29 11:08:58.052387 systemd-networkd[1391]: cilium_host: Gained IPv6LL Jan 29 11:08:58.103310 kernel: NET: Registered PF_ALG protocol family Jan 29 11:08:58.311512 kubelet[1755]: E0129 11:08:58.311397 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:58.467310 kubelet[1755]: E0129 11:08:58.467238 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:58.669820 systemd-networkd[1391]: lxc_health: Link UP Jan 29 11:08:58.678444 systemd-networkd[1391]: lxc_health: Gained carrier Jan 29 11:08:58.907273 kernel: eth0: renamed from tmp329a1 Jan 29 11:08:58.913103 systemd-networkd[1391]: lxc423fabf61b7b: Link UP Jan 29 11:08:58.914159 systemd-networkd[1391]: lxc423fabf61b7b: Gained carrier Jan 29 11:08:59.312562 kubelet[1755]: E0129 11:08:59.312515 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:08:59.640159 kubelet[1755]: E0129 11:08:59.639468 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:08:59.756430 systemd-networkd[1391]: cilium_vxlan: Gained IPv6LL Jan 29 11:08:59.948515 systemd-networkd[1391]: lxc423fabf61b7b: Gained IPv6LL Jan 29 11:09:00.312854 kubelet[1755]: E0129 11:09:00.312625 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:00.460410 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 29 11:09:00.469862 kubelet[1755]: E0129 11:09:00.469829 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jan 29 11:09:01.313368 kubelet[1755]: E0129 11:09:01.313321 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:01.471337 kubelet[1755]: E0129 11:09:01.471310 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:02.314486 kubelet[1755]: E0129 11:09:02.314419 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:02.459912 containerd[1454]: time="2025-01-29T11:09:02.459823291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:02.460301 containerd[1454]: time="2025-01-29T11:09:02.459923661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:02.460301 containerd[1454]: time="2025-01-29T11:09:02.459957398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:02.460493 containerd[1454]: time="2025-01-29T11:09:02.460438374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:02.481431 systemd[1]: Started cri-containerd-329a10d036f90c50a7c95dbf0ad9c80c3f28e7ddf839bd1cb7e3f10e3830a6b3.scope - libcontainer container 329a10d036f90c50a7c95dbf0ad9c80c3f28e7ddf839bd1cb7e3f10e3830a6b3. Jan 29 11:09:02.491481 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:09:02.508982 containerd[1454]: time="2025-01-29T11:09:02.508933356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-tkrqz,Uid:06eb9f58-9220-4600-90bc-60fb2369956f,Namespace:default,Attempt:0,} returns sandbox id \"329a10d036f90c50a7c95dbf0ad9c80c3f28e7ddf839bd1cb7e3f10e3830a6b3\"" Jan 29 11:09:02.510340 containerd[1454]: time="2025-01-29T11:09:02.510314016Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:09:03.315790 kubelet[1755]: E0129 11:09:03.315330 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:04.167344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1168536585.mount: Deactivated successfully. 
Jan 29 11:09:04.316342 kubelet[1755]: E0129 11:09:04.316308 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:04.894277 containerd[1454]: time="2025-01-29T11:09:04.894209438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:04.895276 containerd[1454]: time="2025-01-29T11:09:04.894822511Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 29 11:09:04.896281 containerd[1454]: time="2025-01-29T11:09:04.895621542Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:04.898280 containerd[1454]: time="2025-01-29T11:09:04.898221455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:04.899549 containerd[1454]: time="2025-01-29T11:09:04.899403656Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.389031785s" Jan 29 11:09:04.899549 containerd[1454]: time="2025-01-29T11:09:04.899446119Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 11:09:04.901618 containerd[1454]: time="2025-01-29T11:09:04.901582342Z" level=info msg="CreateContainer within sandbox \"329a10d036f90c50a7c95dbf0ad9c80c3f28e7ddf839bd1cb7e3f10e3830a6b3\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:09:04.916910 containerd[1454]: time="2025-01-29T11:09:04.916866632Z" level=info msg="CreateContainer within sandbox \"329a10d036f90c50a7c95dbf0ad9c80c3f28e7ddf839bd1cb7e3f10e3830a6b3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0b775ebc5b57660f3788f8fe43aaa78522433d261000e8459034dda91c0c5c6b\"" Jan 29 11:09:04.917651 containerd[1454]: time="2025-01-29T11:09:04.917537671Z" level=info msg="StartContainer for \"0b775ebc5b57660f3788f8fe43aaa78522433d261000e8459034dda91c0c5c6b\"" Jan 29 11:09:04.953457 systemd[1]: Started cri-containerd-0b775ebc5b57660f3788f8fe43aaa78522433d261000e8459034dda91c0c5c6b.scope - libcontainer container 0b775ebc5b57660f3788f8fe43aaa78522433d261000e8459034dda91c0c5c6b. 
Jan 29 11:09:04.976404 containerd[1454]: time="2025-01-29T11:09:04.976324850Z" level=info msg="StartContainer for \"0b775ebc5b57660f3788f8fe43aaa78522433d261000e8459034dda91c0c5c6b\" returns successfully" Jan 29 11:09:05.302653 kubelet[1755]: E0129 11:09:05.302609 1755 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:05.317035 kubelet[1755]: E0129 11:09:05.316996 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:05.489508 kubelet[1755]: I0129 11:09:05.489423 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-tkrqz" podStartSLOduration=7.09907032 podStartE2EDuration="9.489401302s" podCreationTimestamp="2025-01-29 11:08:56 +0000 UTC" firstStartedPulling="2025-01-29 11:09:02.510060346 +0000 UTC m=+17.921122141" lastFinishedPulling="2025-01-29 11:09:04.900391328 +0000 UTC m=+20.311453123" observedRunningTime="2025-01-29 11:09:05.488739658 +0000 UTC m=+20.899801413" watchObservedRunningTime="2025-01-29 11:09:05.489401302 +0000 UTC m=+20.900463097" Jan 29 11:09:06.317350 kubelet[1755]: E0129 11:09:06.317300 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:07.318425 kubelet[1755]: E0129 11:09:07.318370 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:08.319210 kubelet[1755]: E0129 11:09:08.319162 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:08.921905 kubelet[1755]: I0129 11:09:08.921863 1755 topology_manager.go:215] "Topology Admit Handler" podUID="a810f21c-a751-48b6-8510-6c64393acb30" podNamespace="default" podName="nfs-server-provisioner-0" Jan 29 11:09:08.927666 systemd[1]: Created slice kubepods-besteffort-poda810f21c_a751_48b6_8510_6c64393acb30.slice - libcontainer container kubepods-besteffort-poda810f21c_a751_48b6_8510_6c64393acb30.slice. 
Jan 29 11:09:08.964184 kubelet[1755]: I0129 11:09:08.964141 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a810f21c-a751-48b6-8510-6c64393acb30-data\") pod \"nfs-server-provisioner-0\" (UID: \"a810f21c-a751-48b6-8510-6c64393acb30\") " pod="default/nfs-server-provisioner-0" Jan 29 11:09:08.964184 kubelet[1755]: I0129 11:09:08.964184 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvtl\" (UniqueName: \"kubernetes.io/projected/a810f21c-a751-48b6-8510-6c64393acb30-kube-api-access-szvtl\") pod \"nfs-server-provisioner-0\" (UID: \"a810f21c-a751-48b6-8510-6c64393acb30\") " pod="default/nfs-server-provisioner-0" Jan 29 11:09:09.235128 containerd[1454]: time="2025-01-29T11:09:09.234996572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a810f21c-a751-48b6-8510-6c64393acb30,Namespace:default,Attempt:0,}" Jan 29 11:09:09.268834 systemd-networkd[1391]: lxc8391c39df64e: Link UP Jan 29 11:09:09.276389 kernel: eth0: renamed from tmp467e8 Jan 29 11:09:09.282417 systemd-networkd[1391]: lxc8391c39df64e: Gained carrier Jan 29 11:09:09.319783 kubelet[1755]: E0129 11:09:09.319726 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:09.439231 containerd[1454]: time="2025-01-29T11:09:09.439043925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:09.439231 containerd[1454]: time="2025-01-29T11:09:09.439127695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:09.439231 containerd[1454]: time="2025-01-29T11:09:09.439140469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:09.439452 containerd[1454]: time="2025-01-29T11:09:09.439267927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:09.465493 systemd[1]: Started cri-containerd-467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611.scope - libcontainer container 467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611. Jan 29 11:09:09.477464 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:09:09.494596 containerd[1454]: time="2025-01-29T11:09:09.494136187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a810f21c-a751-48b6-8510-6c64393acb30,Namespace:default,Attempt:0,} returns sandbox id \"467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611\"" Jan 29 11:09:09.496036 containerd[1454]: time="2025-01-29T11:09:09.496001120Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:09:10.076344 systemd[1]: run-containerd-runc-k8s.io-467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611-runc.xRyiPf.mount: Deactivated successfully. 
Jan 29 11:09:10.320790 kubelet[1755]: E0129 11:09:10.320725 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:11.020450 systemd-networkd[1391]: lxc8391c39df64e: Gained IPv6LL Jan 29 11:09:11.321206 kubelet[1755]: E0129 11:09:11.321098 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:11.457228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963329863.mount: Deactivated successfully. Jan 29 11:09:12.321776 kubelet[1755]: E0129 11:09:12.321736 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:12.871151 containerd[1454]: time="2025-01-29T11:09:12.871095882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:12.872208 containerd[1454]: time="2025-01-29T11:09:12.872164272Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 29 11:09:12.872908 containerd[1454]: time="2025-01-29T11:09:12.872871902Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:12.875993 containerd[1454]: time="2025-01-29T11:09:12.875935987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:12.877090 containerd[1454]: time="2025-01-29T11:09:12.876940641Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.380694779s" Jan 29 11:09:12.877090 containerd[1454]: time="2025-01-29T11:09:12.876983559Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 29 11:09:12.879609 containerd[1454]: time="2025-01-29T11:09:12.879574424Z" level=info msg="CreateContainer within sandbox \"467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:09:12.889802 containerd[1454]: time="2025-01-29T11:09:12.889752437Z" level=info msg="CreateContainer within sandbox \"467e81eb00ac365597e265cef2115f313a18666f55c674ed0b8aa7dda07d8611\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a23f0aecb0d209fb67f7ef0ccaf8ad31d31790e23f3f27e3b7fd70a336e9e578\"" Jan 29 11:09:12.890509 containerd[1454]: time="2025-01-29T11:09:12.890278985Z" level=info msg="StartContainer for \"a23f0aecb0d209fb67f7ef0ccaf8ad31d31790e23f3f27e3b7fd70a336e9e578\"" Jan 29 11:09:12.967488 systemd[1]: Started cri-containerd-a23f0aecb0d209fb67f7ef0ccaf8ad31d31790e23f3f27e3b7fd70a336e9e578.scope - libcontainer container a23f0aecb0d209fb67f7ef0ccaf8ad31d31790e23f3f27e3b7fd70a336e9e578. 
Jan 29 11:09:13.030887 containerd[1454]: time="2025-01-29T11:09:13.030839250Z" level=info msg="StartContainer for \"a23f0aecb0d209fb67f7ef0ccaf8ad31d31790e23f3f27e3b7fd70a336e9e578\" returns successfully" Jan 29 11:09:13.322290 kubelet[1755]: E0129 11:09:13.322192 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:13.509774 kubelet[1755]: I0129 11:09:13.509521 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.127313704 podStartE2EDuration="5.509504275s" podCreationTimestamp="2025-01-29 11:09:08 +0000 UTC" firstStartedPulling="2025-01-29 11:09:09.495696151 +0000 UTC m=+24.906757946" lastFinishedPulling="2025-01-29 11:09:12.877886722 +0000 UTC m=+28.288948517" observedRunningTime="2025-01-29 11:09:13.509163031 +0000 UTC m=+28.920224826" watchObservedRunningTime="2025-01-29 11:09:13.509504275 +0000 UTC m=+28.920566030" Jan 29 11:09:14.323215 kubelet[1755]: E0129 11:09:14.323162 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:15.324277 kubelet[1755]: E0129 11:09:15.324211 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:16.325374 kubelet[1755]: E0129 11:09:16.325317 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:17.325669 kubelet[1755]: E0129 11:09:17.325619 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:18.326274 kubelet[1755]: E0129 11:09:18.326220 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:19.327348 kubelet[1755]: E0129 11:09:19.327299 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:20.328521 kubelet[1755]: E0129 11:09:20.328464 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:21.134979 update_engine[1432]: I20250129 11:09:21.134884 1432 update_attempter.cc:509] Updating boot flags... Jan 29 11:09:21.161704 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3143) Jan 29 11:09:21.193359 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3141) Jan 29 11:09:21.229283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3141) Jan 29 11:09:21.328749 kubelet[1755]: E0129 11:09:21.328698 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:22.329606 kubelet[1755]: E0129 11:09:22.329566 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:22.680855 kubelet[1755]: I0129 11:09:22.680815 1755 topology_manager.go:215] "Topology Admit Handler" podUID="58d0857f-8a3a-4564-8d9c-4f8e1c55fada" podNamespace="default" podName="test-pod-1" Jan 29 11:09:22.686537 systemd[1]: Created slice kubepods-besteffort-pod58d0857f_8a3a_4564_8d9c_4f8e1c55fada.slice - libcontainer container kubepods-besteffort-pod58d0857f_8a3a_4564_8d9c_4f8e1c55fada.slice. 
Jan 29 11:09:22.745041 kubelet[1755]: I0129 11:09:22.744948 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lw8q\" (UniqueName: \"kubernetes.io/projected/58d0857f-8a3a-4564-8d9c-4f8e1c55fada-kube-api-access-4lw8q\") pod \"test-pod-1\" (UID: \"58d0857f-8a3a-4564-8d9c-4f8e1c55fada\") " pod="default/test-pod-1" Jan 29 11:09:22.745041 kubelet[1755]: I0129 11:09:22.744996 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-92ce1e31-57c4-4415-bf27-d06d7b09da3c\" (UniqueName: \"kubernetes.io/nfs/58d0857f-8a3a-4564-8d9c-4f8e1c55fada-pvc-92ce1e31-57c4-4415-bf27-d06d7b09da3c\") pod \"test-pod-1\" (UID: \"58d0857f-8a3a-4564-8d9c-4f8e1c55fada\") " pod="default/test-pod-1" Jan 29 11:09:22.876297 kernel: FS-Cache: Loaded Jan 29 11:09:22.901281 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:09:22.901397 kernel: RPC: Registered udp transport module. Jan 29 11:09:22.902388 kernel: RPC: Registered tcp transport module. Jan 29 11:09:22.902440 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:09:22.902475 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 11:09:23.061495 kernel: NFS: Registering the id_resolver key type Jan 29 11:09:23.061653 kernel: Key type id_resolver registered Jan 29 11:09:23.061671 kernel: Key type id_legacy registered Jan 29 11:09:23.085496 nfsidmap[3163]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:09:23.091778 nfsidmap[3166]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 29 11:09:23.289092 containerd[1454]: time="2025-01-29T11:09:23.289036292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:58d0857f-8a3a-4564-8d9c-4f8e1c55fada,Namespace:default,Attempt:0,}" Jan 29 11:09:23.319449 systemd-networkd[1391]: lxc705028b2d0bf: Link UP Jan 29 11:09:23.329279 kernel: eth0: renamed from tmp17666 Jan 29 11:09:23.330440 kubelet[1755]: E0129 11:09:23.330401 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:23.336762 systemd-networkd[1391]: lxc705028b2d0bf: Gained carrier Jan 29 11:09:23.524190 containerd[1454]: time="2025-01-29T11:09:23.523949919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:23.524190 containerd[1454]: time="2025-01-29T11:09:23.524003502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:23.524190 containerd[1454]: time="2025-01-29T11:09:23.524014187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:23.524190 containerd[1454]: time="2025-01-29T11:09:23.524081816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:23.551438 systemd[1]: Started cri-containerd-1766621cf826305c00902a3e293054a2a2402f39f03d9b4d5fa6f84558cd8cba.scope - libcontainer container 1766621cf826305c00902a3e293054a2a2402f39f03d9b4d5fa6f84558cd8cba. 
Jan 29 11:09:23.562414 systemd-resolved[1313]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:09:23.584055 containerd[1454]: time="2025-01-29T11:09:23.584014080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:58d0857f-8a3a-4564-8d9c-4f8e1c55fada,Namespace:default,Attempt:0,} returns sandbox id \"1766621cf826305c00902a3e293054a2a2402f39f03d9b4d5fa6f84558cd8cba\"" Jan 29 11:09:23.585698 containerd[1454]: time="2025-01-29T11:09:23.585619982Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:09:23.871766 containerd[1454]: time="2025-01-29T11:09:23.871725688Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:23.872288 containerd[1454]: time="2025-01-29T11:09:23.872215902Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:09:23.875211 containerd[1454]: time="2025-01-29T11:09:23.875114531Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 289.458212ms" Jan 29 11:09:23.875211 containerd[1454]: time="2025-01-29T11:09:23.875165753Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 29 11:09:23.877406 containerd[1454]: time="2025-01-29T11:09:23.877369798Z" level=info msg="CreateContainer within sandbox \"1766621cf826305c00902a3e293054a2a2402f39f03d9b4d5fa6f84558cd8cba\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:09:23.899458 containerd[1454]: time="2025-01-29T11:09:23.899399036Z" level=info msg="CreateContainer within sandbox \"1766621cf826305c00902a3e293054a2a2402f39f03d9b4d5fa6f84558cd8cba\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"541904e44ed66acdaa6a5c673e1699ad310d8e7f36d5fdc8394a05aa00f561e2\"" Jan 29 11:09:23.899940 containerd[1454]: time="2025-01-29T11:09:23.899902137Z" level=info msg="StartContainer for \"541904e44ed66acdaa6a5c673e1699ad310d8e7f36d5fdc8394a05aa00f561e2\"" Jan 29 11:09:23.936426 systemd[1]: Started cri-containerd-541904e44ed66acdaa6a5c673e1699ad310d8e7f36d5fdc8394a05aa00f561e2.scope - libcontainer container 541904e44ed66acdaa6a5c673e1699ad310d8e7f36d5fdc8394a05aa00f561e2. 
Jan 29 11:09:23.956646 containerd[1454]: time="2025-01-29T11:09:23.956593942Z" level=info msg="StartContainer for \"541904e44ed66acdaa6a5c673e1699ad310d8e7f36d5fdc8394a05aa00f561e2\" returns successfully" Jan 29 11:09:24.331144 kubelet[1755]: E0129 11:09:24.331100 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:24.460531 systemd-networkd[1391]: lxc705028b2d0bf: Gained IPv6LL Jan 29 11:09:24.534529 kubelet[1755]: I0129 11:09:24.534476 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.243902639 podStartE2EDuration="15.534458211s" podCreationTimestamp="2025-01-29 11:09:09 +0000 UTC" firstStartedPulling="2025-01-29 11:09:23.585312328 +0000 UTC m=+38.996374123" lastFinishedPulling="2025-01-29 11:09:23.8758679 +0000 UTC m=+39.286929695" observedRunningTime="2025-01-29 11:09:24.533425347 +0000 UTC m=+39.944487142" watchObservedRunningTime="2025-01-29 11:09:24.534458211 +0000 UTC m=+39.945520006" Jan 29 11:09:25.303034 kubelet[1755]: E0129 11:09:25.302985 1755 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:25.332246 kubelet[1755]: E0129 11:09:25.332210 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:26.332366 kubelet[1755]: E0129 11:09:26.332325 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:27.333434 kubelet[1755]: E0129 11:09:27.333389 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:27.608641 systemd[1]: run-containerd-runc-k8s.io-e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f-runc.xyu7OL.mount: Deactivated successfully. Jan 29 11:09:27.634216 containerd[1454]: time="2025-01-29T11:09:27.634169690Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:09:27.639229 containerd[1454]: time="2025-01-29T11:09:27.639200190Z" level=info msg="StopContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" with timeout 2 (s)" Jan 29 11:09:27.641879 containerd[1454]: time="2025-01-29T11:09:27.641843043Z" level=info msg="Stop container \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" with signal terminated" Jan 29 11:09:27.647165 systemd-networkd[1391]: lxc_health: Link DOWN Jan 29 11:09:27.647172 systemd-networkd[1391]: lxc_health: Lost carrier Jan 29 11:09:27.676652 systemd[1]: cri-containerd-e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f.scope: Deactivated successfully. Jan 29 11:09:27.677094 systemd[1]: cri-containerd-e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f.scope: Consumed 6.542s CPU time. Jan 29 11:09:27.700648 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f-rootfs.mount: Deactivated successfully. 
Jan 29 11:09:27.709127 containerd[1454]: time="2025-01-29T11:09:27.709068728Z" level=info msg="shim disconnected" id=e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f namespace=k8s.io Jan 29 11:09:27.709576 containerd[1454]: time="2025-01-29T11:09:27.709325815Z" level=warning msg="cleaning up after shim disconnected" id=e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f namespace=k8s.io Jan 29 11:09:27.709576 containerd[1454]: time="2025-01-29T11:09:27.709430330Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:27.722760 containerd[1454]: time="2025-01-29T11:09:27.722711659Z" level=info msg="StopContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" returns successfully" Jan 29 11:09:27.723418 containerd[1454]: time="2025-01-29T11:09:27.723384647Z" level=info msg="StopPodSandbox for \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\"" Jan 29 11:09:27.723469 containerd[1454]: time="2025-01-29T11:09:27.723422660Z" level=info msg="Container to stop \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:09:27.727407 containerd[1454]: time="2025-01-29T11:09:27.723434264Z" level=info msg="Container to stop \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:09:27.727407 containerd[1454]: time="2025-01-29T11:09:27.727403045Z" level=info msg="Container to stop \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:09:27.727534 containerd[1454]: time="2025-01-29T11:09:27.727420931Z" level=info msg="Container to stop \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:09:27.727534 containerd[1454]: time="2025-01-29T11:09:27.727430134Z" level=info msg="Container to stop \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:09:27.728972 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38-shm.mount: Deactivated successfully. Jan 29 11:09:27.734626 systemd[1]: cri-containerd-aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38.scope: Deactivated successfully. Jan 29 11:09:27.751286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38-rootfs.mount: Deactivated successfully. 
Jan 29 11:09:27.759460 containerd[1454]: time="2025-01-29T11:09:27.759399821Z" level=info msg="shim disconnected" id=aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38 namespace=k8s.io Jan 29 11:09:27.759460 containerd[1454]: time="2025-01-29T11:09:27.759482929Z" level=warning msg="cleaning up after shim disconnected" id=aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38 namespace=k8s.io Jan 29 11:09:27.759460 containerd[1454]: time="2025-01-29T11:09:27.759494213Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:27.770781 containerd[1454]: time="2025-01-29T11:09:27.770618093Z" level=info msg="TearDown network for sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" successfully" Jan 29 11:09:27.770781 containerd[1454]: time="2025-01-29T11:09:27.770656546Z" level=info msg="StopPodSandbox for \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" returns successfully" Jan 29 11:09:27.873954 kubelet[1755]: I0129 11:09:27.873476 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-cgroup\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.873954 kubelet[1755]: I0129 11:09:27.873523 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-bpf-maps\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.873954 kubelet[1755]: I0129 11:09:27.873547 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76364d0f-3115-4fc7-9bd3-a3a1937d0465-clustermesh-secrets\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.873954 kubelet[1755]: I0129 11:09:27.873562 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-net\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.873954 kubelet[1755]: I0129 11:09:27.873544 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874277 kubelet[1755]: I0129 11:09:27.873609 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874277 kubelet[1755]: I0129 11:09:27.873617 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874277 kubelet[1755]: I0129 11:09:27.873630 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874277 kubelet[1755]: I0129 11:09:27.873577 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-lib-modules\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874277 kubelet[1755]: I0129 11:09:27.873709 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-xtables-lock\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873727 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873732 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-config-path\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873767 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t8fn\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-kube-api-access-4t8fn\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873785 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hubble-tls\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873799 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cni-path\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874402 kubelet[1755]: I0129 11:09:27.873812 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hostproc\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873826 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-run\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873845 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-etc-cni-netd\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873859 1755 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-kernel\") pod \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\" (UID: \"76364d0f-3115-4fc7-9bd3-a3a1937d0465\") " Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873891 1755 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-cgroup\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873901 1755 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-bpf-maps\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873911 1755 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-net\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.874526 kubelet[1755]: I0129 11:09:27.873920 1755 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-xtables-lock\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.874671 kubelet[1755]: I0129 11:09:27.873927 1755 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-lib-modules\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.874671 kubelet[1755]: I0129 11:09:27.873949 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874671 kubelet[1755]: I0129 11:09:27.873966 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cni-path" (OuterVolumeSpecName: "cni-path") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874671 kubelet[1755]: I0129 11:09:27.873983 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hostproc" (OuterVolumeSpecName: "hostproc") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874671 kubelet[1755]: I0129 11:09:27.874000 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.874770 kubelet[1755]: I0129 11:09:27.874015 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:09:27.877872 kubelet[1755]: I0129 11:09:27.877826 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:09:27.881719 kubelet[1755]: I0129 11:09:27.881679 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:09:27.881950 kubelet[1755]: I0129 11:09:27.881912 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-kube-api-access-4t8fn" (OuterVolumeSpecName: "kube-api-access-4t8fn") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "kube-api-access-4t8fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:09:27.882020 kubelet[1755]: I0129 11:09:27.882001 1755 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76364d0f-3115-4fc7-9bd3-a3a1937d0465-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "76364d0f-3115-4fc7-9bd3-a3a1937d0465" (UID: "76364d0f-3115-4fc7-9bd3-a3a1937d0465"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974365 1755 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-etc-cni-netd\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974403 1755 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-host-proc-sys-kernel\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974415 1755 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/76364d0f-3115-4fc7-9bd3-a3a1937d0465-clustermesh-secrets\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974423 1755 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-config-path\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974431 1755 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4t8fn\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-kube-api-access-4t8fn\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974444 1755 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hostproc\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974451 1755 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cilium-run\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974438 kubelet[1755]: I0129 11:09:27.974458 1755 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/76364d0f-3115-4fc7-9bd3-a3a1937d0465-hubble-tls\") on node 
\"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:27.974736 kubelet[1755]: I0129 11:09:27.974466 1755 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/76364d0f-3115-4fc7-9bd3-a3a1937d0465-cni-path\") on node \"10.0.0.133\" DevicePath \"\"" Jan 29 11:09:28.334266 kubelet[1755]: E0129 11:09:28.334185 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:28.532832 kubelet[1755]: I0129 11:09:28.532803 1755 scope.go:117] "RemoveContainer" containerID="e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f" Jan 29 11:09:28.535177 containerd[1454]: time="2025-01-29T11:09:28.534904927Z" level=info msg="RemoveContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\"" Jan 29 11:09:28.537022 systemd[1]: Removed slice kubepods-burstable-pod76364d0f_3115_4fc7_9bd3_a3a1937d0465.slice - libcontainer container kubepods-burstable-pod76364d0f_3115_4fc7_9bd3_a3a1937d0465.slice. Jan 29 11:09:28.537376 systemd[1]: kubepods-burstable-pod76364d0f_3115_4fc7_9bd3_a3a1937d0465.slice: Consumed 6.693s CPU time. Jan 29 11:09:28.538667 containerd[1454]: time="2025-01-29T11:09:28.538549042Z" level=info msg="RemoveContainer for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" returns successfully" Jan 29 11:09:28.538877 kubelet[1755]: I0129 11:09:28.538851 1755 scope.go:117] "RemoveContainer" containerID="3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657" Jan 29 11:09:28.540201 containerd[1454]: time="2025-01-29T11:09:28.540177959Z" level=info msg="RemoveContainer for \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\"" Jan 29 11:09:28.546713 containerd[1454]: time="2025-01-29T11:09:28.546639566Z" level=info msg="RemoveContainer for \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\" returns successfully" Jan 29 11:09:28.546863 kubelet[1755]: I0129 11:09:28.546818 1755 scope.go:117] "RemoveContainer" containerID="16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f" Jan 29 11:09:28.547959 containerd[1454]: time="2025-01-29T11:09:28.547933216Z" level=info msg="RemoveContainer for \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\"" Jan 29 11:09:28.550213 containerd[1454]: time="2025-01-29T11:09:28.550179688Z" level=info msg="RemoveContainer for \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\" returns successfully" Jan 29 11:09:28.550450 kubelet[1755]: I0129 11:09:28.550366 1755 scope.go:117] "RemoveContainer" containerID="c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c" Jan 29 11:09:28.551442 containerd[1454]: time="2025-01-29T11:09:28.551413439Z" level=info msg="RemoveContainer for \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\"" Jan 29 11:09:28.553418 containerd[1454]: time="2025-01-29T11:09:28.553374861Z" level=info msg="RemoveContainer for \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\" returns successfully" Jan 29 11:09:28.553601 kubelet[1755]: I0129 11:09:28.553525 1755 scope.go:117] "RemoveContainer" containerID="65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf" Jan 29 11:09:28.554486 containerd[1454]: time="2025-01-29T11:09:28.554458524Z" level=info msg="RemoveContainer for \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\"" Jan 29 11:09:28.556410 containerd[1454]: time="2025-01-29T11:09:28.556379293Z" level=info msg="RemoveContainer for 
\"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\" returns successfully" Jan 29 11:09:28.556582 kubelet[1755]: I0129 11:09:28.556552 1755 scope.go:117] "RemoveContainer" containerID="e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f" Jan 29 11:09:28.556857 containerd[1454]: time="2025-01-29T11:09:28.556814231Z" level=error msg="ContainerStatus for \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\": not found" Jan 29 11:09:28.556982 kubelet[1755]: E0129 11:09:28.556956 1755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\": not found" containerID="e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f" Jan 29 11:09:28.557059 kubelet[1755]: I0129 11:09:28.556986 1755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f"} err="failed to get container status \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e852b271be706ff70932ff538cea2e1af8f946d3e4e65c20386280d4c3becb5f\": not found" Jan 29 11:09:28.557086 kubelet[1755]: I0129 11:09:28.557062 1755 scope.go:117] "RemoveContainer" containerID="3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657" Jan 29 11:09:28.557252 containerd[1454]: time="2025-01-29T11:09:28.557214638Z" level=error msg="ContainerStatus for \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\": not found" Jan 29 11:09:28.557348 kubelet[1755]: E0129 11:09:28.557329 1755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\": not found" containerID="3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657" Jan 29 11:09:28.557379 kubelet[1755]: I0129 11:09:28.557356 1755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657"} err="failed to get container status \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\": rpc error: code = NotFound desc = an error occurred when try to find container \"3fdd48bb263184dcbe73e8a2be3c0027a5ede08d272e2a90af836357c1245657\": not found" Jan 29 11:09:28.557379 kubelet[1755]: I0129 11:09:28.557373 1755 scope.go:117] "RemoveContainer" containerID="16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f" Jan 29 11:09:28.557568 containerd[1454]: time="2025-01-29T11:09:28.557539541Z" level=error msg="ContainerStatus for \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\": not found" Jan 29 11:09:28.557655 kubelet[1755]: E0129 11:09:28.557639 1755 remote_runtime.go:432] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\": not found" containerID="16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f" Jan 29 11:09:28.557680 kubelet[1755]: I0129 11:09:28.557660 1755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f"} err="failed to get container status \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"16a0c75956f4de4efa53381504595be0ca49563d400bf2587cd707e3a1b55e7f\": not found" Jan 29 11:09:28.557680 kubelet[1755]: I0129 11:09:28.557673 1755 scope.go:117] "RemoveContainer" containerID="c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c" Jan 29 11:09:28.557929 containerd[1454]: time="2025-01-29T11:09:28.557897414Z" level=error msg="ContainerStatus for \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\": not found" Jan 29 11:09:28.558078 kubelet[1755]: E0129 11:09:28.558058 1755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\": not found" containerID="c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c" Jan 29 11:09:28.558102 kubelet[1755]: I0129 11:09:28.558085 1755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c"} err="failed to get container status \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c4da2eb7c6ae8cb6edcc31550a758673d692c0d59a424f4531e8cb29edd3098c\": not found" Jan 29 11:09:28.558126 kubelet[1755]: I0129 11:09:28.558103 1755 scope.go:117] "RemoveContainer" containerID="65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf" Jan 29 11:09:28.558336 containerd[1454]: time="2025-01-29T11:09:28.558309665Z" level=error msg="ContainerStatus for \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\": not found" Jan 29 11:09:28.558435 kubelet[1755]: E0129 11:09:28.558415 1755 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\": not found" containerID="65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf" Jan 29 11:09:28.558460 kubelet[1755]: I0129 11:09:28.558442 1755 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf"} err="failed to get container status \"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"65c90376b3feef6fe407c811a2351608051d7640307c11212a28d5123065d2bf\": not found" Jan 29 11:09:28.605816 systemd[1]: var-lib-kubelet-pods-76364d0f\x2d3115\x2d4fc7\x2d9bd3\x2da3a1937d0465-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4t8fn.mount: Deactivated successfully. Jan 29 11:09:28.605913 systemd[1]: var-lib-kubelet-pods-76364d0f\x2d3115\x2d4fc7\x2d9bd3\x2da3a1937d0465-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 29 11:09:28.605975 systemd[1]: var-lib-kubelet-pods-76364d0f\x2d3115\x2d4fc7\x2d9bd3\x2da3a1937d0465-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 29 11:09:29.334361 kubelet[1755]: E0129 11:09:29.334307 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:29.424362 kubelet[1755]: I0129 11:09:29.424325 1755 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" path="/var/lib/kubelet/pods/76364d0f-3115-4fc7-9bd3-a3a1937d0465/volumes" Jan 29 11:09:30.334755 kubelet[1755]: E0129 11:09:30.334713 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:30.437018 kubelet[1755]: E0129 11:09:30.436912 1755 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:09:30.513583 kubelet[1755]: I0129 11:09:30.513526 1755 topology_manager.go:215] "Topology Admit Handler" podUID="34cdd80e-9da7-4ec0-b90b-036f20c51975" podNamespace="kube-system" podName="cilium-operator-599987898-qn6zr" Jan 29 11:09:30.513583 kubelet[1755]: E0129 11:09:30.513578 1755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="apply-sysctl-overwrites" Jan 29 11:09:30.513583 kubelet[1755]: E0129 11:09:30.513588 1755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="mount-bpf-fs" Jan 29 11:09:30.513583 kubelet[1755]: E0129 11:09:30.513594 1755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="clean-cilium-state" Jan 29 11:09:30.513583 kubelet[1755]: E0129 11:09:30.513600 1755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="mount-cgroup" Jan 29 11:09:30.513798 kubelet[1755]: E0129 11:09:30.513607 1755 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="cilium-agent" Jan 29 11:09:30.513798 kubelet[1755]: I0129 11:09:30.513624 1755 memory_manager.go:354] "RemoveStaleState removing state" podUID="76364d0f-3115-4fc7-9bd3-a3a1937d0465" containerName="cilium-agent" Jan 29 11:09:30.514212 kubelet[1755]: I0129 11:09:30.514186 1755 topology_manager.go:215] "Topology Admit Handler" podUID="1bbcf199-41ef-4a2c-a979-6334497e792c" podNamespace="kube-system" podName="cilium-rzx74" Jan 29 11:09:30.519960 systemd[1]: Created slice kubepods-besteffort-pod34cdd80e_9da7_4ec0_b90b_036f20c51975.slice - libcontainer container kubepods-besteffort-pod34cdd80e_9da7_4ec0_b90b_036f20c51975.slice. 
Jan 29 11:09:30.523969 kubelet[1755]: W0129 11:09:30.523931 1755 reflector.go:547] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.133' and this object Jan 29 11:09:30.523969 kubelet[1755]: E0129 11:09:30.523971 1755 reflector.go:150] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.133" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.133' and this object Jan 29 11:09:30.524438 kubelet[1755]: W0129 11:09:30.524410 1755 reflector.go:547] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.133' and this object Jan 29 11:09:30.524438 kubelet[1755]: E0129 11:09:30.524438 1755 reflector.go:150] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:10.0.0.133" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.133' and this object Jan 29 11:09:30.525450 systemd[1]: Created slice kubepods-burstable-pod1bbcf199_41ef_4a2c_a979_6334497e792c.slice - libcontainer container kubepods-burstable-pod1bbcf199_41ef_4a2c_a979_6334497e792c.slice. Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588030 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-bpf-maps\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588078 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-cilium-cgroup\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588094 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-lib-modules\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588114 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/1bbcf199-41ef-4a2c-a979-6334497e792c-cilium-ipsec-secrets\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588130 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-xtables-lock\") pod \"cilium-rzx74\" (UID: 
\"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592005 kubelet[1755]: I0129 11:09:30.588144 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1bbcf199-41ef-4a2c-a979-6334497e792c-cilium-config-path\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592222 kubelet[1755]: I0129 11:09:30.588161 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzf5d\" (UniqueName: \"kubernetes.io/projected/34cdd80e-9da7-4ec0-b90b-036f20c51975-kube-api-access-fzf5d\") pod \"cilium-operator-599987898-qn6zr\" (UID: \"34cdd80e-9da7-4ec0-b90b-036f20c51975\") " pod="kube-system/cilium-operator-599987898-qn6zr" Jan 29 11:09:30.592222 kubelet[1755]: I0129 11:09:30.588179 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-cilium-run\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592222 kubelet[1755]: I0129 11:09:30.588194 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-cni-path\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592222 kubelet[1755]: I0129 11:09:30.588207 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-etc-cni-netd\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592222 kubelet[1755]: I0129 11:09:30.588224 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-host-proc-sys-net\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592359 kubelet[1755]: I0129 11:09:30.588238 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1bbcf199-41ef-4a2c-a979-6334497e792c-hubble-tls\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592359 kubelet[1755]: I0129 11:09:30.588278 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34cdd80e-9da7-4ec0-b90b-036f20c51975-cilium-config-path\") pod \"cilium-operator-599987898-qn6zr\" (UID: \"34cdd80e-9da7-4ec0-b90b-036f20c51975\") " pod="kube-system/cilium-operator-599987898-qn6zr" Jan 29 11:09:30.592359 kubelet[1755]: I0129 11:09:30.588295 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-hostproc\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592359 
kubelet[1755]: I0129 11:09:30.588310 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1bbcf199-41ef-4a2c-a979-6334497e792c-clustermesh-secrets\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592359 kubelet[1755]: I0129 11:09:30.588328 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1bbcf199-41ef-4a2c-a979-6334497e792c-host-proc-sys-kernel\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:30.592463 kubelet[1755]: I0129 11:09:30.588344 1755 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kc4xp\" (UniqueName: \"kubernetes.io/projected/1bbcf199-41ef-4a2c-a979-6334497e792c-kube-api-access-kc4xp\") pod \"cilium-rzx74\" (UID: \"1bbcf199-41ef-4a2c-a979-6334497e792c\") " pod="kube-system/cilium-rzx74" Jan 29 11:09:31.335067 kubelet[1755]: E0129 11:09:31.334991 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:31.722593 kubelet[1755]: E0129 11:09:31.722487 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:31.723045 containerd[1454]: time="2025-01-29T11:09:31.722993818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qn6zr,Uid:34cdd80e-9da7-4ec0-b90b-036f20c51975,Namespace:kube-system,Attempt:0,}" Jan 29 11:09:31.736840 kubelet[1755]: E0129 11:09:31.736811 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:31.737373 containerd[1454]: time="2025-01-29T11:09:31.737333203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzx74,Uid:1bbcf199-41ef-4a2c-a979-6334497e792c,Namespace:kube-system,Attempt:0,}" Jan 29 11:09:31.756022 containerd[1454]: time="2025-01-29T11:09:31.755428448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:31.756022 containerd[1454]: time="2025-01-29T11:09:31.755479542Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:31.756022 containerd[1454]: time="2025-01-29T11:09:31.755490584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:31.756022 containerd[1454]: time="2025-01-29T11:09:31.755571206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:31.760302 containerd[1454]: time="2025-01-29T11:09:31.759514675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:09:31.760302 containerd[1454]: time="2025-01-29T11:09:31.759560927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:09:31.760302 containerd[1454]: time="2025-01-29T11:09:31.759571930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:31.760302 containerd[1454]: time="2025-01-29T11:09:31.759638748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:09:31.776443 systemd[1]: Started cri-containerd-42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030.scope - libcontainer container 42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030. Jan 29 11:09:31.779445 systemd[1]: Started cri-containerd-687454aff9510ff2d88d525591d21d675d00f9ce54dc0fbd839d07b0308a22f2.scope - libcontainer container 687454aff9510ff2d88d525591d21d675d00f9ce54dc0fbd839d07b0308a22f2. Jan 29 11:09:31.795973 containerd[1454]: time="2025-01-29T11:09:31.795933786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzx74,Uid:1bbcf199-41ef-4a2c-a979-6334497e792c,Namespace:kube-system,Attempt:0,} returns sandbox id \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\"" Jan 29 11:09:31.797462 kubelet[1755]: E0129 11:09:31.796788 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:31.799067 containerd[1454]: time="2025-01-29T11:09:31.799027354Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 29 11:09:31.808873 containerd[1454]: time="2025-01-29T11:09:31.808817790Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05\"" Jan 29 11:09:31.809480 containerd[1454]: time="2025-01-29T11:09:31.809448635Z" level=info msg="StartContainer for \"780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05\"" Jan 29 11:09:31.811144 containerd[1454]: time="2025-01-29T11:09:31.811110829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-qn6zr,Uid:34cdd80e-9da7-4ec0-b90b-036f20c51975,Namespace:kube-system,Attempt:0,} returns sandbox id \"687454aff9510ff2d88d525591d21d675d00f9ce54dc0fbd839d07b0308a22f2\"" Jan 29 11:09:31.811900 kubelet[1755]: E0129 11:09:31.811880 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:31.812934 containerd[1454]: time="2025-01-29T11:09:31.812760180Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 29 11:09:31.838426 systemd[1]: Started cri-containerd-780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05.scope - libcontainer container 780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05. 
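The kubelet entries throughout this capture use klog's header format, e.g. "E0129 11:09:31.722487 1755 dns.go:153] ...": a severity letter (I/W/E/F), the month and day, the wall-clock time, the logging PID, and the source file:line, followed by the message. A small standalone Go parser for that prefix is sketched below; the regular expression and field names are mine, not part of klog.

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the klog prefix: Lmmdd hh:mm:ss.uuuuuu pid file:line]
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

type klogEntry struct {
	Severity   string // I=info, W=warning, E=error, F=fatal
	Month, Day string
	Time       string
	PID        string
	Source     string // file.go:line
	Message    string
}

func parseKlog(line string) (klogEntry, bool) {
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		return klogEntry{}, false
	}
	return klogEntry{m[1], m[2], m[3], m[4], m[5], m[6], m[7]}, true
}

func main() {
	line := `E0129 11:09:31.722487 1755 dns.go:153] "Nameserver limits exceeded"`
	if e, ok := parseKlog(line); ok {
		fmt.Printf("%s %s/%s %s pid=%s %s: %s\n",
			e.Severity, e.Month, e.Day, e.Time, e.PID, e.Source, e.Message)
	}
}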
Jan 29 11:09:31.859404 containerd[1454]: time="2025-01-29T11:09:31.859351667Z" level=info msg="StartContainer for \"780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05\" returns successfully" Jan 29 11:09:31.905343 systemd[1]: cri-containerd-780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05.scope: Deactivated successfully. Jan 29 11:09:31.932719 containerd[1454]: time="2025-01-29T11:09:31.932655890Z" level=info msg="shim disconnected" id=780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05 namespace=k8s.io Jan 29 11:09:31.932719 containerd[1454]: time="2025-01-29T11:09:31.932708903Z" level=warning msg="cleaning up after shim disconnected" id=780939cdf5cde3103932a804829f1bd4395623fa4daf26da2bc00d1ab8044a05 namespace=k8s.io Jan 29 11:09:31.932719 containerd[1454]: time="2025-01-29T11:09:31.932720106Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:32.335715 kubelet[1755]: E0129 11:09:32.335658 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:32.543078 kubelet[1755]: E0129 11:09:32.542604 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:32.547607 containerd[1454]: time="2025-01-29T11:09:32.547563927Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 29 11:09:32.559517 containerd[1454]: time="2025-01-29T11:09:32.559464720Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e\"" Jan 29 11:09:32.560169 containerd[1454]: time="2025-01-29T11:09:32.560144326Z" level=info msg="StartContainer for \"e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e\"" Jan 29 11:09:32.587432 systemd[1]: Started cri-containerd-e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e.scope - libcontainer container e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e. Jan 29 11:09:32.613079 containerd[1454]: time="2025-01-29T11:09:32.612557719Z" level=info msg="StartContainer for \"e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e\" returns successfully" Jan 29 11:09:32.619000 systemd[1]: cri-containerd-e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e.scope: Deactivated successfully. 
Jan 29 11:09:32.668795 containerd[1454]: time="2025-01-29T11:09:32.668549747Z" level=info msg="shim disconnected" id=e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e namespace=k8s.io Jan 29 11:09:32.668971 containerd[1454]: time="2025-01-29T11:09:32.668729831Z" level=warning msg="cleaning up after shim disconnected" id=e88b8cb87d8939f43c846d10e0231fd70df9852fdc851859019c0ee01328b80e namespace=k8s.io Jan 29 11:09:32.668971 containerd[1454]: time="2025-01-29T11:09:32.668840538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:32.805193 containerd[1454]: time="2025-01-29T11:09:32.805148189Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:32.806059 containerd[1454]: time="2025-01-29T11:09:32.805851441Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 29 11:09:32.806726 containerd[1454]: time="2025-01-29T11:09:32.806688886Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:09:32.808152 containerd[1454]: time="2025-01-29T11:09:32.808061662Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 995.271594ms" Jan 29 11:09:32.808152 containerd[1454]: time="2025-01-29T11:09:32.808093990Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 29 11:09:32.810761 containerd[1454]: time="2025-01-29T11:09:32.810645495Z" level=info msg="CreateContainer within sandbox \"687454aff9510ff2d88d525591d21d675d00f9ce54dc0fbd839d07b0308a22f2\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 29 11:09:32.823238 containerd[1454]: time="2025-01-29T11:09:32.823202169Z" level=info msg="CreateContainer within sandbox \"687454aff9510ff2d88d525591d21d675d00f9ce54dc0fbd839d07b0308a22f2\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"208e2ade9589032c588535baf035594f148007e55030d928e7ed5bad850ba6c5\"" Jan 29 11:09:32.824020 containerd[1454]: time="2025-01-29T11:09:32.823751584Z" level=info msg="StartContainer for \"208e2ade9589032c588535baf035594f148007e55030d928e7ed5bad850ba6c5\"" Jan 29 11:09:32.852448 systemd[1]: Started cri-containerd-208e2ade9589032c588535baf035594f148007e55030d928e7ed5bad850ba6c5.scope - libcontainer container 208e2ade9589032c588535baf035594f148007e55030d928e7ed5bad850ba6c5. 
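The operator image above is referenced by tag and digest at once (quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb...), and after the pull containerd records an empty repo tag and only the repo digest. The rough Go sketch below splits such a reference into repository, tag and digest; production code would use a proper image-reference parser, this is only a string-level illustration.

package main

import (
	"fmt"
	"strings"
)

// splitImageRef breaks "repo:tag@sha256:..." into its parts. It is a
// simplification: it assumes the digest, if present, follows '@', and
// that a ':' after the last '/' separates the tag.
func splitImageRef(ref string) (repo, tag, digest string) {
	if at := strings.Index(ref, "@"); at >= 0 {
		digest = ref[at+1:]
		ref = ref[:at]
	}
	if colon := strings.LastIndex(ref, ":"); colon > strings.LastIndex(ref, "/") {
		tag = ref[colon+1:]
		ref = ref[:colon]
	}
	return ref, tag, digest
}

func main() {
	ref := "quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
	repo, tag, digest := splitImageRef(ref)
	fmt.Println("repository:", repo)
	fmt.Println("tag:       ", tag)
	fmt.Println("digest:    ", digest)
}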
Jan 29 11:09:32.917336 containerd[1454]: time="2025-01-29T11:09:32.917190940Z" level=info msg="StartContainer for \"208e2ade9589032c588535baf035594f148007e55030d928e7ed5bad850ba6c5\" returns successfully" Jan 29 11:09:33.336833 kubelet[1755]: E0129 11:09:33.336786 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:33.549842 kubelet[1755]: E0129 11:09:33.549802 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:33.551115 kubelet[1755]: E0129 11:09:33.551080 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:33.559513 containerd[1454]: time="2025-01-29T11:09:33.559461174Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 29 11:09:33.560588 kubelet[1755]: I0129 11:09:33.560451 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-qn6zr" podStartSLOduration=2.563957104 podStartE2EDuration="3.560437838s" podCreationTimestamp="2025-01-29 11:09:30 +0000 UTC" firstStartedPulling="2025-01-29 11:09:31.812572691 +0000 UTC m=+47.223634486" lastFinishedPulling="2025-01-29 11:09:32.809053425 +0000 UTC m=+48.220115220" observedRunningTime="2025-01-29 11:09:33.560343856 +0000 UTC m=+48.971405652" watchObservedRunningTime="2025-01-29 11:09:33.560437838 +0000 UTC m=+48.971499633" Jan 29 11:09:33.572978 containerd[1454]: time="2025-01-29T11:09:33.572908380Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab\"" Jan 29 11:09:33.574438 containerd[1454]: time="2025-01-29T11:09:33.574397402Z" level=info msg="StartContainer for \"b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab\"" Jan 29 11:09:33.599413 systemd[1]: Started cri-containerd-b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab.scope - libcontainer container b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab. Jan 29 11:09:33.623264 containerd[1454]: time="2025-01-29T11:09:33.622896774Z" level=info msg="StartContainer for \"b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab\" returns successfully" Jan 29 11:09:33.625571 systemd[1]: cri-containerd-b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab.scope: Deactivated successfully. 
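The pod_startup_latency_tracker entry for cilium-operator-599987898-qn6zr reports podStartE2EDuration="3.560437838s" and podStartSLOduration=2.563957104. Those numbers are consistent with the end-to-end figure being watchObservedRunningTime minus podCreationTimestamp, and the SLO figure being the same interval minus the image-pull time (lastFinishedPulling minus firstStartedPulling). The Go sketch below redoes that arithmetic from the timestamps quoted in the log; the interpretation of the SLO metric is inferred from the numbers, not taken from kubelet source.

package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout for Go's time.Time String() output, which is what the kubelet
// prints in the entry above. The trailing "m=+..." monotonic reading is
// dropped before parsing.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(ts string) time.Time {
	if s, _, ok := strings.Cut(ts, " m="); ok {
		ts = s
	}
	t, err := time.Parse(layout, ts)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-01-29 11:09:30 +0000 UTC")                                // podCreationTimestamp
	firstPull := parse("2025-01-29 11:09:31.812572691 +0000 UTC m=+47.223634486")    // firstStartedPulling
	lastPull := parse("2025-01-29 11:09:32.809053425 +0000 UTC m=+48.220115220")     // lastFinishedPulling
	running := parse("2025-01-29 11:09:33.560437838 +0000 UTC m=+48.971499633")      // watchObservedRunningTime

	e2e := running.Sub(created)
	pull := lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e)      // ~3.560437838s
	fmt.Println("image pull time:    ", pull)     // ~996.480734ms
	fmt.Println("podStartSLOduration:", e2e-pull) // ~2.563957104s
}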
Jan 29 11:09:33.646541 containerd[1454]: time="2025-01-29T11:09:33.646450500Z" level=info msg="shim disconnected" id=b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab namespace=k8s.io Jan 29 11:09:33.646541 containerd[1454]: time="2025-01-29T11:09:33.646499231Z" level=warning msg="cleaning up after shim disconnected" id=b1a07702d9158bb7fd78e8439fe3053ae4da70fa2f362fd6ef1254ce9601b7ab namespace=k8s.io Jan 29 11:09:33.646541 containerd[1454]: time="2025-01-29T11:09:33.646507713Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:34.337163 kubelet[1755]: E0129 11:09:34.337124 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:34.555066 kubelet[1755]: E0129 11:09:34.555012 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:34.555348 kubelet[1755]: E0129 11:09:34.555324 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:34.557144 containerd[1454]: time="2025-01-29T11:09:34.557063429Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 29 11:09:34.568982 containerd[1454]: time="2025-01-29T11:09:34.568872771Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433\"" Jan 29 11:09:34.568960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2984441569.mount: Deactivated successfully. Jan 29 11:09:34.570150 containerd[1454]: time="2025-01-29T11:09:34.569510788Z" level=info msg="StartContainer for \"a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433\"" Jan 29 11:09:34.607436 systemd[1]: Started cri-containerd-a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433.scope - libcontainer container a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433. Jan 29 11:09:34.627577 systemd[1]: cri-containerd-a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433.scope: Deactivated successfully. Jan 29 11:09:34.628907 containerd[1454]: time="2025-01-29T11:09:34.628875442Z" level=info msg="StartContainer for \"a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433\" returns successfully" Jan 29 11:09:34.655986 containerd[1454]: time="2025-01-29T11:09:34.655877052Z" level=info msg="shim disconnected" id=a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433 namespace=k8s.io Jan 29 11:09:34.655986 containerd[1454]: time="2025-01-29T11:09:34.655958870Z" level=warning msg="cleaning up after shim disconnected" id=a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433 namespace=k8s.io Jan 29 11:09:34.656297 containerd[1454]: time="2025-01-29T11:09:34.655968592Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:09:34.730820 systemd[1]: run-containerd-runc-k8s.io-a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433-runc.VnI3dN.mount: Deactivated successfully. 
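Unit names such as var-lib-containerd-tmpmounts-containerd\x2dmount2984441569.mount here, and the var-lib-kubelet-pods-...-kube\x2dapi\x2daccess\x2d4t8fn.mount units earlier, are systemd path escapes of ordinary filesystem paths: slashes become dashes and bytes outside [A-Za-z0-9:_.] become \xNN, which is why every literal '-' shows up as \x2d and '~' as \x7e. The Go sketch below is a simplified re-implementation of that escaping (in the spirit of systemd-escape --path), written only to reproduce the names seen above, not systemd's own code.

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd's path escaping: strip surrounding slashes,
// turn the remaining "/" separators into "-", and hex-escape every byte
// that is not [A-Za-z0-9:_.] as \xNN.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	if p == "" {
		return "-"
	}
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	path := "/var/lib/kubelet/pods/76364d0f-3115-4fc7-9bd3-a3a1937d0465/volumes/kubernetes.io~projected/kube-api-access-4t8fn"
	// Prints the body of the .mount unit name deactivated in the log above.
	fmt.Println(escapePath(path) + ".mount")
}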
Jan 29 11:09:34.730936 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5e3e81d5cae9d7bda97a7160899f2d91969cb223a8ee74a9bab271b75c38433-rootfs.mount: Deactivated successfully. Jan 29 11:09:35.337574 kubelet[1755]: E0129 11:09:35.337526 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:35.438423 kubelet[1755]: E0129 11:09:35.438373 1755 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 29 11:09:35.559438 kubelet[1755]: E0129 11:09:35.559408 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:35.561753 containerd[1454]: time="2025-01-29T11:09:35.561713001Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 29 11:09:35.576155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764896918.mount: Deactivated successfully. Jan 29 11:09:35.577298 containerd[1454]: time="2025-01-29T11:09:35.577229852Z" level=info msg="CreateContainer within sandbox \"42b1f8ca988957af669e5b297a606d568ca0ff13e5a81d7c03d26a57eedb4030\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4\"" Jan 29 11:09:35.577794 containerd[1454]: time="2025-01-29T11:09:35.577752077Z" level=info msg="StartContainer for \"647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4\"" Jan 29 11:09:35.603432 systemd[1]: Started cri-containerd-647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4.scope - libcontainer container 647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4. 
Jan 29 11:09:35.625419 containerd[1454]: time="2025-01-29T11:09:35.625293628Z" level=info msg="StartContainer for \"647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4\" returns successfully" Jan 29 11:09:35.887359 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 29 11:09:36.338099 kubelet[1755]: E0129 11:09:36.338047 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:36.508194 kubelet[1755]: I0129 11:09:36.508148 1755 setters.go:580] "Node became not ready" node="10.0.0.133" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-29T11:09:36Z","lastTransitionTime":"2025-01-29T11:09:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 29 11:09:36.564496 kubelet[1755]: E0129 11:09:36.564469 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:36.577927 kubelet[1755]: I0129 11:09:36.577704 1755 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rzx74" podStartSLOduration=6.577690135 podStartE2EDuration="6.577690135s" podCreationTimestamp="2025-01-29 11:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:09:36.577181359 +0000 UTC m=+51.988243154" watchObservedRunningTime="2025-01-29 11:09:36.577690135 +0000 UTC m=+51.988751930" Jan 29 11:09:37.338680 kubelet[1755]: E0129 11:09:37.338636 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:37.738192 kubelet[1755]: E0129 11:09:37.738157 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:38.338781 kubelet[1755]: E0129 11:09:38.338729 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:38.794969 systemd-networkd[1391]: lxc_health: Link UP Jan 29 11:09:38.805453 systemd-networkd[1391]: lxc_health: Gained carrier Jan 29 11:09:39.185802 systemd[1]: run-containerd-runc-k8s.io-647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4-runc.IjsAmI.mount: Deactivated successfully. 
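The recurring dns.go:153 errors ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") come from the kubelet noticing more than three nameserver entries in the node's resolv.conf and applying only the first three. The standalone Go sketch below imitates that observable behaviour; it is not the kubelet's own implementation.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit of three nameservers
// that the kubelet warning above is about.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded: %d configured, applying %v\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Println("nameservers:", servers)
	}
}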
Jan 29 11:09:39.339096 kubelet[1755]: E0129 11:09:39.339047 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:39.740284 kubelet[1755]: E0129 11:09:39.738952 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:40.339261 kubelet[1755]: E0129 11:09:40.339191 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:40.462479 systemd-networkd[1391]: lxc_health: Gained IPv6LL Jan 29 11:09:40.571048 kubelet[1755]: E0129 11:09:40.571004 1755 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 11:09:41.339876 kubelet[1755]: E0129 11:09:41.339825 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:42.340296 kubelet[1755]: E0129 11:09:42.340235 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:43.341345 kubelet[1755]: E0129 11:09:43.341306 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:43.431668 systemd[1]: run-containerd-runc-k8s.io-647a024fcc4468ae2481c5ce6a50ea6ce3265a5257ec618561ae34fec45a7be4-runc.Hj2Otl.mount: Deactivated successfully. Jan 29 11:09:44.341546 kubelet[1755]: E0129 11:09:44.341500 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:45.302992 kubelet[1755]: E0129 11:09:45.302948 1755 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:45.317547 containerd[1454]: time="2025-01-29T11:09:45.317389041Z" level=info msg="StopPodSandbox for \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\"" Jan 29 11:09:45.317547 containerd[1454]: time="2025-01-29T11:09:45.317476448Z" level=info msg="TearDown network for sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" successfully" Jan 29 11:09:45.317547 containerd[1454]: time="2025-01-29T11:09:45.317487169Z" level=info msg="StopPodSandbox for \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" returns successfully" Jan 29 11:09:45.317932 containerd[1454]: time="2025-01-29T11:09:45.317840718Z" level=info msg="RemovePodSandbox for \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\"" Jan 29 11:09:45.317932 containerd[1454]: time="2025-01-29T11:09:45.317866360Z" level=info msg="Forcibly stopping sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\"" Jan 29 11:09:45.317932 containerd[1454]: time="2025-01-29T11:09:45.317921445Z" level=info msg="TearDown network for sandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" successfully" Jan 29 11:09:45.324608 containerd[1454]: time="2025-01-29T11:09:45.324572111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:09:45.324860 containerd[1454]: time="2025-01-29T11:09:45.324748605Z" level=info msg="RemovePodSandbox \"aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38\" returns successfully" Jan 29 11:09:45.342214 kubelet[1755]: E0129 11:09:45.342174 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:09:46.342339 kubelet[1755]: E0129 11:09:46.342295 1755 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
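The capture ends with the kubelet's sandbox garbage collection forcibly stopping and removing the stale sandbox aec1ca515acb..., tolerating the "not found" status lookup along the way. For reference, the Go sketch below issues the same two CRI calls (StopPodSandbox, RemovePodSandbox) through the published cri-api client; the containerd socket path is the usual containerd default and the sandbox ID is copied from the log, so treat this as an illustration rather than how the kubelet itself is wired.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stopAndRemoveSandbox issues the two CRI calls seen in the log: StopPodSandbox
// (which tears down the sandbox network) followed by RemovePodSandbox.
func stopAndRemoveSandbox(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return fmt.Errorf("stop sandbox %s: %w", id, err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		return fmt.Errorf("remove sandbox %s: %w", id, err)
	}
	return nil
}

func main() {
	// containerd's CRI endpoint on this node; adjust for other runtimes.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	id := "aec1ca515acb40edf9ebbe4cac6fbf899560ff11c43a508fdc5600c951a51b38"
	if err := stopAndRemoveSandbox(ctx, runtimeapi.NewRuntimeServiceClient(conn), id); err != nil {
		fmt.Println(err)
	}
}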