Jan 30 13:03:38.007635 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 30 13:03:38.007658 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Wed Jan 29 09:30:22 -00 2025
Jan 30 13:03:38.007668 kernel: KASLR enabled
Jan 30 13:03:38.007674 kernel: efi: EFI v2.7 by EDK II
Jan 30 13:03:38.007680 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jan 30 13:03:38.007686 kernel: random: crng init done
Jan 30 13:03:38.007693 kernel: secureboot: Secure boot disabled
Jan 30 13:03:38.007698 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:03:38.007704 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 30 13:03:38.007711 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 13:03:38.007717 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007723 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007729 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007735 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007742 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007750 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007756 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007762 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007769 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:03:38.007775 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 30 13:03:38.007781 kernel: NUMA: Failed to initialise from firmware
Jan 30 13:03:38.007787 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:03:38.007793 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jan 30 13:03:38.007799 kernel: Zone ranges:
Jan 30 13:03:38.007806 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:03:38.007813 kernel: DMA32 empty
Jan 30 13:03:38.007820 kernel: Normal empty
Jan 30 13:03:38.007826 kernel: Movable zone start for each node
Jan 30 13:03:38.007832 kernel: Early memory node ranges
Jan 30 13:03:38.007838 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jan 30 13:03:38.007845 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jan 30 13:03:38.007851 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jan 30 13:03:38.007857 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 30 13:03:38.007863 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 30 13:03:38.007869 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 30 13:03:38.007886 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 30 13:03:38.007892 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 30 13:03:38.007900 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 30 13:03:38.007907 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 30 13:03:38.007913 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 30 13:03:38.007922 kernel: psci: probing for conduit method from ACPI.
Jan 30 13:03:38.007929 kernel: psci: PSCIv1.1 detected in firmware.
Jan 30 13:03:38.007936 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 13:03:38.007943 kernel: psci: Trusted OS migration not required
Jan 30 13:03:38.007950 kernel: psci: SMC Calling Convention v1.1
Jan 30 13:03:38.007957 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 30 13:03:38.007963 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 13:03:38.007970 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 13:03:38.007977 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 30 13:03:38.007983 kernel: Detected PIPT I-cache on CPU0
Jan 30 13:03:38.007990 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 13:03:38.007998 kernel: CPU features: detected: Hardware dirty bit management
Jan 30 13:03:38.008004 kernel: CPU features: detected: Spectre-v4
Jan 30 13:03:38.008012 kernel: CPU features: detected: Spectre-BHB
Jan 30 13:03:38.008019 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 30 13:03:38.008025 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 30 13:03:38.008032 kernel: CPU features: detected: ARM erratum 1418040
Jan 30 13:03:38.008038 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 30 13:03:38.008045 kernel: alternatives: applying boot alternatives
Jan 30 13:03:38.008064 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:03:38.008072 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:03:38.008078 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:03:38.008085 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:03:38.008091 kernel: Fallback order for Node 0: 0
Jan 30 13:03:38.008100 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jan 30 13:03:38.008106 kernel: Policy zone: DMA
Jan 30 13:03:38.008113 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:03:38.008119 kernel: software IO TLB: area num 4.
Jan 30 13:03:38.008125 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 30 13:03:38.008132 kernel: Memory: 2385936K/2572288K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 186352K reserved, 0K cma-reserved)
Jan 30 13:03:38.008139 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:03:38.008145 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:03:38.008152 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:03:38.008159 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:03:38.008166 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:03:38.008173 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:03:38.008189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:03:38.008197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:03:38.008203 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 13:03:38.008210 kernel: GICv3: 256 SPIs implemented
Jan 30 13:03:38.008217 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 13:03:38.008223 kernel: Root IRQ handler: gic_handle_irq
Jan 30 13:03:38.008230 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 30 13:03:38.008243 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 30 13:03:38.008250 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 30 13:03:38.008256 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 13:03:38.008266 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 13:03:38.008292 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 30 13:03:38.008299 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 30 13:03:38.008307 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:03:38.008313 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:03:38.008320 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 30 13:03:38.008327 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 30 13:03:38.008334 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 30 13:03:38.008340 kernel: arm-pv: using stolen time PV
Jan 30 13:03:38.008347 kernel: Console: colour dummy device 80x25
Jan 30 13:03:38.008354 kernel: ACPI: Core revision 20230628
Jan 30 13:03:38.008361 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 30 13:03:38.008369 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:03:38.008376 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:03:38.008382 kernel: landlock: Up and running.
Jan 30 13:03:38.008389 kernel: SELinux: Initializing.
Jan 30 13:03:38.008396 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:03:38.008403 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:03:38.008409 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:03:38.008416 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:03:38.008423 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:03:38.008431 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:03:38.008438 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 30 13:03:38.008445 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 30 13:03:38.008451 kernel: Remapping and enabling EFI services.
Jan 30 13:03:38.008458 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:03:38.008465 kernel: Detected PIPT I-cache on CPU1
Jan 30 13:03:38.008472 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 30 13:03:38.008479 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 30 13:03:38.008486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:03:38.008494 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 30 13:03:38.008501 kernel: Detected PIPT I-cache on CPU2
Jan 30 13:03:38.008514 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 30 13:03:38.008522 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 30 13:03:38.008529 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:03:38.008536 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 30 13:03:38.008543 kernel: Detected PIPT I-cache on CPU3
Jan 30 13:03:38.008550 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 30 13:03:38.008557 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 30 13:03:38.008565 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 30 13:03:38.008572 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 30 13:03:38.008579 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:03:38.008586 kernel: SMP: Total of 4 processors activated.
Jan 30 13:03:38.008593 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 13:03:38.008605 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 30 13:03:38.008612 kernel: CPU features: detected: Common not Private translations
Jan 30 13:03:38.008619 kernel: CPU features: detected: CRC32 instructions
Jan 30 13:03:38.008627 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 30 13:03:38.008634 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 30 13:03:38.008641 kernel: CPU features: detected: LSE atomic instructions
Jan 30 13:03:38.008648 kernel: CPU features: detected: Privileged Access Never
Jan 30 13:03:38.008655 kernel: CPU features: detected: RAS Extension Support
Jan 30 13:03:38.008663 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 30 13:03:38.008669 kernel: CPU: All CPU(s) started at EL1
Jan 30 13:03:38.008676 kernel: alternatives: applying system-wide alternatives
Jan 30 13:03:38.008683 kernel: devtmpfs: initialized
Jan 30 13:03:38.008690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:03:38.008699 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:03:38.008706 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:03:38.008713 kernel: SMBIOS 3.0.0 present.
Jan 30 13:03:38.008720 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 30 13:03:38.008728 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:03:38.008735 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 13:03:38.008742 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 13:03:38.008749 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 13:03:38.008757 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:03:38.008764 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 30 13:03:38.008771 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:03:38.008778 kernel: cpuidle: using governor menu
Jan 30 13:03:38.008785 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 13:03:38.008792 kernel: ASID allocator initialised with 32768 entries
Jan 30 13:03:38.008799 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:03:38.008806 kernel: Serial: AMBA PL011 UART driver
Jan 30 13:03:38.008813 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 30 13:03:38.008821 kernel: Modules: 0 pages in range for non-PLT usage
Jan 30 13:03:38.008828 kernel: Modules: 508880 pages in range for PLT usage
Jan 30 13:03:38.008835 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:03:38.008842 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:03:38.008849 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 13:03:38.008857 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 13:03:38.008863 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:03:38.008870 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:03:38.008877 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 13:03:38.008886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 13:03:38.008892 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:03:38.008900 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:03:38.008907 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:03:38.008914 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:03:38.008921 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:03:38.008928 kernel: ACPI: Interpreter enabled
Jan 30 13:03:38.008935 kernel: ACPI: Using GIC for interrupt routing
Jan 30 13:03:38.008942 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 13:03:38.008949 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 30 13:03:38.008957 kernel: printk: console [ttyAMA0] enabled
Jan 30 13:03:38.008964 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:03:38.009240 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:03:38.009326 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 13:03:38.009404 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 13:03:38.009475 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 30 13:03:38.009544 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 30 13:03:38.009560 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 30 13:03:38.009568 kernel: PCI host bridge to bus 0000:00
Jan 30 13:03:38.009648 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 30 13:03:38.009722 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 13:03:38.009815 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 30 13:03:38.009875 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:03:38.009959 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 30 13:03:38.010036 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:03:38.010119 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jan 30 13:03:38.010194 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 30 13:03:38.010262 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:03:38.010329 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 30 13:03:38.010393 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 30 13:03:38.010459 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jan 30 13:03:38.010525 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 30 13:03:38.010583 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 13:03:38.010643 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 30 13:03:38.010652 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 13:03:38.010659 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 13:03:38.010666 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 13:03:38.010673 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 13:03:38.010682 kernel: iommu: Default domain type: Translated
Jan 30 13:03:38.010689 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 13:03:38.010696 kernel: efivars: Registered efivars operations
Jan 30 13:03:38.010703 kernel: vgaarb: loaded
Jan 30 13:03:38.010709 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 13:03:38.010717 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:03:38.010724 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:03:38.010730 kernel: pnp: PnP ACPI init
Jan 30 13:03:38.010802 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 30 13:03:38.010815 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 13:03:38.010823 kernel: NET: Registered PF_INET protocol family
Jan 30 13:03:38.010830 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:03:38.010838 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:03:38.010845 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:03:38.010852 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:03:38.010859 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:03:38.010867 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:03:38.010876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:03:38.010884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:03:38.010891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:03:38.010898 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:03:38.010905 kernel: kvm [1]: HYP mode not available
Jan 30 13:03:38.010912 kernel: Initialise system trusted keyrings
Jan 30 13:03:38.010920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:03:38.010927 kernel: Key type asymmetric registered
Jan 30 13:03:38.010934 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:03:38.010941 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 13:03:38.010950 kernel: io scheduler mq-deadline registered
Jan 30 13:03:38.010957 kernel: io scheduler kyber registered
Jan 30 13:03:38.010964 kernel: io scheduler bfq registered
Jan 30 13:03:38.010972 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 13:03:38.010979 kernel: ACPI: button: Power Button [PWRB]
Jan 30 13:03:38.010986 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 13:03:38.011063 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 30 13:03:38.011073 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:03:38.011080 kernel: thunder_xcv, ver 1.0
Jan 30 13:03:38.011090 kernel: thunder_bgx, ver 1.0
Jan 30 13:03:38.011097 kernel: nicpf, ver 1.0
Jan 30 13:03:38.011104 kernel: nicvf, ver 1.0
Jan 30 13:03:38.011179 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 13:03:38.011257 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T13:03:37 UTC (1738242217)
Jan 30 13:03:38.011267 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:03:38.011275 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 30 13:03:38.011282 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 13:03:38.011293 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 13:03:38.011300 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:03:38.011307 kernel: Segment Routing with IPv6
Jan 30 13:03:38.011314 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:03:38.011321 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:03:38.011329 kernel: Key type dns_resolver registered
Jan 30 13:03:38.011336 kernel: registered taskstats version 1
Jan 30 13:03:38.011343 kernel: Loading compiled-in X.509 certificates
Jan 30 13:03:38.011350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: c31663d2c680b3b306c17f44b5295280d3a2e28a'
Jan 30 13:03:38.011359 kernel: Key type .fscrypt registered
Jan 30 13:03:38.011366 kernel: Key type fscrypt-provisioning registered
Jan 30 13:03:38.011373 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:03:38.011380 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:03:38.011387 kernel: ima: No architecture policies found
Jan 30 13:03:38.011394 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 13:03:38.011401 kernel: clk: Disabling unused clocks
Jan 30 13:03:38.011408 kernel: Freeing unused kernel memory: 39936K
Jan 30 13:03:38.011415 kernel: Run /init as init process
Jan 30 13:03:38.011425 kernel: with arguments:
Jan 30 13:03:38.011432 kernel: /init
Jan 30 13:03:38.011439 kernel: with environment:
Jan 30 13:03:38.011446 kernel: HOME=/
Jan 30 13:03:38.011461 kernel: TERM=linux
Jan 30 13:03:38.011470 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:03:38.011480 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:03:38.011489 systemd[1]: Detected virtualization kvm.
Jan 30 13:03:38.011498 systemd[1]: Detected architecture arm64.
Jan 30 13:03:38.011505 systemd[1]: Running in initrd.
Jan 30 13:03:38.011512 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:03:38.011520 systemd[1]: Hostname set to .
Jan 30 13:03:38.011527 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:03:38.011535 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:03:38.011542 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:03:38.011550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:03:38.011559 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:03:38.011567 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:03:38.011575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:03:38.011582 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:03:38.011591 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:03:38.011599 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:03:38.011608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:03:38.011616 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:03:38.011623 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:03:38.011631 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:03:38.011639 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:03:38.011647 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:03:38.011654 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:03:38.011662 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:03:38.011670 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:03:38.011680 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:03:38.011688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:03:38.011696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:03:38.011703 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:03:38.011711 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:03:38.011719 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:03:38.011727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:03:38.011734 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:03:38.011744 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:03:38.011751 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:03:38.011759 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:03:38.011767 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:03:38.011775 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:03:38.011783 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:03:38.011790 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:03:38.011800 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:03:38.011808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:03:38.011817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:03:38.011825 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:03:38.011833 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:03:38.011859 systemd-journald[238]: Collecting audit messages is disabled.
Jan 30 13:03:38.011881 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:03:38.011889 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:03:38.011898 systemd-journald[238]: Journal started
Jan 30 13:03:38.011923 systemd-journald[238]: Runtime Journal (/run/log/journal/b2c00ca831c84fae9b48a18c8b8273ca) is 5.9M, max 47.3M, 41.4M free.
Jan 30 13:03:37.973381 systemd-modules-load[239]: Inserted module 'overlay'
Jan 30 13:03:38.017608 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:03:38.017657 kernel: Bridge firewalling registered
Jan 30 13:03:38.017354 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 30 13:03:38.018503 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:03:38.020845 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:03:38.034283 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:03:38.035920 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:03:38.038759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:03:38.046509 dracut-cmdline[266]: dracut-dracut-053
Jan 30 13:03:38.049119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:03:38.050410 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:03:38.053047 dracut-cmdline[266]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e6957044c3256d96283265c263579aa4275d1d707b02496fcb081f5fc6356346
Jan 30 13:03:38.063500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:03:38.089524 systemd-resolved[295]: Positive Trust Anchors:
Jan 30 13:03:38.089541 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:03:38.089571 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:03:38.094571 systemd-resolved[295]: Defaulting to hostname 'linux'.
Jan 30 13:03:38.095732 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:03:38.098588 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:03:38.123084 kernel: SCSI subsystem initialized
Jan 30 13:03:38.128068 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:03:38.136077 kernel: iscsi: registered transport (tcp)
Jan 30 13:03:38.150159 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:03:38.150226 kernel: QLogic iSCSI HBA Driver
Jan 30 13:03:38.194278 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:03:38.202238 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:03:38.220082 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:03:38.220149 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:03:38.222071 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:03:38.269106 kernel: raid6: neonx8 gen() 15713 MB/s
Jan 30 13:03:38.286076 kernel: raid6: neonx4 gen() 15443 MB/s
Jan 30 13:03:38.303071 kernel: raid6: neonx2 gen() 13179 MB/s
Jan 30 13:03:38.320067 kernel: raid6: neonx1 gen() 10500 MB/s
Jan 30 13:03:38.337071 kernel: raid6: int64x8 gen() 6760 MB/s
Jan 30 13:03:38.354069 kernel: raid6: int64x4 gen() 7347 MB/s
Jan 30 13:03:38.371069 kernel: raid6: int64x2 gen() 6068 MB/s
Jan 30 13:03:38.388070 kernel: raid6: int64x1 gen() 5058 MB/s
Jan 30 13:03:38.388094 kernel: raid6: using algorithm neonx8 gen() 15713 MB/s
Jan 30 13:03:38.405075 kernel: raid6: .... xor() 11975 MB/s, rmw enabled
Jan 30 13:03:38.405088 kernel: raid6: using neon recovery algorithm
Jan 30 13:03:38.410290 kernel: xor: measuring software checksum speed
Jan 30 13:03:38.410311 kernel: 8regs : 21607 MB/sec
Jan 30 13:03:38.411362 kernel: 32regs : 21676 MB/sec
Jan 30 13:03:38.411375 kernel: arm64_neon : 27917 MB/sec
Jan 30 13:03:38.411389 kernel: xor: using function: arm64_neon (27917 MB/sec)
Jan 30 13:03:38.467084 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:03:38.478144 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:03:38.492268 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:03:38.503807 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jan 30 13:03:38.506973 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:03:38.513289 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:03:38.528065 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Jan 30 13:03:38.564919 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:03:38.575235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:03:38.622281 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:03:38.631513 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:03:38.646096 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:03:38.647853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:03:38.649230 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:03:38.650258 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:03:38.659357 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:03:38.670764 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 30 13:03:38.683625 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 30 13:03:38.683762 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 13:03:38.683774 kernel: GPT:9289727 != 19775487
Jan 30 13:03:38.683784 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 13:03:38.683794 kernel: GPT:9289727 != 19775487
Jan 30 13:03:38.683803 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 13:03:38.683813 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:03:38.671155 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:03:38.678778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:03:38.678892 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:03:38.683348 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:03:38.684393 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:03:38.684552 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:03:38.686859 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:03:38.695577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:03:38.707512 kernel: BTRFS: device fsid 1e2e5fa7-c757-4d5d-af66-73afe98fbaae devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (512)
Jan 30 13:03:38.710084 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:03:38.715070 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (513)
Jan 30 13:03:38.715224 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 30 13:03:38.720504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 30 13:03:38.727190 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 30 13:03:38.728208 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 30 13:03:38.733354 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:03:38.743268 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:03:38.744893 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:03:38.750542 disk-uuid[557]: Primary Header is updated.
Jan 30 13:03:38.750542 disk-uuid[557]: Secondary Entries is updated.
Jan 30 13:03:38.750542 disk-uuid[557]: Secondary Header is updated.
Jan 30 13:03:38.753115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:03:38.773348 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:03:39.764070 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 30 13:03:39.766524 disk-uuid[558]: The operation has completed successfully.
Jan 30 13:03:39.808173 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:03:39.808275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:03:39.823387 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:03:39.829784 sh[579]: Success
Jan 30 13:03:39.872385 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 13:03:39.931759 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:03:39.934646 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:03:39.939311 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:03:39.954768 kernel: BTRFS info (device dm-0): first mount of filesystem 1e2e5fa7-c757-4d5d-af66-73afe98fbaae
Jan 30 13:03:39.954818 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:03:39.954829 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:03:39.955630 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:03:39.956264 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:03:39.963669 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:03:39.964812 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:03:39.976262 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:03:39.977868 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:03:39.992567 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:03:39.992697 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:03:39.992720 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:03:39.996086 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:03:40.016122 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:03:40.021412 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:03:40.032458 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:03:40.041293 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:03:40.196883 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:03:40.215323 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:03:40.248737 systemd-networkd[771]: lo: Link UP
Jan 30 13:03:40.249574 systemd-networkd[771]: lo: Gained carrier
Jan 30 13:03:40.251485 systemd-networkd[771]: Enumeration completed
Jan 30 13:03:40.252297 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:03:40.253337 systemd[1]: Reached target network.target - Network.
Jan 30 13:03:40.255408 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:03:40.255420 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:03:40.257732 systemd-networkd[771]: eth0: Link UP
Jan 30 13:03:40.258124 ignition[666]: Ignition 2.20.0
Jan 30 13:03:40.257735 systemd-networkd[771]: eth0: Gained carrier
Jan 30 13:03:40.258131 ignition[666]: Stage: fetch-offline
Jan 30 13:03:40.257741 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:03:40.258186 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:03:40.258196 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:03:40.258614 ignition[666]: parsed url from cmdline: ""
Jan 30 13:03:40.258617 ignition[666]: no config URL provided
Jan 30 13:03:40.258622 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:03:40.258629 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:03:40.258655 ignition[666]: op(1): [started] loading QEMU firmware config module
Jan 30 13:03:40.258659 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 30 13:03:40.269909 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 30 13:03:40.272118 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:03:40.277105 ignition[666]: parsing config with SHA512: ebf71d24981a95711ed6bbd553d95297db40dad842e87dbc06166c82461187379f2df53d3156150f939dc43e93f06b28ca9b719dcce8ab4200d2afa3555cb464
Jan 30 13:03:40.283710 unknown[666]: fetched base config from "system"
Jan 30 13:03:40.283726 unknown[666]: fetched user config from "qemu"
Jan 30 13:03:40.284402 ignition[666]: fetch-offline: fetch-offline passed
Jan 30 13:03:40.284495 ignition[666]: Ignition finished successfully
Jan 30 13:03:40.285942 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:03:40.287446 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 30 13:03:40.297270 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:03:40.308685 ignition[779]: Ignition 2.20.0
Jan 30 13:03:40.308694 ignition[779]: Stage: kargs
Jan 30 13:03:40.308868 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:03:40.308878 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:03:40.309630 ignition[779]: kargs: kargs passed
Jan 30 13:03:40.312994 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:03:40.309675 ignition[779]: Ignition finished successfully
Jan 30 13:03:40.321284 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:03:40.333267 ignition[788]: Ignition 2.20.0
Jan 30 13:03:40.333279 ignition[788]: Stage: disks
Jan 30 13:03:40.333441 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:03:40.333451 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:03:40.334176 ignition[788]: disks: disks passed
Jan 30 13:03:40.334230 ignition[788]: Ignition finished successfully
Jan 30 13:03:40.338232 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:03:40.340331 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:03:40.342373 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:03:40.343438 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:03:40.345233 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:03:40.346835 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:03:40.358272 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:03:40.382857 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 13:03:40.387526 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:03:40.401238 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:03:40.449070 kernel: EXT4-fs (vda9): mounted filesystem 88903c49-366d-43ff-90b1-141790b6e85c r/w with ordered data mode. Quota mode: none.
Jan 30 13:03:40.449497 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:03:40.450683 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:03:40.465171 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:03:40.467452 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:03:40.468367 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 13:03:40.468410 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:03:40.468433 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:03:40.474350 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:03:40.476086 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:03:40.482068 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (806)
Jan 30 13:03:40.482130 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:03:40.484366 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:03:40.484403 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:03:40.489078 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:03:40.490836 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:03:40.528635 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:03:40.533500 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:03:40.538171 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:03:40.542500 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:03:40.645946 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:03:40.658162 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:03:40.659696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:03:40.668082 kernel: BTRFS info (device vda6): last unmount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:03:40.688229 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:03:40.691039 ignition[919]: INFO : Ignition 2.20.0
Jan 30 13:03:40.691039 ignition[919]: INFO : Stage: mount
Jan 30 13:03:40.692430 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:03:40.692430 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:03:40.692430 ignition[919]: INFO : mount: mount passed
Jan 30 13:03:40.692430 ignition[919]: INFO : Ignition finished successfully
Jan 30 13:03:40.693610 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:03:40.707167 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:03:40.954379 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:03:40.964262 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:03:40.973075 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (933)
Jan 30 13:03:40.975247 kernel: BTRFS info (device vda6): first mount of filesystem 5265f28b-8d78-4be2-8b05-2145d9ab7cfa
Jan 30 13:03:40.975266 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 13:03:40.975277 kernel: BTRFS info (device vda6): using free space tree
Jan 30 13:03:40.978069 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 13:03:40.979230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:03:41.007177 ignition[951]: INFO : Ignition 2.20.0
Jan 30 13:03:41.007177 ignition[951]: INFO : Stage: files
Jan 30 13:03:41.008672 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:03:41.008672 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:03:41.008672 ignition[951]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:03:41.011210 ignition[951]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:03:41.011210 ignition[951]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:03:41.014534 ignition[951]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:03:41.016319 ignition[951]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:03:41.016319 ignition[951]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:03:41.015043 unknown[951]: wrote ssh authorized keys file for user: core
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:03:41.019239 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 13:03:41.310600 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 30 13:03:41.548365 ignition[951]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 13:03:41.548365 ignition[951]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jan 30 13:03:41.554892 ignition[951]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:03:41.554892 ignition[951]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 13:03:41.554892 ignition[951]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 30 13:03:41.554892 ignition[951]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:03:41.576499 ignition[951]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:03:41.583510 ignition[951]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 13:03:41.584965 ignition[951]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 13:03:41.584965 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:03:41.584965 ignition[951]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:03:41.584965 ignition[951]: INFO : files: files passed
Jan 30 13:03:41.584965 ignition[951]: INFO : Ignition finished successfully
Jan 30 13:03:41.586258 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:03:41.594264 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:03:41.595845 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:03:41.598322 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:03:41.598415 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:03:41.605244 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 13:03:41.612622 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:03:41.612622 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:03:41.620409 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:03:41.619110 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:03:41.621485 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:03:41.630234 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:03:41.651979 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:03:41.652117 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:03:41.654034 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:03:41.655655 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:03:41.657209 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:03:41.657994 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:03:41.674540 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:03:41.687294 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:03:41.696432 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:03:41.697411 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:03:41.699125 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:03:41.700739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:03:41.700865 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:03:41.703012 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:03:41.704744 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:03:41.706194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:03:41.707625 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:03:41.709175 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:03:41.710769 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:03:41.712414 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:03:41.714039 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:03:41.715764 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:03:41.717171 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:03:41.718437 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:03:41.718562 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:03:41.720652 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:03:41.722357 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:03:41.723980 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:03:41.724076 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:03:41.725822 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:03:41.725939 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:03:41.728371 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:03:41.728481 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:03:41.730087 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:03:41.731369 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:03:41.731471 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:03:41.733086 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:03:41.734579 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:03:41.735791 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:03:41.735878 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:03:41.737204 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:03:41.737282 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:03:41.738938 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:03:41.739044 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:03:41.740598 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:03:41.740692 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:03:41.753270 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:03:41.754780 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:03:41.755531 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:03:41.755650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:03:41.757342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:03:41.757444 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:03:41.762506 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:03:41.762598 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:03:41.766736 ignition[1005]: INFO : Ignition 2.20.0 Jan 30 13:03:41.766736 ignition[1005]: INFO : Stage: umount Jan 30 13:03:41.769149 ignition[1005]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:03:41.769149 ignition[1005]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:03:41.769149 ignition[1005]: INFO : umount: umount passed Jan 30 13:03:41.769149 ignition[1005]: INFO : Ignition finished successfully Jan 30 13:03:41.769811 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:03:41.769939 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:03:41.771504 systemd[1]: Stopped target network.target - Network. Jan 30 13:03:41.772792 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:03:41.772887 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:03:41.774938 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:03:41.774981 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:03:41.777244 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:03:41.777287 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:03:41.778745 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:03:41.778785 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:03:41.780615 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:03:41.781896 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:03:41.784226 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:03:41.792116 systemd-networkd[771]: eth0: DHCPv6 lease lost Jan 30 13:03:41.793159 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:03:41.793275 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:03:41.795689 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:03:41.795817 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:03:41.798273 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:03:41.798332 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:03:41.811194 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:03:41.811905 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:03:41.811966 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:03:41.813862 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:03:41.813908 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:03:41.815504 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:03:41.815547 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:03:41.817364 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:03:41.817408 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:03:41.822390 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:03:41.826747 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 30 13:03:41.826842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:03:41.832042 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:03:41.832149 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:03:41.834927 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:03:41.835066 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:03:41.837029 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:03:41.837126 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:03:41.840111 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:03:41.840192 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:03:41.841279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:03:41.841322 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:03:41.842929 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:03:41.842985 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:03:41.845385 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:03:41.845444 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:03:41.847583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:03:41.847629 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:03:41.850360 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:03:41.851792 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:03:41.851850 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:03:41.853528 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:03:41.853580 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:03:41.855577 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:03:41.855625 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:03:41.857582 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:03:41.857629 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:03:41.861642 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:03:41.861750 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:03:41.863617 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:03:41.866376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:03:41.876806 systemd[1]: Switching root. Jan 30 13:03:41.904697 systemd-journald[238]: Journal stopped Jan 30 13:03:42.671214 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). 
Jan 30 13:03:42.671272 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:03:42.671287 kernel: SELinux: policy capability open_perms=1 Jan 30 13:03:42.671296 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:03:42.671305 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:03:42.671314 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:03:42.671323 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:03:42.671340 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:03:42.671350 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:03:42.671359 kernel: audit: type=1403 audit(1738242222.033:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:03:42.671369 systemd[1]: Successfully loaded SELinux policy in 37.412ms. Jan 30 13:03:42.671387 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.768ms. Jan 30 13:03:42.671398 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:03:42.671409 systemd[1]: Detected virtualization kvm. Jan 30 13:03:42.671419 systemd[1]: Detected architecture arm64. Jan 30 13:03:42.671429 systemd[1]: Detected first boot. Jan 30 13:03:42.671438 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:03:42.671448 zram_generator::config[1050]: No configuration found. Jan 30 13:03:42.671459 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:03:42.671470 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:03:42.671482 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:03:42.671492 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:03:42.671504 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:03:42.671514 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:03:42.671524 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:03:42.671536 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:03:42.671545 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:03:42.671555 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:03:42.671565 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:03:42.671575 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:03:42.671585 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:03:42.671595 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:03:42.671605 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:03:42.671616 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:03:42.671627 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 30 13:03:42.671637 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:03:42.671647 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 30 13:03:42.671657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:03:42.671667 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:03:42.671677 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:03:42.671687 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:03:42.671700 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:03:42.671710 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:03:42.671721 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:03:42.671731 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:03:42.671755 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:03:42.671766 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:03:42.671775 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:03:42.671787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:03:42.671797 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:03:42.671808 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:03:42.671819 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:03:42.671829 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:03:42.671840 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:03:42.671850 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:03:42.671860 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:03:42.671870 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:03:42.671881 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:03:42.671891 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:03:42.671904 systemd[1]: Reached target machines.target - Containers. Jan 30 13:03:42.671914 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:03:42.671928 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:03:42.671938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:03:42.671948 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:03:42.671959 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:03:42.671969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:03:42.671979 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:03:42.671991 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:03:42.672001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 30 13:03:42.672011 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:03:42.672021 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:03:42.672031 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:03:42.672041 kernel: fuse: init (API version 7.39) Jan 30 13:03:42.672050 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:03:42.674111 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:03:42.674127 kernel: loop: module loaded Jan 30 13:03:42.674144 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:03:42.674155 kernel: ACPI: bus type drm_connector registered Jan 30 13:03:42.674165 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:03:42.674175 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:03:42.674195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:03:42.674208 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:03:42.674218 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:03:42.674258 systemd-journald[1117]: Collecting audit messages is disabled. Jan 30 13:03:42.674283 systemd[1]: Stopped verity-setup.service. Jan 30 13:03:42.674293 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:03:42.674304 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:03:42.674314 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:03:42.674327 systemd-journald[1117]: Journal started Jan 30 13:03:42.674353 systemd-journald[1117]: Runtime Journal (/run/log/journal/b2c00ca831c84fae9b48a18c8b8273ca) is 5.9M, max 47.3M, 41.4M free. Jan 30 13:03:42.485267 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:03:42.500566 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:03:42.500941 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:03:42.675712 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:03:42.677034 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:03:42.677978 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:03:42.679477 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:03:42.680529 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:03:42.683074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:03:42.684337 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:03:42.684491 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:03:42.687509 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:03:42.687658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:03:42.689754 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:03:42.689905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:03:42.691035 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:03:42.691238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 13:03:42.692434 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:03:42.692566 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:03:42.693711 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:03:42.693853 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:03:42.695093 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:03:42.696280 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:03:42.697619 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:03:42.712514 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:03:42.727216 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:03:42.729175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:03:42.730000 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:03:42.730040 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:03:42.731882 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:03:42.734035 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:03:42.735988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:03:42.737026 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:03:42.738515 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:03:42.742325 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:03:42.743429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:03:42.745276 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:03:42.746195 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:03:42.750243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:03:42.753270 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:03:42.758581 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:03:42.761166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:03:42.762586 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:03:42.763615 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:03:42.766124 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:03:42.766411 systemd-journald[1117]: Time spent on flushing to /var/log/journal/b2c00ca831c84fae9b48a18c8b8273ca is 27.021ms for 846 entries. Jan 30 13:03:42.766411 systemd-journald[1117]: System Journal (/var/log/journal/b2c00ca831c84fae9b48a18c8b8273ca) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:03:42.801242 systemd-journald[1117]: Received client request to flush runtime journal. 
Jan 30 13:03:42.801281 kernel: loop0: detected capacity change from 0 to 194096 Jan 30 13:03:42.768031 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:03:42.780248 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:03:42.782771 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:03:42.785233 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:03:42.793115 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:03:42.803092 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:03:42.803596 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:03:42.812913 udevadm[1172]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:03:42.828037 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 30 13:03:42.828065 systemd-tmpfiles[1162]: ACLs are not supported, ignoring. Jan 30 13:03:42.830777 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:03:42.833091 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:03:42.836522 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:03:42.844145 kernel: loop1: detected capacity change from 0 to 113552 Jan 30 13:03:42.844250 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:03:42.882093 kernel: loop2: detected capacity change from 0 to 116784 Jan 30 13:03:42.882823 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:03:42.895321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:03:42.912528 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 30 13:03:42.912546 systemd-tmpfiles[1186]: ACLs are not supported, ignoring. Jan 30 13:03:42.915076 kernel: loop3: detected capacity change from 0 to 194096 Jan 30 13:03:42.917091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:03:42.925093 kernel: loop4: detected capacity change from 0 to 113552 Jan 30 13:03:42.930087 kernel: loop5: detected capacity change from 0 to 116784 Jan 30 13:03:42.933598 (sd-merge)[1188]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:03:42.933993 (sd-merge)[1188]: Merged extensions into '/usr'. Jan 30 13:03:42.938427 systemd[1]: Reloading requested from client PID 1161 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:03:42.938445 systemd[1]: Reloading... Jan 30 13:03:42.998078 zram_generator::config[1215]: No configuration found. Jan 30 13:03:43.073751 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:03:43.101748 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:43.138326 systemd[1]: Reloading finished in 199 ms. Jan 30 13:03:43.168889 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:03:43.170262 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Jan 30 13:03:43.186282 systemd[1]: Starting ensure-sysext.service... Jan 30 13:03:43.188238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:03:43.197158 systemd[1]: Reloading requested from client PID 1250 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:03:43.197175 systemd[1]: Reloading... Jan 30 13:03:43.222212 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:03:43.222433 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:03:43.223091 systemd-tmpfiles[1251]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:03:43.223315 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 30 13:03:43.223360 systemd-tmpfiles[1251]: ACLs are not supported, ignoring. Jan 30 13:03:43.226840 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:03:43.226852 systemd-tmpfiles[1251]: Skipping /boot Jan 30 13:03:43.236490 systemd-tmpfiles[1251]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:03:43.236507 systemd-tmpfiles[1251]: Skipping /boot Jan 30 13:03:43.245082 zram_generator::config[1278]: No configuration found. Jan 30 13:03:43.335024 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:43.372659 systemd[1]: Reloading finished in 175 ms. Jan 30 13:03:43.387381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:03:43.401495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:03:43.413638 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:03:43.416132 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:03:43.418383 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:03:43.423405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:03:43.426975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:03:43.431440 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:03:43.437430 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:03:43.438649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:03:43.442637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:03:43.448503 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:03:43.449665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:03:43.454408 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:03:43.455832 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:03:43.457627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:03:43.457760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 13:03:43.459482 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:03:43.459608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:03:43.463534 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:03:43.465894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:03:43.471723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:03:43.472993 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:03:43.474512 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:03:43.480618 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:03:43.481241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:03:43.482910 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:03:43.483087 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:03:43.484587 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:03:43.484735 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:03:43.491840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:03:43.492081 systemd-udevd[1319]: Using default interface naming scheme 'v255'. Jan 30 13:03:43.496541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:03:43.506197 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:03:43.509601 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:03:43.516551 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:03:43.517692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:03:43.518799 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:03:43.520494 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:03:43.522981 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:03:43.523156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:03:43.524495 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:03:43.524623 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:03:43.526033 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:03:43.526258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:03:43.528842 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:03:43.530213 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:03:43.538677 systemd[1]: Finished ensure-sysext.service. Jan 30 13:03:43.540785 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:03:43.549949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:03:43.550126 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 30 13:03:43.573692 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:03:43.574954 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:03:43.577014 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:03:43.600432 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:03:43.601462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:03:43.618627 augenrules[1390]: No rules Jan 30 13:03:43.620877 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:03:43.621134 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 13:03:43.666516 systemd-resolved[1317]: Positive Trust Anchors: Jan 30 13:03:43.669538 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:03:43.670814 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:03:43.671583 systemd-resolved[1317]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:03:43.671626 systemd-resolved[1317]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:03:43.681823 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 30 13:03:43.689276 systemd-resolved[1317]: Defaulting to hostname 'linux'. Jan 30 13:03:43.691603 systemd-networkd[1378]: lo: Link UP Jan 30 13:03:43.691616 systemd-networkd[1378]: lo: Gained carrier Jan 30 13:03:43.692598 systemd-networkd[1378]: Enumeration completed Jan 30 13:03:43.692711 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:03:43.709390 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:03:43.709399 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:03:43.710436 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:03:43.710473 systemd-networkd[1378]: eth0: Link UP Jan 30 13:03:43.710476 systemd-networkd[1378]: eth0: Gained carrier Jan 30 13:03:43.710484 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:03:43.711268 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:03:43.712534 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:03:43.713874 systemd[1]: Reached target network.target - Network. Jan 30 13:03:43.715390 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 30 13:03:43.721493 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.103/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:03:43.727492 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. Jan 30 13:03:43.728156 systemd-timesyncd[1365]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:03:43.728222 systemd-timesyncd[1365]: Initial clock synchronization to Thu 2025-01-30 13:03:43.923373 UTC. Jan 30 13:03:43.732078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1381) Jan 30 13:03:43.764982 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:03:43.788378 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:03:43.790849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:03:43.792189 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:03:43.804307 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:03:43.805723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:03:43.815345 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:03:43.846518 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:03:43.847959 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:03:43.850879 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:03:43.851896 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:03:43.852890 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:03:43.853918 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:03:43.855159 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:03:43.856191 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:03:43.857178 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:03:43.858130 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:03:43.858163 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:03:43.858855 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:03:43.861518 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:03:43.863754 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:03:43.875078 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:03:43.877309 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:03:43.878823 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:03:43.879932 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:03:43.880873 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:03:43.881698 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Jan 30 13:03:43.881727 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:03:43.882834 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:03:43.884782 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:03:43.887198 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:03:43.888218 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:03:43.890326 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:03:43.893223 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:03:43.894365 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:03:43.900293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:03:43.902476 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:03:43.905342 jq[1424]: false Jan 30 13:03:43.914245 extend-filesystems[1425]: Found loop3 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found loop4 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found loop5 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda1 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda2 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda3 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found usr Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda4 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda6 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda7 Jan 30 13:03:43.914245 extend-filesystems[1425]: Found vda9 Jan 30 13:03:43.914245 extend-filesystems[1425]: Checking size of /dev/vda9 Jan 30 13:03:43.913281 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:03:43.915245 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:03:43.915732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:03:43.918379 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:03:43.920449 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:03:43.924078 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:03:43.927034 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:03:43.931129 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:03:43.931492 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:03:43.931646 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:03:43.934030 jq[1439]: true Jan 30 13:03:43.936870 extend-filesystems[1425]: Resized partition /dev/vda9 Jan 30 13:03:43.938804 dbus-daemon[1423]: [system] SELinux support is enabled Jan 30 13:03:43.941328 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 13:03:43.946113 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1381) Jan 30 13:03:43.960189 (ntainerd)[1453]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:03:43.967712 jq[1452]: true Jan 30 13:03:43.970117 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:03:43.977968 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:03:43.978174 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:03:43.984278 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:03:43.984439 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:03:43.985961 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:03:43.986006 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:03:43.996764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:03:43.998646 systemd-logind[1432]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 13:03:44.002228 systemd-logind[1432]: New seat seat0. Jan 30 13:03:44.006851 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:03:44.011218 update_engine[1436]: I20250130 13:03:44.011049 1436 main.cc:92] Flatcar Update Engine starting Jan 30 13:03:44.014128 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:03:44.014301 update_engine[1436]: I20250130 13:03:44.014256 1436 update_check_scheduler.cc:74] Next update check in 4m25s Jan 30 13:03:44.025342 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:03:44.048112 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:03:44.063693 locksmithd[1468]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:03:44.067997 extend-filesystems[1451]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:03:44.067997 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:03:44.067997 extend-filesystems[1451]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:03:44.072356 extend-filesystems[1425]: Resized filesystem in /dev/vda9 Jan 30 13:03:44.069408 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:03:44.069594 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:03:44.074257 bash[1473]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:03:44.075805 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:03:44.077503 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:03:44.178713 containerd[1453]: time="2025-01-30T13:03:44.178567170Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 13:03:44.211782 containerd[1453]: time="2025-01-30T13:03:44.211549398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:03:44.212942 containerd[1453]: time="2025-01-30T13:03:44.212909086Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:03:44.213017 containerd[1453]: time="2025-01-30T13:03:44.213002348Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213076229Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213243209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213274474Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213338111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213350240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213514393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213529144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213542257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213551517Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213616220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214061 containerd[1453]: time="2025-01-30T13:03:44.213800082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214306 containerd[1453]: time="2025-01-30T13:03:44.213905965Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:03:44.214306 containerd[1453]: time="2025-01-30T13:03:44.213918873Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 30 13:03:44.214306 containerd[1453]: time="2025-01-30T13:03:44.213985952Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:03:44.214306 containerd[1453]: time="2025-01-30T13:03:44.214022872Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:03:44.222863 containerd[1453]: time="2025-01-30T13:03:44.222827893Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:03:44.223015 containerd[1453]: time="2025-01-30T13:03:44.222992906Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:03:44.223116 containerd[1453]: time="2025-01-30T13:03:44.223101781Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:03:44.223211 containerd[1453]: time="2025-01-30T13:03:44.223196970Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:03:44.223270 containerd[1453]: time="2025-01-30T13:03:44.223257902Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:03:44.223492 containerd[1453]: time="2025-01-30T13:03:44.223471964Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:03:44.223847 containerd[1453]: time="2025-01-30T13:03:44.223828133Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:03:44.224020 containerd[1453]: time="2025-01-30T13:03:44.224000645Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:03:44.224101 containerd[1453]: time="2025-01-30T13:03:44.224085180Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:03:44.224176 containerd[1453]: time="2025-01-30T13:03:44.224162298Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:03:44.224229 containerd[1453]: time="2025-01-30T13:03:44.224217698Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224291 containerd[1453]: time="2025-01-30T13:03:44.224270230Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224363 containerd[1453]: time="2025-01-30T13:03:44.224350954Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224427 containerd[1453]: time="2025-01-30T13:03:44.224414058Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224481 containerd[1453]: time="2025-01-30T13:03:44.224470155Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224543 containerd[1453]: time="2025-01-30T13:03:44.224521007Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224595 containerd[1453]: time="2025-01-30T13:03:44.224583169Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 30 13:03:44.224646 containerd[1453]: time="2025-01-30T13:03:44.224634718Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:03:44.224730 containerd[1453]: time="2025-01-30T13:03:44.224715606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.224790 containerd[1453]: time="2025-01-30T13:03:44.224777316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.224842 containerd[1453]: time="2025-01-30T13:03:44.224830750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.224901 containerd[1453]: time="2025-01-30T13:03:44.224889019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.224952 containerd[1453]: time="2025-01-30T13:03:44.224941715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225012 containerd[1453]: time="2025-01-30T13:03:44.224999902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225070 containerd[1453]: time="2025-01-30T13:03:44.225058088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225141 containerd[1453]: time="2025-01-30T13:03:44.225127175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225197 containerd[1453]: time="2025-01-30T13:03:44.225185526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225256 containerd[1453]: time="2025-01-30T13:03:44.225243672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225306 containerd[1453]: time="2025-01-30T13:03:44.225295220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225374 containerd[1453]: time="2025-01-30T13:03:44.225360865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225426 containerd[1453]: time="2025-01-30T13:03:44.225414626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225479 containerd[1453]: time="2025-01-30T13:03:44.225468101Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:03:44.225548 containerd[1453]: time="2025-01-30T13:03:44.225531533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225629 containerd[1453]: time="2025-01-30T13:03:44.225615330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.225679 containerd[1453]: time="2025-01-30T13:03:44.225668436Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:03:44.225934 containerd[1453]: time="2025-01-30T13:03:44.225918065Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 30 13:03:44.226017 containerd[1453]: time="2025-01-30T13:03:44.226002559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:03:44.226127 containerd[1453]: time="2025-01-30T13:03:44.226115368Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:03:44.226184 containerd[1453]: time="2025-01-30T13:03:44.226169293Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:03:44.226229 containerd[1453]: time="2025-01-30T13:03:44.226217973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.226280 containerd[1453]: time="2025-01-30T13:03:44.226268907Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:03:44.226359 containerd[1453]: time="2025-01-30T13:03:44.226333733Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:03:44.226412 containerd[1453]: time="2025-01-30T13:03:44.226400156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:03:44.226865 containerd[1453]: time="2025-01-30T13:03:44.226802055Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:03:44.227031 containerd[1453]: time="2025-01-30T13:03:44.227015092Z" level=info msg="Connect containerd service" Jan 30 13:03:44.227171 containerd[1453]: time="2025-01-30T13:03:44.227154372Z" level=info msg="using legacy CRI server" Jan 30 13:03:44.227223 containerd[1453]: time="2025-01-30T13:03:44.227211657Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:03:44.227524 containerd[1453]: time="2025-01-30T13:03:44.227505132Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:03:44.228348 containerd[1453]: time="2025-01-30T13:03:44.228319174Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:03:44.228695 containerd[1453]: time="2025-01-30T13:03:44.228661821Z" level=info msg="Start subscribing containerd event" Jan 30 13:03:44.228806 containerd[1453]: time="2025-01-30T13:03:44.228789422Z" level=info msg="Start recovering state" Jan 30 13:03:44.228918 containerd[1453]: time="2025-01-30T13:03:44.228904935Z" level=info msg="Start event monitor" Jan 30 13:03:44.229008 containerd[1453]: time="2025-01-30T13:03:44.228995535Z" level=info msg="Start snapshots syncer" Jan 30 13:03:44.229116 containerd[1453]: time="2025-01-30T13:03:44.229101828Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:03:44.229174 containerd[1453]: time="2025-01-30T13:03:44.229162392Z" level=info msg="Start streaming server" Jan 30 13:03:44.229749 containerd[1453]: time="2025-01-30T13:03:44.229724878Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:03:44.229787 containerd[1453]: time="2025-01-30T13:03:44.229780524Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:03:44.229852 containerd[1453]: time="2025-01-30T13:03:44.229840268Z" level=info msg="containerd successfully booted in 0.053945s" Jan 30 13:03:44.232225 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:03:44.461737 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:03:44.483125 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:03:44.495355 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:03:44.501749 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:03:44.501976 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:03:44.506755 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:03:44.519835 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:03:44.523355 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:03:44.525575 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 30 13:03:44.526846 systemd[1]: Reached target getty.target - Login Prompts. 
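The "failed to load cni during init" entry above is the CRI plugin reporting an empty /etc/cni/net.d at startup; per the config dump, it watches NetworkPluginConfDir=/etc/cni/net.d with NetworkPluginMaxConfNum=1, and the condition presumably clears once a CNI provider (Cilium, set up later in this log) drops a config there. A minimal sketch, assuming Python 3 on the node; the directory and limit are taken from the config dump above, and nothing in the snippet is part of the boot record:

#!/usr/bin/env python3
# Illustration only: report what containerd's CRI plugin would find in its
# CNI conf dir at this point in the boot. Directory and limit mirror the
# NetworkPluginConfDir / NetworkPluginMaxConfNum values in the config dump.
import json
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")
MAX_CONF_NUM = 1

def cni_status() -> str:
    if not CNI_CONF_DIR.is_dir():
        return f"no network config found: {CNI_CONF_DIR} is missing"
    confs = sorted(CNI_CONF_DIR.glob("*.conf*"))[:MAX_CONF_NUM]
    if not confs:
        return f"no network config found in {CNI_CONF_DIR}"
    names = []
    for conf in confs:
        try:
            names.append(json.loads(conf.read_text()).get("name", conf.name))
        except (OSError, json.JSONDecodeError):
            names.append(f"{conf.name} (unreadable)")
    return "loaded network config: " + ", ".join(names)

if __name__ == "__main__":
    print(cni_status())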
Jan 30 13:03:45.649488 systemd-networkd[1378]: eth0: Gained IPv6LL Jan 30 13:03:45.652897 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:03:45.654644 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:03:45.670439 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:03:45.673390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:45.675873 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:03:45.701621 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:03:45.704979 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:03:45.705331 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:03:45.707947 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:03:46.375038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:46.376699 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:03:46.382298 systemd[1]: Startup finished in 634ms (kernel) + 4.319s (initrd) + 4.386s (userspace) = 9.341s. Jan 30 13:03:46.383264 (kubelet)[1530]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:03:46.415021 agetty[1507]: failed to open credentials directory Jan 30 13:03:46.417991 agetty[1506]: failed to open credentials directory Jan 30 13:03:47.207466 kubelet[1530]: E0130 13:03:47.207403 1530 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:03:47.209588 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:03:47.209738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:03:47.210676 systemd[1]: kubelet.service: Consumed 1.014s CPU time. Jan 30 13:03:51.121947 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:03:51.123219 systemd[1]: Started sshd@0-10.0.0.103:22-10.0.0.1:54276.service - OpenSSH per-connection server daemon (10.0.0.1:54276). Jan 30 13:03:51.192918 sshd[1544]: Accepted publickey for core from 10.0.0.1 port 54276 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:51.196869 sshd-session[1544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:51.205297 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:03:51.215388 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:03:51.217044 systemd-logind[1432]: New session 1 of user core. Jan 30 13:03:51.227185 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:03:51.239420 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:03:51.241849 (systemd)[1548]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:03:51.319605 systemd[1548]: Queued start job for default target default.target. Jan 30 13:03:51.332151 systemd[1548]: Created slice app.slice - User Application Slice. 
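The kubelet failure at 13:03:47 above is the first start of kubelet.service exiting because /var/lib/kubelet/config.yaml does not exist yet; the file expected there is a KubeletConfiguration document (apiVersion kubelet.config.k8s.io/v1beta1, kind KubeletConfiguration), and the restart later in this log succeeds, so something (presumably the install.sh run in session 7 below) has written it by then. A minimal check, assuming Python 3 and no YAML dependency; the path comes from the error message:

#!/usr/bin/env python3
# Illustration only: check for the config file whose absence made
# kubelet.service exit above. The apiVersion/kind named in the comment are
# the standard KubeletConfiguration header, not values read from this node.
from pathlib import Path

CONFIG = Path("/var/lib/kubelet/config.yaml")  # expected: kind KubeletConfiguration,
                                               # apiVersion kubelet.config.k8s.io/v1beta1

def check() -> str:
    if not CONFIG.exists():
        return f"missing: {CONFIG} (kubelet exits with the error seen above)"
    header = {}
    for line in CONFIG.read_text().splitlines():
        key, _, value = line.partition(":")
        if key.strip() in ("apiVersion", "kind") and key.strip() not in header:
            header[key.strip()] = value.strip()
    return f"present: {CONFIG} {header}"

if __name__ == "__main__":
    print(check())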
Jan 30 13:03:51.332204 systemd[1548]: Reached target paths.target - Paths. Jan 30 13:03:51.332217 systemd[1548]: Reached target timers.target - Timers. Jan 30 13:03:51.333607 systemd[1548]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:03:51.349865 systemd[1548]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:03:51.349992 systemd[1548]: Reached target sockets.target - Sockets. Jan 30 13:03:51.350009 systemd[1548]: Reached target basic.target - Basic System. Jan 30 13:03:51.350049 systemd[1548]: Reached target default.target - Main User Target. Jan 30 13:03:51.350093 systemd[1548]: Startup finished in 101ms. Jan 30 13:03:51.350302 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:03:51.353252 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:03:51.412625 systemd[1]: Started sshd@1-10.0.0.103:22-10.0.0.1:54286.service - OpenSSH per-connection server daemon (10.0.0.1:54286). Jan 30 13:03:51.457818 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 54286 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:51.459260 sshd-session[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:51.463317 systemd-logind[1432]: New session 2 of user core. Jan 30 13:03:51.473283 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:03:51.527873 sshd[1561]: Connection closed by 10.0.0.1 port 54286 Jan 30 13:03:51.528258 sshd-session[1559]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:51.535620 systemd[1]: sshd@1-10.0.0.103:22-10.0.0.1:54286.service: Deactivated successfully. Jan 30 13:03:51.537286 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:03:51.539291 systemd-logind[1432]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:03:51.540584 systemd[1]: Started sshd@2-10.0.0.103:22-10.0.0.1:54290.service - OpenSSH per-connection server daemon (10.0.0.1:54290). Jan 30 13:03:51.541806 systemd-logind[1432]: Removed session 2. Jan 30 13:03:51.588264 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 54290 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:51.589600 sshd-session[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:51.593803 systemd-logind[1432]: New session 3 of user core. Jan 30 13:03:51.612310 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:03:51.662311 sshd[1568]: Connection closed by 10.0.0.1 port 54290 Jan 30 13:03:51.662777 sshd-session[1566]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:51.669609 systemd[1]: sshd@2-10.0.0.103:22-10.0.0.1:54290.service: Deactivated successfully. Jan 30 13:03:51.671631 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:03:51.673205 systemd-logind[1432]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:03:51.674645 systemd[1]: Started sshd@3-10.0.0.103:22-10.0.0.1:54306.service - OpenSSH per-connection server daemon (10.0.0.1:54306). Jan 30 13:03:51.675567 systemd-logind[1432]: Removed session 3. Jan 30 13:03:51.741318 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 54306 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:51.742681 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:51.746918 systemd-logind[1432]: New session 4 of user core. 
Jan 30 13:03:51.755279 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:03:51.807586 sshd[1575]: Connection closed by 10.0.0.1 port 54306 Jan 30 13:03:51.808248 sshd-session[1573]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:51.820706 systemd[1]: sshd@3-10.0.0.103:22-10.0.0.1:54306.service: Deactivated successfully. Jan 30 13:03:51.822583 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:03:51.825295 systemd-logind[1432]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:03:51.827017 systemd[1]: Started sshd@4-10.0.0.103:22-10.0.0.1:54312.service - OpenSSH per-connection server daemon (10.0.0.1:54312). Jan 30 13:03:51.827859 systemd-logind[1432]: Removed session 4. Jan 30 13:03:51.875018 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 54312 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:51.876428 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:51.880685 systemd-logind[1432]: New session 5 of user core. Jan 30 13:03:51.888334 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:03:51.959890 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:03:51.960618 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:51.973686 sudo[1583]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:51.975575 sshd[1582]: Connection closed by 10.0.0.1 port 54312 Jan 30 13:03:51.976122 sshd-session[1580]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:51.986867 systemd[1]: sshd@4-10.0.0.103:22-10.0.0.1:54312.service: Deactivated successfully. Jan 30 13:03:51.990385 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:03:51.992543 systemd-logind[1432]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:03:52.005973 systemd[1]: Started sshd@5-10.0.0.103:22-10.0.0.1:54328.service - OpenSSH per-connection server daemon (10.0.0.1:54328). Jan 30 13:03:52.006917 systemd-logind[1432]: Removed session 5. Jan 30 13:03:52.050371 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 54328 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:52.051851 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:52.056151 systemd-logind[1432]: New session 6 of user core. Jan 30 13:03:52.066290 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:03:52.119008 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:03:52.119378 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:52.123133 sudo[1592]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:52.128540 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 30 13:03:52.128853 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:52.151456 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 13:03:52.177228 augenrules[1614]: No rules Jan 30 13:03:52.177939 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:03:52.178149 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 30 13:03:52.181330 sudo[1591]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:52.183895 sshd[1590]: Connection closed by 10.0.0.1 port 54328 Jan 30 13:03:52.183307 sshd-session[1588]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:52.190536 systemd[1]: sshd@5-10.0.0.103:22-10.0.0.1:54328.service: Deactivated successfully. Jan 30 13:03:52.192347 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:03:52.194462 systemd-logind[1432]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:03:52.198651 systemd[1]: Started sshd@6-10.0.0.103:22-10.0.0.1:54338.service - OpenSSH per-connection server daemon (10.0.0.1:54338). Jan 30 13:03:52.199520 systemd-logind[1432]: Removed session 6. Jan 30 13:03:52.244155 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 54338 ssh2: RSA SHA256:DFbjE3cliO0t0vQoroiQEd9uw5v6TFYRV953GUOdMNo Jan 30 13:03:52.246150 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:03:52.253792 systemd-logind[1432]: New session 7 of user core. Jan 30 13:03:52.268584 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:03:52.322587 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:03:52.322895 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:03:52.348429 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:03:52.365802 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:03:52.366001 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:03:52.926825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:52.927001 systemd[1]: kubelet.service: Consumed 1.014s CPU time. Jan 30 13:03:52.942393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:52.960297 systemd[1]: Reloading requested from client PID 1674 ('systemctl') (unit session-7.scope)... Jan 30 13:03:52.960316 systemd[1]: Reloading... Jan 30 13:03:53.041111 zram_generator::config[1712]: No configuration found. Jan 30 13:03:53.241773 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:03:53.296975 systemd[1]: Reloading finished in 336 ms. Jan 30 13:03:53.342595 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:03:53.342727 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:03:53.342997 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:53.344885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:03:53.449578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:03:53.455502 (kubelet)[1757]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:03:53.503533 kubelet[1757]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:53.503533 kubelet[1757]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jan 30 13:03:53.503533 kubelet[1757]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:03:53.503893 kubelet[1757]: I0130 13:03:53.503607 1757 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:03:53.886347 kubelet[1757]: I0130 13:03:53.886292 1757 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:03:53.886347 kubelet[1757]: I0130 13:03:53.886328 1757 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:03:53.886560 kubelet[1757]: I0130 13:03:53.886539 1757 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:03:53.921030 kubelet[1757]: I0130 13:03:53.920942 1757 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:03:53.937148 kubelet[1757]: I0130 13:03:53.937108 1757 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:03:53.937585 kubelet[1757]: I0130 13:03:53.937528 1757 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:03:53.937780 kubelet[1757]: I0130 13:03:53.937566 1757 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:03:53.937942 kubelet[1757]: I0130 13:03:53.937929 1757 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:03:53.937942 kubelet[1757]: I0130 13:03:53.937941 1757 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:03:53.938273 kubelet[1757]: I0130 13:03:53.938223 1757 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:53.939505 kubelet[1757]: I0130 
13:03:53.939464 1757 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:03:53.939505 kubelet[1757]: I0130 13:03:53.939489 1757 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:03:53.939854 kubelet[1757]: I0130 13:03:53.939827 1757 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:03:53.940180 kubelet[1757]: I0130 13:03:53.940159 1757 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:03:53.940180 kubelet[1757]: E0130 13:03:53.940164 1757 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:53.940583 kubelet[1757]: E0130 13:03:53.940551 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:53.942503 kubelet[1757]: I0130 13:03:53.942470 1757 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 13:03:53.943914 kubelet[1757]: I0130 13:03:53.942960 1757 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:03:53.943914 kubelet[1757]: W0130 13:03:53.943021 1757 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 13:03:53.944088 kubelet[1757]: I0130 13:03:53.944044 1757 server.go:1264] "Started kubelet" Jan 30 13:03:53.945171 kubelet[1757]: I0130 13:03:53.945122 1757 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:03:53.946289 kubelet[1757]: I0130 13:03:53.946230 1757 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:03:53.946627 kubelet[1757]: I0130 13:03:53.946603 1757 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:03:53.948685 kubelet[1757]: I0130 13:03:53.948634 1757 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:03:53.949893 kubelet[1757]: I0130 13:03:53.949861 1757 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:03:53.952796 kubelet[1757]: I0130 13:03:53.952772 1757 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:03:53.953885 kubelet[1757]: I0130 13:03:53.953861 1757 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:03:53.953985 kubelet[1757]: E0130 13:03:53.953481 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:53.954137 kubelet[1757]: I0130 13:03:53.954124 1757 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:03:53.954335 kubelet[1757]: I0130 13:03:53.953523 1757 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:03:53.954552 kubelet[1757]: I0130 13:03:53.954527 1757 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:03:53.958712 kubelet[1757]: I0130 13:03:53.956357 1757 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:03:53.959724 kubelet[1757]: E0130 13:03:53.959672 1757 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:03:53.965319 kubelet[1757]: W0130 13:03:53.965207 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:03:53.965501 kubelet[1757]: E0130 13:03:53.965392 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jan 30 13:03:53.965541 kubelet[1757]: W0130 13:03:53.965510 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:03:53.965541 kubelet[1757]: E0130 13:03:53.965530 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 30 13:03:53.965677 kubelet[1757]: E0130 13:03:53.965205 1757 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.103\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jan 30 13:03:53.966079 kubelet[1757]: W0130 13:03:53.966050 1757 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:03:53.966193 kubelet[1757]: E0130 13:03:53.966178 1757 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.103" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 30 13:03:53.967737 kubelet[1757]: E0130 13:03:53.965566 1757 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.103.181f7a1a27d08f22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.103,UID:10.0.0.103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.103,},FirstTimestamp:2025-01-30 13:03:53.944018722 +0000 UTC m=+0.485148046,LastTimestamp:2025-01-30 13:03:53.944018722 +0000 UTC m=+0.485148046,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.103,}" Jan 30 13:03:53.969204 kubelet[1757]: E0130 13:03:53.969000 1757 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.103.181f7a1a28bf2a15 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.103,UID:10.0.0.103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.103,},FirstTimestamp:2025-01-30 13:03:53.959655957 +0000 UTC m=+0.500785321,LastTimestamp:2025-01-30 13:03:53.959655957 +0000 UTC m=+0.500785321,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.103,}" Jan 30 13:03:53.971909 kubelet[1757]: I0130 13:03:53.971614 1757 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:03:53.971909 kubelet[1757]: I0130 13:03:53.971637 1757 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:03:53.971909 kubelet[1757]: I0130 13:03:53.971656 1757 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:03:54.040815 kubelet[1757]: I0130 13:03:54.040777 1757 policy_none.go:49] "None policy: Start" Jan 30 13:03:54.041829 kubelet[1757]: I0130 13:03:54.041791 1757 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:03:54.041829 kubelet[1757]: I0130 13:03:54.041825 1757 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:03:54.049392 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:03:54.055852 kubelet[1757]: I0130 13:03:54.055563 1757 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.103" Jan 30 13:03:54.063111 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:03:54.065115 kubelet[1757]: I0130 13:03:54.064935 1757 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.103" Jan 30 13:03:54.067009 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:03:54.072629 kubelet[1757]: I0130 13:03:54.072462 1757 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:03:54.074298 kubelet[1757]: I0130 13:03:54.073746 1757 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:03:54.074298 kubelet[1757]: I0130 13:03:54.073903 1757 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:03:54.074298 kubelet[1757]: I0130 13:03:54.073924 1757 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:03:54.074298 kubelet[1757]: E0130 13:03:54.073970 1757 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:03:54.077029 kubelet[1757]: E0130 13:03:54.076998 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.079053 kubelet[1757]: I0130 13:03:54.079020 1757 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:03:54.079329 kubelet[1757]: I0130 13:03:54.079281 1757 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:03:54.079408 kubelet[1757]: I0130 13:03:54.079392 1757 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:03:54.081084 kubelet[1757]: E0130 13:03:54.081016 1757 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.103\" not found" Jan 30 13:03:54.178146 kubelet[1757]: E0130 13:03:54.177486 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.278455 kubelet[1757]: E0130 13:03:54.278388 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.379523 kubelet[1757]: E0130 13:03:54.379452 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.480439 kubelet[1757]: E0130 13:03:54.480309 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.580639 kubelet[1757]: E0130 13:03:54.580569 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.681682 kubelet[1757]: E0130 13:03:54.681616 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.782547 kubelet[1757]: E0130 13:03:54.782394 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.882767 kubelet[1757]: E0130 13:03:54.882696 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:54.889410 kubelet[1757]: I0130 13:03:54.889235 1757 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 30 13:03:54.889410 kubelet[1757]: W0130 13:03:54.889402 1757 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 30 13:03:54.941401 kubelet[1757]: E0130 13:03:54.941338 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:54.983746 kubelet[1757]: E0130 13:03:54.983643 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.0.0.103\" not found" Jan 30 13:03:54.987694 sudo[1625]: pam_unix(sudo:session): session closed for user root Jan 30 13:03:54.989861 sshd[1624]: Connection closed by 10.0.0.1 port 54338 Jan 30 13:03:54.990299 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Jan 30 13:03:54.995522 systemd[1]: sshd@6-10.0.0.103:22-10.0.0.1:54338.service: Deactivated successfully. Jan 30 13:03:54.998375 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:03:54.999113 systemd-logind[1432]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:03:55.001304 systemd-logind[1432]: Removed session 7. Jan 30 13:03:55.084518 kubelet[1757]: E0130 13:03:55.084478 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:55.185506 kubelet[1757]: E0130 13:03:55.185463 1757 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.103\" not found" Jan 30 13:03:55.286760 kubelet[1757]: I0130 13:03:55.286713 1757 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 30 13:03:55.287534 containerd[1453]: time="2025-01-30T13:03:55.287111142Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:03:55.287817 kubelet[1757]: I0130 13:03:55.287339 1757 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 30 13:03:55.942163 kubelet[1757]: E0130 13:03:55.942115 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:55.942163 kubelet[1757]: I0130 13:03:55.942122 1757 apiserver.go:52] "Watching apiserver" Jan 30 13:03:55.951173 kubelet[1757]: I0130 13:03:55.951132 1757 topology_manager.go:215] "Topology Admit Handler" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" podNamespace="kube-system" podName="cilium-6l69w" Jan 30 13:03:55.951941 kubelet[1757]: I0130 13:03:55.951915 1757 topology_manager.go:215] "Topology Admit Handler" podUID="3815af0a-4a48-48c1-a9f6-aa296e9221bc" podNamespace="kube-system" podName="kube-proxy-k5r68" Jan 30 13:03:55.954356 kubelet[1757]: I0130 13:03:55.954318 1757 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:03:55.963929 systemd[1]: Created slice kubepods-burstable-pod7932fb71_0256_4097_8707_e8d6a31accf4.slice - libcontainer container kubepods-burstable-pod7932fb71_0256_4097_8707_e8d6a31accf4.slice. 
Jan 30 13:03:55.968142 kubelet[1757]: I0130 13:03:55.968096 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-run\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968142 kubelet[1757]: I0130 13:03:55.968137 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-bpf-maps\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968244 kubelet[1757]: I0130 13:03:55.968157 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-cgroup\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968244 kubelet[1757]: I0130 13:03:55.968199 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-hubble-tls\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968304 kubelet[1757]: I0130 13:03:55.968241 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3815af0a-4a48-48c1-a9f6-aa296e9221bc-xtables-lock\") pod \"kube-proxy-k5r68\" (UID: \"3815af0a-4a48-48c1-a9f6-aa296e9221bc\") " pod="kube-system/kube-proxy-k5r68" Jan 30 13:03:55.968304 kubelet[1757]: I0130 13:03:55.968271 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-etc-cni-netd\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968304 kubelet[1757]: I0130 13:03:55.968292 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3815af0a-4a48-48c1-a9f6-aa296e9221bc-lib-modules\") pod \"kube-proxy-k5r68\" (UID: \"3815af0a-4a48-48c1-a9f6-aa296e9221bc\") " pod="kube-system/kube-proxy-k5r68" Jan 30 13:03:55.968364 kubelet[1757]: I0130 13:03:55.968319 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gmn\" (UniqueName: \"kubernetes.io/projected/3815af0a-4a48-48c1-a9f6-aa296e9221bc-kube-api-access-96gmn\") pod \"kube-proxy-k5r68\" (UID: \"3815af0a-4a48-48c1-a9f6-aa296e9221bc\") " pod="kube-system/kube-proxy-k5r68" Jan 30 13:03:55.968364 kubelet[1757]: I0130 13:03:55.968338 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-hostproc\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968405 kubelet[1757]: I0130 13:03:55.968369 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cni-path\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968405 kubelet[1757]: I0130 13:03:55.968398 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-kernel\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968440 kubelet[1757]: I0130 13:03:55.968415 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-lib-modules\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968462 kubelet[1757]: I0130 13:03:55.968443 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-xtables-lock\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968483 kubelet[1757]: I0130 13:03:55.968462 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7932fb71-0256-4097-8707-e8d6a31accf4-clustermesh-secrets\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968504 kubelet[1757]: I0130 13:03:55.968487 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-config-path\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968523 kubelet[1757]: I0130 13:03:55.968504 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-net\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968542 kubelet[1757]: I0130 13:03:55.968521 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh6cn\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-kube-api-access-kh6cn\") pod \"cilium-6l69w\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " pod="kube-system/cilium-6l69w" Jan 30 13:03:55.968565 kubelet[1757]: I0130 13:03:55.968547 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3815af0a-4a48-48c1-a9f6-aa296e9221bc-kube-proxy\") pod \"kube-proxy-k5r68\" (UID: \"3815af0a-4a48-48c1-a9f6-aa296e9221bc\") " pod="kube-system/kube-proxy-k5r68" Jan 30 13:03:55.977966 systemd[1]: Created slice kubepods-besteffort-pod3815af0a_4a48_48c1_a9f6_aa296e9221bc.slice - libcontainer container kubepods-besteffort-pod3815af0a_4a48_48c1_a9f6_aa296e9221bc.slice. 
Jan 30 13:03:56.277028 kubelet[1757]: E0130 13:03:56.276712 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:56.277875 containerd[1453]: time="2025-01-30T13:03:56.277805642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6l69w,Uid:7932fb71-0256-4097-8707-e8d6a31accf4,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:56.293845 kubelet[1757]: E0130 13:03:56.293458 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:56.294098 containerd[1453]: time="2025-01-30T13:03:56.294047164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5r68,Uid:3815af0a-4a48-48c1-a9f6-aa296e9221bc,Namespace:kube-system,Attempt:0,}" Jan 30 13:03:56.898751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4105546831.mount: Deactivated successfully. Jan 30 13:03:56.912074 containerd[1453]: time="2025-01-30T13:03:56.909963003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:56.919024 containerd[1453]: time="2025-01-30T13:03:56.918679138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jan 30 13:03:56.923203 containerd[1453]: time="2025-01-30T13:03:56.923154801Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:56.924448 containerd[1453]: time="2025-01-30T13:03:56.924415129Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:56.926821 containerd[1453]: time="2025-01-30T13:03:56.926772264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:03:56.928417 containerd[1453]: time="2025-01-30T13:03:56.928372375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:03:56.929402 containerd[1453]: time="2025-01-30T13:03:56.929256743Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 635.120221ms" Jan 30 13:03:56.935700 containerd[1453]: time="2025-01-30T13:03:56.935622938Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 657.649354ms" Jan 30 13:03:56.943049 kubelet[1757]: E0130 13:03:56.942978 1757 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:57.087488 containerd[1453]: time="2025-01-30T13:03:57.086633667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:57.087488 containerd[1453]: time="2025-01-30T13:03:57.086711762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:57.087488 containerd[1453]: time="2025-01-30T13:03:57.086729116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:57.087488 containerd[1453]: time="2025-01-30T13:03:57.086810625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:57.088450 containerd[1453]: time="2025-01-30T13:03:57.088286991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:03:57.088450 containerd[1453]: time="2025-01-30T13:03:57.088349981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:03:57.088450 containerd[1453]: time="2025-01-30T13:03:57.088365809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:57.088605 containerd[1453]: time="2025-01-30T13:03:57.088436512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:03:57.199289 systemd[1]: run-containerd-runc-k8s.io-3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61-runc.d2F9Ja.mount: Deactivated successfully. Jan 30 13:03:57.211321 systemd[1]: Started cri-containerd-75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd.scope - libcontainer container 75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd. Jan 30 13:03:57.214860 systemd[1]: Started cri-containerd-3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61.scope - libcontainer container 3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61. 
Jan 30 13:03:57.235690 containerd[1453]: time="2025-01-30T13:03:57.235643447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6l69w,Uid:7932fb71-0256-4097-8707-e8d6a31accf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\"" Jan 30 13:03:57.236932 kubelet[1757]: E0130 13:03:57.236884 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:57.238365 containerd[1453]: time="2025-01-30T13:03:57.238334698Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jan 30 13:03:57.239037 containerd[1453]: time="2025-01-30T13:03:57.239011437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k5r68,Uid:3815af0a-4a48-48c1-a9f6-aa296e9221bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61\"" Jan 30 13:03:57.239657 kubelet[1757]: E0130 13:03:57.239637 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:03:57.943534 kubelet[1757]: E0130 13:03:57.943468 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:58.944457 kubelet[1757]: E0130 13:03:58.944369 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:03:59.945231 kubelet[1757]: E0130 13:03:59.945196 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:00.647181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273594833.mount: Deactivated successfully. 
Jan 30 13:04:00.945985 kubelet[1757]: E0130 13:04:00.945876 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:01.899683 containerd[1453]: time="2025-01-30T13:04:01.899629301Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:01.900845 containerd[1453]: time="2025-01-30T13:04:01.900467690Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jan 30 13:04:01.901165 containerd[1453]: time="2025-01-30T13:04:01.901142308Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:01.903852 containerd[1453]: time="2025-01-30T13:04:01.903585695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.665209362s" Jan 30 13:04:01.903852 containerd[1453]: time="2025-01-30T13:04:01.903621465Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jan 30 13:04:01.904992 containerd[1453]: time="2025-01-30T13:04:01.904959432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:04:01.906258 containerd[1453]: time="2025-01-30T13:04:01.906138839Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:04:01.935736 containerd[1453]: time="2025-01-30T13:04:01.935688187Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\"" Jan 30 13:04:01.940392 containerd[1453]: time="2025-01-30T13:04:01.938025629Z" level=info msg="StartContainer for \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\"" Jan 30 13:04:01.947178 kubelet[1757]: E0130 13:04:01.947143 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:01.973886 systemd[1]: Started cri-containerd-6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b.scope - libcontainer container 6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b. Jan 30 13:04:02.012485 containerd[1453]: time="2025-01-30T13:04:02.012439848Z" level=info msg="StartContainer for \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\" returns successfully" Jan 30 13:04:02.068044 systemd[1]: cri-containerd-6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b.scope: Deactivated successfully. 
Jan 30 13:04:02.094548 kubelet[1757]: E0130 13:04:02.094508 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:02.209386 containerd[1453]: time="2025-01-30T13:04:02.209261738Z" level=info msg="shim disconnected" id=6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b namespace=k8s.io Jan 30 13:04:02.209865 containerd[1453]: time="2025-01-30T13:04:02.209700785Z" level=warning msg="cleaning up after shim disconnected" id=6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b namespace=k8s.io Jan 30 13:04:02.210413 containerd[1453]: time="2025-01-30T13:04:02.210390143Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:02.923769 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b-rootfs.mount: Deactivated successfully. Jan 30 13:04:02.947951 kubelet[1757]: E0130 13:04:02.947654 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:03.043173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169400248.mount: Deactivated successfully. Jan 30 13:04:03.098317 kubelet[1757]: E0130 13:04:03.097811 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:03.100703 containerd[1453]: time="2025-01-30T13:04:03.100558177Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:04:03.124891 containerd[1453]: time="2025-01-30T13:04:03.124835651Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\"" Jan 30 13:04:03.125462 containerd[1453]: time="2025-01-30T13:04:03.125434686Z" level=info msg="StartContainer for \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\"" Jan 30 13:04:03.162430 systemd[1]: Started cri-containerd-bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5.scope - libcontainer container bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5. Jan 30 13:04:03.188716 containerd[1453]: time="2025-01-30T13:04:03.188603404Z" level=info msg="StartContainer for \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\" returns successfully" Jan 30 13:04:03.206993 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:04:03.207359 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:04:03.207427 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:04:03.215449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:04:03.215639 systemd[1]: cri-containerd-bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5.scope: Deactivated successfully. Jan 30 13:04:03.227528 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:04:03.333360 containerd[1453]: time="2025-01-30T13:04:03.333281831Z" level=info msg="shim disconnected" id=bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5 namespace=k8s.io Jan 30 13:04:03.333360 containerd[1453]: time="2025-01-30T13:04:03.333340344Z" level=warning msg="cleaning up after shim disconnected" id=bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5 namespace=k8s.io Jan 30 13:04:03.333360 containerd[1453]: time="2025-01-30T13:04:03.333350764Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:03.362157 containerd[1453]: time="2025-01-30T13:04:03.362109636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:03.363050 containerd[1453]: time="2025-01-30T13:04:03.363004161Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662714" Jan 30 13:04:03.364200 containerd[1453]: time="2025-01-30T13:04:03.364161111Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:03.366741 containerd[1453]: time="2025-01-30T13:04:03.366475291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:03.367688 containerd[1453]: time="2025-01-30T13:04:03.367600340Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.462600292s" Jan 30 13:04:03.367688 containerd[1453]: time="2025-01-30T13:04:03.367634125Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 13:04:03.369960 containerd[1453]: time="2025-01-30T13:04:03.369845387Z" level=info msg="CreateContainer within sandbox \"3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:04:03.383128 containerd[1453]: time="2025-01-30T13:04:03.383070959Z" level=info msg="CreateContainer within sandbox \"3a4b98774c0b2f52f9379c26ab2a2ca4cf8d96c8d82e47a88911b63d536dfb61\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2629c63fd784b03da8e71a4a75bc6a14cf81e1a3984d4119e11b553449986e82\"" Jan 30 13:04:03.383719 containerd[1453]: time="2025-01-30T13:04:03.383643583Z" level=info msg="StartContainer for \"2629c63fd784b03da8e71a4a75bc6a14cf81e1a3984d4119e11b553449986e82\"" Jan 30 13:04:03.416291 systemd[1]: Started cri-containerd-2629c63fd784b03da8e71a4a75bc6a14cf81e1a3984d4119e11b553449986e82.scope - libcontainer container 2629c63fd784b03da8e71a4a75bc6a14cf81e1a3984d4119e11b553449986e82. 
Jan 30 13:04:03.452837 containerd[1453]: time="2025-01-30T13:04:03.448981121Z" level=info msg="StartContainer for \"2629c63fd784b03da8e71a4a75bc6a14cf81e1a3984d4119e11b553449986e82\" returns successfully" Jan 30 13:04:03.924683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5-rootfs.mount: Deactivated successfully. Jan 30 13:04:03.948760 kubelet[1757]: E0130 13:04:03.948704 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:04.101707 kubelet[1757]: E0130 13:04:04.100967 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:04.103576 containerd[1453]: time="2025-01-30T13:04:04.103536795Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:04:04.105232 kubelet[1757]: E0130 13:04:04.104830 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:04.125897 containerd[1453]: time="2025-01-30T13:04:04.125772666Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\"" Jan 30 13:04:04.128178 containerd[1453]: time="2025-01-30T13:04:04.126545610Z" level=info msg="StartContainer for \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\"" Jan 30 13:04:04.141549 kubelet[1757]: I0130 13:04:04.141451 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5r68" podStartSLOduration=4.013139663 podStartE2EDuration="10.141432844s" podCreationTimestamp="2025-01-30 13:03:54 +0000 UTC" firstStartedPulling="2025-01-30 13:03:57.239974604 +0000 UTC m=+3.781103969" lastFinishedPulling="2025-01-30 13:04:03.368267786 +0000 UTC m=+9.909397150" observedRunningTime="2025-01-30 13:04:04.141217681 +0000 UTC m=+10.682347045" watchObservedRunningTime="2025-01-30 13:04:04.141432844 +0000 UTC m=+10.682562208" Jan 30 13:04:04.154278 systemd[1]: Started cri-containerd-e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd.scope - libcontainer container e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd. Jan 30 13:04:04.186208 containerd[1453]: time="2025-01-30T13:04:04.186111295Z" level=info msg="StartContainer for \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\" returns successfully" Jan 30 13:04:04.198366 systemd[1]: cri-containerd-e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd.scope: Deactivated successfully. 
Jan 30 13:04:04.244632 containerd[1453]: time="2025-01-30T13:04:04.244562140Z" level=info msg="shim disconnected" id=e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd namespace=k8s.io Jan 30 13:04:04.244897 containerd[1453]: time="2025-01-30T13:04:04.244880517Z" level=warning msg="cleaning up after shim disconnected" id=e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd namespace=k8s.io Jan 30 13:04:04.244959 containerd[1453]: time="2025-01-30T13:04:04.244945667Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:04.922814 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd-rootfs.mount: Deactivated successfully. Jan 30 13:04:04.949044 kubelet[1757]: E0130 13:04:04.948999 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:05.110844 kubelet[1757]: E0130 13:04:05.109764 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:05.110844 kubelet[1757]: E0130 13:04:05.110317 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:05.112522 containerd[1453]: time="2025-01-30T13:04:05.112478860Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:04:05.128333 containerd[1453]: time="2025-01-30T13:04:05.128288321Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\"" Jan 30 13:04:05.128940 containerd[1453]: time="2025-01-30T13:04:05.128912483Z" level=info msg="StartContainer for \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\"" Jan 30 13:04:05.155254 systemd[1]: Started cri-containerd-26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89.scope - libcontainer container 26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89. Jan 30 13:04:05.181534 systemd[1]: cri-containerd-26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89.scope: Deactivated successfully. Jan 30 13:04:05.183193 containerd[1453]: time="2025-01-30T13:04:05.183147756Z" level=info msg="StartContainer for \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\" returns successfully" Jan 30 13:04:05.205496 containerd[1453]: time="2025-01-30T13:04:05.204953390Z" level=info msg="shim disconnected" id=26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89 namespace=k8s.io Jan 30 13:04:05.205496 containerd[1453]: time="2025-01-30T13:04:05.205327863Z" level=warning msg="cleaning up after shim disconnected" id=26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89 namespace=k8s.io Jan 30 13:04:05.205496 containerd[1453]: time="2025-01-30T13:04:05.205339360Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:05.922895 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89-rootfs.mount: Deactivated successfully. 
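Note: the repeating pattern above (CreateContainer -> StartContainer -> scope Deactivated -> "shim disconnected" -> rootfs.mount deactivated) is the normal lifecycle of Cilium's short-lived init containers; mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state each run to completion before the long-running cilium-agent container is started below. A heavily trimmed sketch of the pod spec shape that yields this ordering, assuming the stock Cilium DaemonSet layout (the manifest actually applied is not in this log):

  # initContainers/containers excerpt (illustrative; real Cilium DaemonSets carry more init steps and configuration)
  initContainers:
  - name: mount-cgroup
    image: quay.io/cilium/cilium:v1.12.5
  - name: apply-sysctl-overwrites
    image: quay.io/cilium/cilium:v1.12.5
  - name: mount-bpf-fs
    image: quay.io/cilium/cilium:v1.12.5
  - name: clean-cilium-state
    image: quay.io/cilium/cilium:v1.12.5
  containers:
  - name: cilium-agent
    image: quay.io/cilium/cilium:v1.12.5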
Jan 30 13:04:05.949491 kubelet[1757]: E0130 13:04:05.949442 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:06.114181 kubelet[1757]: E0130 13:04:06.114127 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:06.116753 containerd[1453]: time="2025-01-30T13:04:06.116717578Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:04:06.135401 containerd[1453]: time="2025-01-30T13:04:06.135151917Z" level=info msg="CreateContainer within sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\"" Jan 30 13:04:06.136093 containerd[1453]: time="2025-01-30T13:04:06.135907373Z" level=info msg="StartContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\"" Jan 30 13:04:06.164237 systemd[1]: Started cri-containerd-c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a.scope - libcontainer container c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a. Jan 30 13:04:06.195205 containerd[1453]: time="2025-01-30T13:04:06.194756812Z" level=info msg="StartContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" returns successfully" Jan 30 13:04:06.324681 kubelet[1757]: I0130 13:04:06.324495 1757 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:04:06.748146 kernel: Initializing XFRM netlink socket Jan 30 13:04:06.950348 kubelet[1757]: E0130 13:04:06.950293 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:07.118927 kubelet[1757]: E0130 13:04:07.118879 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:07.950794 kubelet[1757]: E0130 13:04:07.950748 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:08.120318 kubelet[1757]: E0130 13:04:08.120279 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:08.390680 systemd-networkd[1378]: cilium_host: Link UP Jan 30 13:04:08.391206 systemd-networkd[1378]: cilium_net: Link UP Jan 30 13:04:08.391537 systemd-networkd[1378]: cilium_net: Gained carrier Jan 30 13:04:08.391664 systemd-networkd[1378]: cilium_host: Gained carrier Jan 30 13:04:08.486988 systemd-networkd[1378]: cilium_vxlan: Link UP Jan 30 13:04:08.487005 systemd-networkd[1378]: cilium_vxlan: Gained carrier Jan 30 13:04:08.538221 systemd-networkd[1378]: cilium_host: Gained IPv6LL Jan 30 13:04:08.825101 kernel: NET: Registered PF_ALG protocol family Jan 30 13:04:08.944280 systemd-networkd[1378]: cilium_net: Gained IPv6LL Jan 30 13:04:08.951038 kubelet[1757]: E0130 13:04:08.950985 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:09.122475 kubelet[1757]: E0130 
13:04:09.122115 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:09.480300 systemd-networkd[1378]: lxc_health: Link UP Jan 30 13:04:09.493497 systemd-networkd[1378]: lxc_health: Gained carrier Jan 30 13:04:09.951179 kubelet[1757]: E0130 13:04:09.951128 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:10.279252 kubelet[1757]: E0130 13:04:10.279029 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:10.324409 kubelet[1757]: I0130 13:04:10.324337 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6l69w" podStartSLOduration=11.657516808 podStartE2EDuration="16.324303089s" podCreationTimestamp="2025-01-30 13:03:54 +0000 UTC" firstStartedPulling="2025-01-30 13:03:57.237757585 +0000 UTC m=+3.778886949" lastFinishedPulling="2025-01-30 13:04:01.904543866 +0000 UTC m=+8.445673230" observedRunningTime="2025-01-30 13:04:07.133626655 +0000 UTC m=+13.674756019" watchObservedRunningTime="2025-01-30 13:04:10.324303089 +0000 UTC m=+16.865432413" Jan 30 13:04:10.416269 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL Jan 30 13:04:10.608386 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jan 30 13:04:10.951385 kubelet[1757]: E0130 13:04:10.951229 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:11.311123 kubelet[1757]: I0130 13:04:11.311009 1757 topology_manager.go:215] "Topology Admit Handler" podUID="39175d1b-1a6a-4636-aac5-a2d458c384a7" podNamespace="default" podName="nginx-deployment-85f456d6dd-4wg67" Jan 30 13:04:11.317782 systemd[1]: Created slice kubepods-besteffort-pod39175d1b_1a6a_4636_aac5_a2d458c384a7.slice - libcontainer container kubepods-besteffort-pod39175d1b_1a6a_4636_aac5_a2d458c384a7.slice. 
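Note: the pod admitted here, nginx-deployment-85f456d6dd-4wg67 in the default namespace, is the kind of pod created by a ReplicaSet owned by a Deployment named nginx-deployment; the image pulled for it further down is ghcr.io/flatcar/nginx:latest. A minimal sketch of such a Deployment follows; the replica count and labels are assumptions, since the manifest actually applied is not recorded in this log.

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nginx-deployment
    namespace: default
  spec:
    replicas: 1                  # assumed; only one pod appears in this log
    selector:
      matchLabels:
        app: nginx               # assumed label
    template:
      metadata:
        labels:
          app: nginx
      spec:
        containers:
        - name: nginx
          image: ghcr.io/flatcar/nginx:latest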
Jan 30 13:04:11.382277 kubelet[1757]: I0130 13:04:11.382199 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x4kx\" (UniqueName: \"kubernetes.io/projected/39175d1b-1a6a-4636-aac5-a2d458c384a7-kube-api-access-8x4kx\") pod \"nginx-deployment-85f456d6dd-4wg67\" (UID: \"39175d1b-1a6a-4636-aac5-a2d458c384a7\") " pod="default/nginx-deployment-85f456d6dd-4wg67" Jan 30 13:04:11.622581 containerd[1453]: time="2025-01-30T13:04:11.622465943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4wg67,Uid:39175d1b-1a6a-4636-aac5-a2d458c384a7,Namespace:default,Attempt:0,}" Jan 30 13:04:11.741652 systemd-networkd[1378]: lxc1106ac439302: Link UP Jan 30 13:04:11.744086 kernel: eth0: renamed from tmp15cd6 Jan 30 13:04:11.750472 systemd-networkd[1378]: lxc1106ac439302: Gained carrier Jan 30 13:04:11.951470 kubelet[1757]: E0130 13:04:11.951342 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:12.952100 kubelet[1757]: E0130 13:04:12.952036 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:13.104227 systemd-networkd[1378]: lxc1106ac439302: Gained IPv6LL Jan 30 13:04:13.941175 kubelet[1757]: E0130 13:04:13.941138 1757 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:13.952446 kubelet[1757]: E0130 13:04:13.952409 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:14.072300 containerd[1453]: time="2025-01-30T13:04:14.072175872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:14.072300 containerd[1453]: time="2025-01-30T13:04:14.072239140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:14.072300 containerd[1453]: time="2025-01-30T13:04:14.072251666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:14.072797 containerd[1453]: time="2025-01-30T13:04:14.072742724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:14.100261 systemd[1]: Started cri-containerd-15cd66a26f6d2236e605d2cf95c799635564bfb721e72e7a8d6181ac81212a35.scope - libcontainer container 15cd66a26f6d2236e605d2cf95c799635564bfb721e72e7a8d6181ac81212a35. 
Jan 30 13:04:14.110702 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:04:14.133096 containerd[1453]: time="2025-01-30T13:04:14.132979090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4wg67,Uid:39175d1b-1a6a-4636-aac5-a2d458c384a7,Namespace:default,Attempt:0,} returns sandbox id \"15cd66a26f6d2236e605d2cf95c799635564bfb721e72e7a8d6181ac81212a35\"" Jan 30 13:04:14.134645 containerd[1453]: time="2025-01-30T13:04:14.134609655Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:04:14.953398 kubelet[1757]: E0130 13:04:14.953352 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:15.819010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708802991.mount: Deactivated successfully. Jan 30 13:04:15.953927 kubelet[1757]: E0130 13:04:15.953889 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:16.608989 containerd[1453]: time="2025-01-30T13:04:16.608927343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:16.609489 containerd[1453]: time="2025-01-30T13:04:16.609436323Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490" Jan 30 13:04:16.610508 containerd[1453]: time="2025-01-30T13:04:16.610469769Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:16.613163 containerd[1453]: time="2025-01-30T13:04:16.613125589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:16.614101 containerd[1453]: time="2025-01-30T13:04:16.614042714Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.479378356s" Jan 30 13:04:16.614151 containerd[1453]: time="2025-01-30T13:04:16.614102735Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 13:04:16.616204 containerd[1453]: time="2025-01-30T13:04:16.616173748Z" level=info msg="CreateContainer within sandbox \"15cd66a26f6d2236e605d2cf95c799635564bfb721e72e7a8d6181ac81212a35\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 30 13:04:16.633344 containerd[1453]: time="2025-01-30T13:04:16.633288366Z" level=info msg="CreateContainer within sandbox \"15cd66a26f6d2236e605d2cf95c799635564bfb721e72e7a8d6181ac81212a35\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"170857f2b6d3ee3868a43d45f47207bbceb448504e97ca2ef5c860cc8990dd97\"" Jan 30 13:04:16.634146 containerd[1453]: time="2025-01-30T13:04:16.634037151Z" level=info msg="StartContainer for \"170857f2b6d3ee3868a43d45f47207bbceb448504e97ca2ef5c860cc8990dd97\"" Jan 30 13:04:16.666280 systemd[1]: Started 
cri-containerd-170857f2b6d3ee3868a43d45f47207bbceb448504e97ca2ef5c860cc8990dd97.scope - libcontainer container 170857f2b6d3ee3868a43d45f47207bbceb448504e97ca2ef5c860cc8990dd97. Jan 30 13:04:16.699085 containerd[1453]: time="2025-01-30T13:04:16.694172755Z" level=info msg="StartContainer for \"170857f2b6d3ee3868a43d45f47207bbceb448504e97ca2ef5c860cc8990dd97\" returns successfully" Jan 30 13:04:16.954905 kubelet[1757]: E0130 13:04:16.954737 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:17.955853 kubelet[1757]: E0130 13:04:17.955804 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:18.956550 kubelet[1757]: E0130 13:04:18.956491 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:19.957248 kubelet[1757]: E0130 13:04:19.957201 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:20.957749 kubelet[1757]: E0130 13:04:20.957696 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:21.958888 kubelet[1757]: E0130 13:04:21.958827 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:22.959586 kubelet[1757]: E0130 13:04:22.959512 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:23.699162 kubelet[1757]: I0130 13:04:23.699101 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-4wg67" podStartSLOduration=10.218207596 podStartE2EDuration="12.69908345s" podCreationTimestamp="2025-01-30 13:04:11 +0000 UTC" firstStartedPulling="2025-01-30 13:04:14.13412784 +0000 UTC m=+20.675257204" lastFinishedPulling="2025-01-30 13:04:16.615003734 +0000 UTC m=+23.156133058" observedRunningTime="2025-01-30 13:04:17.14578829 +0000 UTC m=+23.686917654" watchObservedRunningTime="2025-01-30 13:04:23.69908345 +0000 UTC m=+30.240212814" Jan 30 13:04:23.699348 kubelet[1757]: I0130 13:04:23.699233 1757 topology_manager.go:215] "Topology Admit Handler" podUID="d9da47c9-de06-47ce-b81d-11ac30d06c7c" podNamespace="default" podName="nfs-server-provisioner-0" Jan 30 13:04:23.706106 systemd[1]: Created slice kubepods-besteffort-podd9da47c9_de06_47ce_b81d_11ac30d06c7c.slice - libcontainer container kubepods-besteffort-podd9da47c9_de06_47ce_b81d_11ac30d06c7c.slice. 
Jan 30 13:04:23.749801 kubelet[1757]: I0130 13:04:23.749739 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d9da47c9-de06-47ce-b81d-11ac30d06c7c-data\") pod \"nfs-server-provisioner-0\" (UID: \"d9da47c9-de06-47ce-b81d-11ac30d06c7c\") " pod="default/nfs-server-provisioner-0" Jan 30 13:04:23.749801 kubelet[1757]: I0130 13:04:23.749796 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnl2l\" (UniqueName: \"kubernetes.io/projected/d9da47c9-de06-47ce-b81d-11ac30d06c7c-kube-api-access-gnl2l\") pod \"nfs-server-provisioner-0\" (UID: \"d9da47c9-de06-47ce-b81d-11ac30d06c7c\") " pod="default/nfs-server-provisioner-0" Jan 30 13:04:23.960093 kubelet[1757]: E0130 13:04:23.959950 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:24.010133 containerd[1453]: time="2025-01-30T13:04:24.010088346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d9da47c9-de06-47ce-b81d-11ac30d06c7c,Namespace:default,Attempt:0,}" Jan 30 13:04:24.059143 kernel: eth0: renamed from tmpf1830 Jan 30 13:04:24.069594 systemd-networkd[1378]: lxcf3e7319a1b1c: Link UP Jan 30 13:04:24.070336 systemd-networkd[1378]: lxcf3e7319a1b1c: Gained carrier Jan 30 13:04:24.219791 containerd[1453]: time="2025-01-30T13:04:24.219423125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:24.219791 containerd[1453]: time="2025-01-30T13:04:24.219491060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:24.219791 containerd[1453]: time="2025-01-30T13:04:24.219507384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:24.220094 containerd[1453]: time="2025-01-30T13:04:24.219597844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:24.245294 systemd[1]: Started cri-containerd-f18306fac6b33b8445a23a46f8dc3c5995b95179c03d9711874a6df892d30e65.scope - libcontainer container f18306fac6b33b8445a23a46f8dc3c5995b95179c03d9711874a6df892d30e65. Jan 30 13:04:24.257303 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:04:24.285350 containerd[1453]: time="2025-01-30T13:04:24.284870307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d9da47c9-de06-47ce-b81d-11ac30d06c7c,Namespace:default,Attempt:0,} returns sandbox id \"f18306fac6b33b8445a23a46f8dc3c5995b95179c03d9711874a6df892d30e65\"" Jan 30 13:04:24.286828 containerd[1453]: time="2025-01-30T13:04:24.286785262Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 30 13:04:24.961116 kubelet[1757]: E0130 13:04:24.961067 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:25.713294 systemd-networkd[1378]: lxcf3e7319a1b1c: Gained IPv6LL Jan 30 13:04:25.930849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115154343.mount: Deactivated successfully. 
Jan 30 13:04:25.962252 kubelet[1757]: E0130 13:04:25.962202 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:26.061711 kubelet[1757]: I0130 13:04:26.061523 1757 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:04:26.062960 kubelet[1757]: E0130 13:04:26.062935 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:26.152524 kubelet[1757]: E0130 13:04:26.152483 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:26.963065 kubelet[1757]: E0130 13:04:26.963012 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:27.468786 containerd[1453]: time="2025-01-30T13:04:27.468722981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:27.469263 containerd[1453]: time="2025-01-30T13:04:27.469175068Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" Jan 30 13:04:27.470270 containerd[1453]: time="2025-01-30T13:04:27.470237355Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:27.473156 containerd[1453]: time="2025-01-30T13:04:27.473110152Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:27.475034 containerd[1453]: time="2025-01-30T13:04:27.474863772Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.188028939s" Jan 30 13:04:27.475034 containerd[1453]: time="2025-01-30T13:04:27.474903380Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" Jan 30 13:04:27.477556 containerd[1453]: time="2025-01-30T13:04:27.477516807Z" level=info msg="CreateContainer within sandbox \"f18306fac6b33b8445a23a46f8dc3c5995b95179c03d9711874a6df892d30e65\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 30 13:04:27.492288 containerd[1453]: time="2025-01-30T13:04:27.491748049Z" level=info msg="CreateContainer within sandbox \"f18306fac6b33b8445a23a46f8dc3c5995b95179c03d9711874a6df892d30e65\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922\"" Jan 30 13:04:27.492646 containerd[1453]: time="2025-01-30T13:04:27.492434742Z" level=info msg="StartContainer for \"53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922\"" Jan 30 13:04:27.578086 systemd[1]: 
run-containerd-runc-k8s.io-53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922-runc.QKK7Hw.mount: Deactivated successfully. Jan 30 13:04:27.591294 systemd[1]: Started cri-containerd-53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922.scope - libcontainer container 53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922. Jan 30 13:04:27.616258 containerd[1453]: time="2025-01-30T13:04:27.616126744Z" level=info msg="StartContainer for \"53c8398221cd672237922935d4dbeffc643aaf0701634633af036c6302880922\" returns successfully" Jan 30 13:04:27.963357 kubelet[1757]: E0130 13:04:27.963307 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:28.169425 kubelet[1757]: I0130 13:04:28.165140 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.9755226860000001 podStartE2EDuration="5.165121825s" podCreationTimestamp="2025-01-30 13:04:23 +0000 UTC" firstStartedPulling="2025-01-30 13:04:24.286376129 +0000 UTC m=+30.827505453" lastFinishedPulling="2025-01-30 13:04:27.475975228 +0000 UTC m=+34.017104592" observedRunningTime="2025-01-30 13:04:28.164894223 +0000 UTC m=+34.706023587" watchObservedRunningTime="2025-01-30 13:04:28.165121825 +0000 UTC m=+34.706251149" Jan 30 13:04:28.963694 kubelet[1757]: E0130 13:04:28.963640 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:29.042899 update_engine[1436]: I20250130 13:04:29.042811 1436 update_attempter.cc:509] Updating boot flags... Jan 30 13:04:29.072157 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3144) Jan 30 13:04:29.104129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3143) Jan 30 13:04:29.124084 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3143) Jan 30 13:04:29.964720 kubelet[1757]: E0130 13:04:29.964670 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:30.965564 kubelet[1757]: E0130 13:04:30.965518 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:31.965973 kubelet[1757]: E0130 13:04:31.965906 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:32.966480 kubelet[1757]: E0130 13:04:32.966436 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:33.939975 kubelet[1757]: E0130 13:04:33.939909 1757 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:33.967273 kubelet[1757]: E0130 13:04:33.967221 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:34.967860 kubelet[1757]: E0130 13:04:34.967794 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:35.968269 kubelet[1757]: E0130 13:04:35.968230 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:36.969029 kubelet[1757]: E0130 13:04:36.968979 1757 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:37.452029 kubelet[1757]: I0130 13:04:37.451948 1757 topology_manager.go:215] "Topology Admit Handler" podUID="300a9e32-cc76-4fef-8821-5565f9d82fb7" podNamespace="default" podName="test-pod-1" Jan 30 13:04:37.457373 systemd[1]: Created slice kubepods-besteffort-pod300a9e32_cc76_4fef_8821_5565f9d82fb7.slice - libcontainer container kubepods-besteffort-pod300a9e32_cc76_4fef_8821_5565f9d82fb7.slice. Jan 30 13:04:37.533880 kubelet[1757]: I0130 13:04:37.533813 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tpnx\" (UniqueName: \"kubernetes.io/projected/300a9e32-cc76-4fef-8821-5565f9d82fb7-kube-api-access-7tpnx\") pod \"test-pod-1\" (UID: \"300a9e32-cc76-4fef-8821-5565f9d82fb7\") " pod="default/test-pod-1" Jan 30 13:04:37.533880 kubelet[1757]: I0130 13:04:37.533854 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-114d26df-1b8c-4383-af40-155073a10a78\" (UniqueName: \"kubernetes.io/nfs/300a9e32-cc76-4fef-8821-5565f9d82fb7-pvc-114d26df-1b8c-4383-af40-155073a10a78\") pod \"test-pod-1\" (UID: \"300a9e32-cc76-4fef-8821-5565f9d82fb7\") " pod="default/test-pod-1" Jan 30 13:04:37.669090 kernel: FS-Cache: Loaded Jan 30 13:04:37.695220 kernel: RPC: Registered named UNIX socket transport module. Jan 30 13:04:37.695327 kernel: RPC: Registered udp transport module. Jan 30 13:04:37.695340 kernel: RPC: Registered tcp transport module. Jan 30 13:04:37.696120 kernel: RPC: Registered tcp-with-tls transport module. Jan 30 13:04:37.696162 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 30 13:04:37.856234 kernel: NFS: Registering the id_resolver key type Jan 30 13:04:37.856367 kernel: Key type id_resolver registered Jan 30 13:04:37.856410 kernel: Key type id_legacy registered Jan 30 13:04:37.895943 nfsidmap[3174]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:04:37.906772 nfsidmap[3177]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Jan 30 13:04:37.969171 kubelet[1757]: E0130 13:04:37.969121 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:38.061240 containerd[1453]: time="2025-01-30T13:04:38.061184547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:300a9e32-cc76-4fef-8821-5565f9d82fb7,Namespace:default,Attempt:0,}" Jan 30 13:04:38.104639 systemd-networkd[1378]: lxcf075f40fe979: Link UP Jan 30 13:04:38.119111 kernel: eth0: renamed from tmp8e0e2 Jan 30 13:04:38.135034 systemd-networkd[1378]: lxcf075f40fe979: Gained carrier Jan 30 13:04:38.313887 containerd[1453]: time="2025-01-30T13:04:38.313603359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:38.313887 containerd[1453]: time="2025-01-30T13:04:38.313693169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:38.313887 containerd[1453]: time="2025-01-30T13:04:38.313708651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:38.314377 containerd[1453]: time="2025-01-30T13:04:38.314335884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:38.330272 systemd[1]: Started cri-containerd-8e0e25b5aaf2e1e18fa6ce101550bb43c674b16c56c7092459f95b0183aed6bd.scope - libcontainer container 8e0e25b5aaf2e1e18fa6ce101550bb43c674b16c56c7092459f95b0183aed6bd. Jan 30 13:04:38.345093 systemd-resolved[1317]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:04:38.364796 containerd[1453]: time="2025-01-30T13:04:38.364692495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:300a9e32-cc76-4fef-8821-5565f9d82fb7,Namespace:default,Attempt:0,} returns sandbox id \"8e0e25b5aaf2e1e18fa6ce101550bb43c674b16c56c7092459f95b0183aed6bd\"" Jan 30 13:04:38.367376 containerd[1453]: time="2025-01-30T13:04:38.367307677Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 30 13:04:38.606366 containerd[1453]: time="2025-01-30T13:04:38.606320503Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:38.607032 containerd[1453]: time="2025-01-30T13:04:38.606969017Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 30 13:04:38.609882 containerd[1453]: time="2025-01-30T13:04:38.609770301Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 242.423459ms" Jan 30 13:04:38.609882 containerd[1453]: time="2025-01-30T13:04:38.609802584Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\"" Jan 30 13:04:38.611893 containerd[1453]: time="2025-01-30T13:04:38.611788614Z" level=info msg="CreateContainer within sandbox \"8e0e25b5aaf2e1e18fa6ce101550bb43c674b16c56c7092459f95b0183aed6bd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 30 13:04:38.623566 containerd[1453]: time="2025-01-30T13:04:38.623188689Z" level=info msg="CreateContainer within sandbox \"8e0e25b5aaf2e1e18fa6ce101550bb43c674b16c56c7092459f95b0183aed6bd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6ae5d65945836b719786a021eb413fb5b8e84297c1cbd6745165c11e807c3a4d\"" Jan 30 13:04:38.624803 containerd[1453]: time="2025-01-30T13:04:38.623803880Z" level=info msg="StartContainer for \"6ae5d65945836b719786a021eb413fb5b8e84297c1cbd6745165c11e807c3a4d\"" Jan 30 13:04:38.648222 systemd[1]: Started cri-containerd-6ae5d65945836b719786a021eb413fb5b8e84297c1cbd6745165c11e807c3a4d.scope - libcontainer container 6ae5d65945836b719786a021eb413fb5b8e84297c1cbd6745165c11e807c3a4d. 
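Note: test-pod-1 mounts an NFS-backed volume (pvc-114d26df-1b8c-4383-af40-155073a10a78) served by nfs-server-provisioner-0, which is why the NFS client modules and id_resolver key types were registered just before the sandbox came up. A sketch of the claim and pod that would produce these events; the storage class, size and mount path are assumptions, and only the bound volume ID, the pod name, the container name "test" and the nginx image are taken from the log.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-claim             # assumed name; only the bound volume ID appears in the log
  spec:
    storageClassName: nfs        # assumed; whichever class nfs-server-provisioner registered
    accessModes: ["ReadWriteMany"]
    resources:
      requests:
        storage: 1Gi             # assumed size
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod-1
    namespace: default
  spec:
    containers:
    - name: test
      image: ghcr.io/flatcar/nginx:latest
      volumeMounts:
      - name: data
        mountPath: /usr/share/nginx/html   # assumed mount path
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-claim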
Jan 30 13:04:38.674164 containerd[1453]: time="2025-01-30T13:04:38.674126728Z" level=info msg="StartContainer for \"6ae5d65945836b719786a021eb413fb5b8e84297c1cbd6745165c11e807c3a4d\" returns successfully" Jan 30 13:04:38.969634 kubelet[1757]: E0130 13:04:38.969485 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:39.188914 kubelet[1757]: I0130 13:04:39.188658 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.945107341 podStartE2EDuration="16.188644128s" podCreationTimestamp="2025-01-30 13:04:23 +0000 UTC" firstStartedPulling="2025-01-30 13:04:38.366888869 +0000 UTC m=+44.908018233" lastFinishedPulling="2025-01-30 13:04:38.610425656 +0000 UTC m=+45.151555020" observedRunningTime="2025-01-30 13:04:39.188012778 +0000 UTC m=+45.729142142" watchObservedRunningTime="2025-01-30 13:04:39.188644128 +0000 UTC m=+45.729773492" Jan 30 13:04:39.969894 kubelet[1757]: E0130 13:04:39.969855 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:40.112249 systemd-networkd[1378]: lxcf075f40fe979: Gained IPv6LL Jan 30 13:04:40.970428 kubelet[1757]: E0130 13:04:40.970379 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:41.276853 containerd[1453]: time="2025-01-30T13:04:41.276755361Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:04:41.282234 containerd[1453]: time="2025-01-30T13:04:41.282197637Z" level=info msg="StopContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" with timeout 2 (s)" Jan 30 13:04:41.284037 containerd[1453]: time="2025-01-30T13:04:41.283992100Z" level=info msg="Stop container \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" with signal terminated" Jan 30 13:04:41.291450 systemd-networkd[1378]: lxc_health: Link DOWN Jan 30 13:04:41.291456 systemd-networkd[1378]: lxc_health: Lost carrier Jan 30 13:04:41.312008 systemd[1]: cri-containerd-c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a.scope: Deactivated successfully. Jan 30 13:04:41.312306 systemd[1]: cri-containerd-c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a.scope: Consumed 6.751s CPU time. Jan 30 13:04:41.328264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a-rootfs.mount: Deactivated successfully. 
Jan 30 13:04:41.356887 containerd[1453]: time="2025-01-30T13:04:41.356821930Z" level=info msg="shim disconnected" id=c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a namespace=k8s.io Jan 30 13:04:41.357251 containerd[1453]: time="2025-01-30T13:04:41.357087917Z" level=warning msg="cleaning up after shim disconnected" id=c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a namespace=k8s.io Jan 30 13:04:41.357251 containerd[1453]: time="2025-01-30T13:04:41.357105959Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.370462 containerd[1453]: time="2025-01-30T13:04:41.370401355Z" level=info msg="StopContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" returns successfully" Jan 30 13:04:41.371106 containerd[1453]: time="2025-01-30T13:04:41.371080905Z" level=info msg="StopPodSandbox for \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\"" Jan 30 13:04:41.371171 containerd[1453]: time="2025-01-30T13:04:41.371121629Z" level=info msg="Container to stop \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.371171 containerd[1453]: time="2025-01-30T13:04:41.371134350Z" level=info msg="Container to stop \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.371171 containerd[1453]: time="2025-01-30T13:04:41.371143111Z" level=info msg="Container to stop \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.371171 containerd[1453]: time="2025-01-30T13:04:41.371151712Z" level=info msg="Container to stop \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.371171 containerd[1453]: time="2025-01-30T13:04:41.371159553Z" level=info msg="Container to stop \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:04:41.372587 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd-shm.mount: Deactivated successfully. Jan 30 13:04:41.377809 systemd[1]: cri-containerd-75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd.scope: Deactivated successfully. Jan 30 13:04:41.392407 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd-rootfs.mount: Deactivated successfully. 
Jan 30 13:04:41.397669 containerd[1453]: time="2025-01-30T13:04:41.396378166Z" level=info msg="shim disconnected" id=75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd namespace=k8s.io Jan 30 13:04:41.397669 containerd[1453]: time="2025-01-30T13:04:41.396430531Z" level=warning msg="cleaning up after shim disconnected" id=75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd namespace=k8s.io Jan 30 13:04:41.397669 containerd[1453]: time="2025-01-30T13:04:41.396440732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:41.408137 containerd[1453]: time="2025-01-30T13:04:41.408101442Z" level=info msg="TearDown network for sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" successfully" Jan 30 13:04:41.408271 containerd[1453]: time="2025-01-30T13:04:41.408256378Z" level=info msg="StopPodSandbox for \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" returns successfully" Jan 30 13:04:41.556280 kubelet[1757]: I0130 13:04:41.556218 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-etc-cni-netd\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556280 kubelet[1757]: I0130 13:04:41.556229 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.556280 kubelet[1757]: I0130 13:04:41.556277 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7932fb71-0256-4097-8707-e8d6a31accf4-clustermesh-secrets\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556472 kubelet[1757]: I0130 13:04:41.556301 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-config-path\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556472 kubelet[1757]: I0130 13:04:41.556320 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh6cn\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-kube-api-access-kh6cn\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556472 kubelet[1757]: I0130 13:04:41.556338 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-hubble-tls\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556472 kubelet[1757]: I0130 13:04:41.556354 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-kernel\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 
13:04:41.556472 kubelet[1757]: I0130 13:04:41.556369 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-run\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556472 kubelet[1757]: I0130 13:04:41.556389 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-hostproc\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556404 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-lib-modules\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556419 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-net\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556435 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-cgroup\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556450 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cni-path\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556464 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-xtables-lock\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556588 kubelet[1757]: I0130 13:04:41.556509 1757 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-bpf-maps\") pod \"7932fb71-0256-4097-8707-e8d6a31accf4\" (UID: \"7932fb71-0256-4097-8707-e8d6a31accf4\") " Jan 30 13:04:41.556717 kubelet[1757]: I0130 13:04:41.556539 1757 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-etc-cni-netd\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.556717 kubelet[1757]: I0130 13:04:41.556566 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.556717 kubelet[1757]: I0130 13:04:41.556641 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-hostproc" (OuterVolumeSpecName: "hostproc") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557521 kubelet[1757]: I0130 13:04:41.556805 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557701 kubelet[1757]: I0130 13:04:41.557679 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557782 kubelet[1757]: I0130 13:04:41.557769 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557854 kubelet[1757]: I0130 13:04:41.557842 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557930 kubelet[1757]: I0130 13:04:41.557899 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.557965 kubelet[1757]: I0130 13:04:41.557908 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cni-path" (OuterVolumeSpecName: "cni-path") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.558009 kubelet[1757]: I0130 13:04:41.557994 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:04:41.558520 kubelet[1757]: I0130 13:04:41.558480 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:04:41.561069 kubelet[1757]: I0130 13:04:41.561022 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-kube-api-access-kh6cn" (OuterVolumeSpecName: "kube-api-access-kh6cn") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "kube-api-access-kh6cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:41.562072 kubelet[1757]: I0130 13:04:41.562015 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:04:41.562782 systemd[1]: var-lib-kubelet-pods-7932fb71\x2d0256\x2d4097\x2d8707\x2de8d6a31accf4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkh6cn.mount: Deactivated successfully. Jan 30 13:04:41.562878 systemd[1]: var-lib-kubelet-pods-7932fb71\x2d0256\x2d4097\x2d8707\x2de8d6a31accf4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jan 30 13:04:41.563524 kubelet[1757]: I0130 13:04:41.563002 1757 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7932fb71-0256-4097-8707-e8d6a31accf4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7932fb71-0256-4097-8707-e8d6a31accf4" (UID: "7932fb71-0256-4097-8707-e8d6a31accf4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:04:41.657358 kubelet[1757]: I0130 13:04:41.657319 1757 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-config-path\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657358 kubelet[1757]: I0130 13:04:41.657352 1757 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kh6cn\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-kube-api-access-kh6cn\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657358 kubelet[1757]: I0130 13:04:41.657364 1757 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7932fb71-0256-4097-8707-e8d6a31accf4-hubble-tls\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657372 1757 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7932fb71-0256-4097-8707-e8d6a31accf4-clustermesh-secrets\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657381 1757 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-run\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657389 1757 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-kernel\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657397 1757 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-host-proc-sys-net\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657404 1757 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cilium-cgroup\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657412 1757 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-hostproc\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657419 1757 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-lib-modules\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657571 kubelet[1757]: I0130 13:04:41.657426 1757 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-bpf-maps\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657760 kubelet[1757]: I0130 13:04:41.657434 1757 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-cni-path\") on node \"10.0.0.103\" DevicePath \"\"" Jan 30 13:04:41.657760 kubelet[1757]: I0130 13:04:41.657443 1757 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7932fb71-0256-4097-8707-e8d6a31accf4-xtables-lock\") on node \"10.0.0.103\" 
DevicePath \"\"" Jan 30 13:04:41.970662 kubelet[1757]: E0130 13:04:41.970540 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:42.080936 systemd[1]: Removed slice kubepods-burstable-pod7932fb71_0256_4097_8707_e8d6a31accf4.slice - libcontainer container kubepods-burstable-pod7932fb71_0256_4097_8707_e8d6a31accf4.slice. Jan 30 13:04:42.081025 systemd[1]: kubepods-burstable-pod7932fb71_0256_4097_8707_e8d6a31accf4.slice: Consumed 6.910s CPU time. Jan 30 13:04:42.187585 kubelet[1757]: I0130 13:04:42.187559 1757 scope.go:117] "RemoveContainer" containerID="c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a" Jan 30 13:04:42.189696 containerd[1453]: time="2025-01-30T13:04:42.189357144Z" level=info msg="RemoveContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\"" Jan 30 13:04:42.192681 containerd[1453]: time="2025-01-30T13:04:42.192631665Z" level=info msg="RemoveContainer for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" returns successfully" Jan 30 13:04:42.192935 kubelet[1757]: I0130 13:04:42.192906 1757 scope.go:117] "RemoveContainer" containerID="26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89" Jan 30 13:04:42.194059 containerd[1453]: time="2025-01-30T13:04:42.194034683Z" level=info msg="RemoveContainer for \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\"" Jan 30 13:04:42.196618 containerd[1453]: time="2025-01-30T13:04:42.196581373Z" level=info msg="RemoveContainer for \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\" returns successfully" Jan 30 13:04:42.196793 kubelet[1757]: I0130 13:04:42.196771 1757 scope.go:117] "RemoveContainer" containerID="e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd" Jan 30 13:04:42.197921 containerd[1453]: time="2025-01-30T13:04:42.197898542Z" level=info msg="RemoveContainer for \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\"" Jan 30 13:04:42.200497 containerd[1453]: time="2025-01-30T13:04:42.200458193Z" level=info msg="RemoveContainer for \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\" returns successfully" Jan 30 13:04:42.200706 kubelet[1757]: I0130 13:04:42.200679 1757 scope.go:117] "RemoveContainer" containerID="bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5" Jan 30 13:04:42.202118 containerd[1453]: time="2025-01-30T13:04:42.202084152Z" level=info msg="RemoveContainer for \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\"" Jan 30 13:04:42.204724 containerd[1453]: time="2025-01-30T13:04:42.204682527Z" level=info msg="RemoveContainer for \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\" returns successfully" Jan 30 13:04:42.204912 kubelet[1757]: I0130 13:04:42.204871 1757 scope.go:117] "RemoveContainer" containerID="6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b" Jan 30 13:04:42.206421 containerd[1453]: time="2025-01-30T13:04:42.206348851Z" level=info msg="RemoveContainer for \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\"" Jan 30 13:04:42.208991 containerd[1453]: time="2025-01-30T13:04:42.208948586Z" level=info msg="RemoveContainer for \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\" returns successfully" Jan 30 13:04:42.209227 kubelet[1757]: I0130 13:04:42.209189 1757 scope.go:117] "RemoveContainer" containerID="c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a" Jan 30 
13:04:42.209466 containerd[1453]: time="2025-01-30T13:04:42.209426313Z" level=error msg="ContainerStatus for \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\": not found" Jan 30 13:04:42.209600 kubelet[1757]: E0130 13:04:42.209559 1757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\": not found" containerID="c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a" Jan 30 13:04:42.209732 kubelet[1757]: I0130 13:04:42.209589 1757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a"} err="failed to get container status \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\": rpc error: code = NotFound desc = an error occurred when try to find container \"c022bed44f0150fa27747e579e05a4cb432aea3ee9dd31efbb88e44d2f93d58a\": not found" Jan 30 13:04:42.209732 kubelet[1757]: I0130 13:04:42.209727 1757 scope.go:117] "RemoveContainer" containerID="26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89" Jan 30 13:04:42.209959 containerd[1453]: time="2025-01-30T13:04:42.209900559Z" level=error msg="ContainerStatus for \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\": not found" Jan 30 13:04:42.210062 kubelet[1757]: E0130 13:04:42.210029 1757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\": not found" containerID="26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89" Jan 30 13:04:42.210101 kubelet[1757]: I0130 13:04:42.210065 1757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89"} err="failed to get container status \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\": rpc error: code = NotFound desc = an error occurred when try to find container \"26c0476929c6494110a3f10dedb9ea6fbc6779779d011d0134abf7fbecc42c89\": not found" Jan 30 13:04:42.210101 kubelet[1757]: I0130 13:04:42.210083 1757 scope.go:117] "RemoveContainer" containerID="e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd" Jan 30 13:04:42.210294 containerd[1453]: time="2025-01-30T13:04:42.210262315Z" level=error msg="ContainerStatus for \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\": not found" Jan 30 13:04:42.210439 kubelet[1757]: E0130 13:04:42.210417 1757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\": not found" 
containerID="e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd" Jan 30 13:04:42.210478 kubelet[1757]: I0130 13:04:42.210444 1757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd"} err="failed to get container status \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\": rpc error: code = NotFound desc = an error occurred when try to find container \"e1831472880ddffa0a25beea49140603a12d94ef7f105f44ccec6f44363451bd\": not found" Jan 30 13:04:42.210478 kubelet[1757]: I0130 13:04:42.210459 1757 scope.go:117] "RemoveContainer" containerID="bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5" Jan 30 13:04:42.210759 containerd[1453]: time="2025-01-30T13:04:42.210675515Z" level=error msg="ContainerStatus for \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\": not found" Jan 30 13:04:42.210806 kubelet[1757]: E0130 13:04:42.210790 1757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\": not found" containerID="bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5" Jan 30 13:04:42.210855 kubelet[1757]: I0130 13:04:42.210815 1757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5"} err="failed to get container status \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb4d7118507d8ffd8fdb3eb33e296cff7521275ed07722f43945ef26c0f20ef5\": not found" Jan 30 13:04:42.210855 kubelet[1757]: I0130 13:04:42.210829 1757 scope.go:117] "RemoveContainer" containerID="6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b" Jan 30 13:04:42.211130 containerd[1453]: time="2025-01-30T13:04:42.211092076Z" level=error msg="ContainerStatus for \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\": not found" Jan 30 13:04:42.211264 kubelet[1757]: E0130 13:04:42.211238 1757 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\": not found" containerID="6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b" Jan 30 13:04:42.211303 kubelet[1757]: I0130 13:04:42.211268 1757 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b"} err="failed to get container status \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6973b5623ee6fbd3fb367c0e92992a1f09bffe5bc17dd6f2dccd47fb8314dd0b\": not found" Jan 30 13:04:42.256695 systemd[1]: 
var-lib-kubelet-pods-7932fb71\x2d0256\x2d4097\x2d8707\x2de8d6a31accf4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jan 30 13:04:42.971523 kubelet[1757]: E0130 13:04:42.971469 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:43.971607 kubelet[1757]: E0130 13:04:43.971557 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:44.077978 kubelet[1757]: I0130 13:04:44.077516 1757 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" path="/var/lib/kubelet/pods/7932fb71-0256-4097-8707-e8d6a31accf4/volumes" Jan 30 13:04:44.097165 kubelet[1757]: E0130 13:04:44.097116 1757 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 30 13:04:44.515948 kubelet[1757]: I0130 13:04:44.512099 1757 topology_manager.go:215] "Topology Admit Handler" podUID="7e444de7-1440-422d-9cc0-6d89bfed3a6e" podNamespace="kube-system" podName="cilium-operator-599987898-59dkj" Jan 30 13:04:44.515948 kubelet[1757]: E0130 13:04:44.512155 1757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="mount-bpf-fs" Jan 30 13:04:44.515948 kubelet[1757]: E0130 13:04:44.512164 1757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="mount-cgroup" Jan 30 13:04:44.515948 kubelet[1757]: E0130 13:04:44.512170 1757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="apply-sysctl-overwrites" Jan 30 13:04:44.515948 kubelet[1757]: E0130 13:04:44.512177 1757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="clean-cilium-state" Jan 30 13:04:44.515948 kubelet[1757]: E0130 13:04:44.512184 1757 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="cilium-agent" Jan 30 13:04:44.515948 kubelet[1757]: I0130 13:04:44.512203 1757 memory_manager.go:354] "RemoveStaleState removing state" podUID="7932fb71-0256-4097-8707-e8d6a31accf4" containerName="cilium-agent" Jan 30 13:04:44.531484 systemd[1]: Created slice kubepods-besteffort-pod7e444de7_1440_422d_9cc0_6d89bfed3a6e.slice - libcontainer container kubepods-besteffort-pod7e444de7_1440_422d_9cc0_6d89bfed3a6e.slice. Jan 30 13:04:44.535942 kubelet[1757]: I0130 13:04:44.533172 1757 topology_manager.go:215] "Topology Admit Handler" podUID="f53d459b-d0e8-4c70-9e2b-5723582baa3f" podNamespace="kube-system" podName="cilium-zr5sd" Jan 30 13:04:44.544273 systemd[1]: Created slice kubepods-burstable-podf53d459b_d0e8_4c70_9e2b_5723582baa3f.slice - libcontainer container kubepods-burstable-podf53d459b_d0e8_4c70_9e2b_5723582baa3f.slice. 
Jan 30 13:04:44.575864 kubelet[1757]: I0130 13:04:44.575792 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-cilium-run\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.575864 kubelet[1757]: I0130 13:04:44.575837 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-bpf-maps\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.575864 kubelet[1757]: I0130 13:04:44.575857 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-etc-cni-netd\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.575864 kubelet[1757]: I0130 13:04:44.575875 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f53d459b-d0e8-4c70-9e2b-5723582baa3f-clustermesh-secrets\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576227 kubelet[1757]: I0130 13:04:44.575899 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxv8x\" (UniqueName: \"kubernetes.io/projected/7e444de7-1440-422d-9cc0-6d89bfed3a6e-kube-api-access-mxv8x\") pod \"cilium-operator-599987898-59dkj\" (UID: \"7e444de7-1440-422d-9cc0-6d89bfed3a6e\") " pod="kube-system/cilium-operator-599987898-59dkj" Jan 30 13:04:44.576227 kubelet[1757]: I0130 13:04:44.575918 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f53d459b-d0e8-4c70-9e2b-5723582baa3f-cilium-ipsec-secrets\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576227 kubelet[1757]: I0130 13:04:44.575933 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-hostproc\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576227 kubelet[1757]: I0130 13:04:44.575947 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-cilium-cgroup\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576227 kubelet[1757]: I0130 13:04:44.575960 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-cni-path\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576347 kubelet[1757]: I0130 13:04:44.575974 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-lib-modules\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576347 kubelet[1757]: I0130 13:04:44.575989 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-xtables-lock\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576347 kubelet[1757]: I0130 13:04:44.576009 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f53d459b-d0e8-4c70-9e2b-5723582baa3f-cilium-config-path\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576347 kubelet[1757]: I0130 13:04:44.576023 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-host-proc-sys-kernel\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576347 kubelet[1757]: I0130 13:04:44.576040 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7e444de7-1440-422d-9cc0-6d89bfed3a6e-cilium-config-path\") pod \"cilium-operator-599987898-59dkj\" (UID: \"7e444de7-1440-422d-9cc0-6d89bfed3a6e\") " pod="kube-system/cilium-operator-599987898-59dkj" Jan 30 13:04:44.576457 kubelet[1757]: I0130 13:04:44.576074 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f53d459b-d0e8-4c70-9e2b-5723582baa3f-host-proc-sys-net\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576457 kubelet[1757]: I0130 13:04:44.576096 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f53d459b-d0e8-4c70-9e2b-5723582baa3f-hubble-tls\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.576457 kubelet[1757]: I0130 13:04:44.576112 1757 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6ntm\" (UniqueName: \"kubernetes.io/projected/f53d459b-d0e8-4c70-9e2b-5723582baa3f-kube-api-access-d6ntm\") pod \"cilium-zr5sd\" (UID: \"f53d459b-d0e8-4c70-9e2b-5723582baa3f\") " pod="kube-system/cilium-zr5sd" Jan 30 13:04:44.837547 kubelet[1757]: E0130 13:04:44.837497 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:44.838090 containerd[1453]: time="2025-01-30T13:04:44.838036415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-59dkj,Uid:7e444de7-1440-422d-9cc0-6d89bfed3a6e,Namespace:kube-system,Attempt:0,}" Jan 30 13:04:44.855802 kubelet[1757]: E0130 13:04:44.855767 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:44.856308 containerd[1453]: time="2025-01-30T13:04:44.856273754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zr5sd,Uid:f53d459b-d0e8-4c70-9e2b-5723582baa3f,Namespace:kube-system,Attempt:0,}" Jan 30 13:04:44.864757 containerd[1453]: time="2025-01-30T13:04:44.864390693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:44.864757 containerd[1453]: time="2025-01-30T13:04:44.864442137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:44.864757 containerd[1453]: time="2025-01-30T13:04:44.864466940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:44.864757 containerd[1453]: time="2025-01-30T13:04:44.864549307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:44.875619 containerd[1453]: time="2025-01-30T13:04:44.875369732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:04:44.875619 containerd[1453]: time="2025-01-30T13:04:44.875442738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:04:44.875619 containerd[1453]: time="2025-01-30T13:04:44.875465220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:44.875789 containerd[1453]: time="2025-01-30T13:04:44.875572470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:04:44.886248 systemd[1]: Started cri-containerd-914cda86fc26b512c172ab0fc06926604ff0919e58744c6a8e2c6b867b508af4.scope - libcontainer container 914cda86fc26b512c172ab0fc06926604ff0919e58744c6a8e2c6b867b508af4. Jan 30 13:04:44.889579 systemd[1]: Started cri-containerd-8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3.scope - libcontainer container 8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3. 
Jan 30 13:04:44.912520 containerd[1453]: time="2025-01-30T13:04:44.912352297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zr5sd,Uid:f53d459b-d0e8-4c70-9e2b-5723582baa3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\"" Jan 30 13:04:44.913690 kubelet[1757]: E0130 13:04:44.913664 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:44.916486 containerd[1453]: time="2025-01-30T13:04:44.916353301Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jan 30 13:04:44.918685 containerd[1453]: time="2025-01-30T13:04:44.918655390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-59dkj,Uid:7e444de7-1440-422d-9cc0-6d89bfed3a6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"914cda86fc26b512c172ab0fc06926604ff0919e58744c6a8e2c6b867b508af4\"" Jan 30 13:04:44.920302 kubelet[1757]: E0130 13:04:44.920094 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:44.921284 containerd[1453]: time="2025-01-30T13:04:44.921257187Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jan 30 13:04:44.930560 containerd[1453]: time="2025-01-30T13:04:44.930436062Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab\"" Jan 30 13:04:44.930899 containerd[1453]: time="2025-01-30T13:04:44.930870982Z" level=info msg="StartContainer for \"7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab\"" Jan 30 13:04:44.958273 systemd[1]: Started cri-containerd-7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab.scope - libcontainer container 7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab. Jan 30 13:04:44.972183 kubelet[1757]: E0130 13:04:44.971914 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:44.981033 containerd[1453]: time="2025-01-30T13:04:44.980891213Z" level=info msg="StartContainer for \"7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab\" returns successfully" Jan 30 13:04:45.020443 kubelet[1757]: I0130 13:04:45.019702 1757 setters.go:580] "Node became not ready" node="10.0.0.103" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-30T13:04:45Z","lastTransitionTime":"2025-01-30T13:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jan 30 13:04:45.038002 systemd[1]: cri-containerd-7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab.scope: Deactivated successfully. 
Jan 30 13:04:45.063479 containerd[1453]: time="2025-01-30T13:04:45.063281869Z" level=info msg="shim disconnected" id=7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab namespace=k8s.io Jan 30 13:04:45.063479 containerd[1453]: time="2025-01-30T13:04:45.063330433Z" level=warning msg="cleaning up after shim disconnected" id=7a3b1a8800c73e3aa52443747c24eabc7a77de215190497996ac055898918aab namespace=k8s.io Jan 30 13:04:45.063479 containerd[1453]: time="2025-01-30T13:04:45.063337914Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:45.196654 kubelet[1757]: E0130 13:04:45.195978 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:45.198839 containerd[1453]: time="2025-01-30T13:04:45.198708994Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jan 30 13:04:45.213029 containerd[1453]: time="2025-01-30T13:04:45.212872877Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb\"" Jan 30 13:04:45.213425 containerd[1453]: time="2025-01-30T13:04:45.213403204Z" level=info msg="StartContainer for \"af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb\"" Jan 30 13:04:45.245271 systemd[1]: Started cri-containerd-af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb.scope - libcontainer container af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb. Jan 30 13:04:45.270418 containerd[1453]: time="2025-01-30T13:04:45.270261314Z" level=info msg="StartContainer for \"af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb\" returns successfully" Jan 30 13:04:45.297646 systemd[1]: cri-containerd-af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb.scope: Deactivated successfully. Jan 30 13:04:45.321714 containerd[1453]: time="2025-01-30T13:04:45.321460087Z" level=info msg="shim disconnected" id=af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb namespace=k8s.io Jan 30 13:04:45.321714 containerd[1453]: time="2025-01-30T13:04:45.321522053Z" level=warning msg="cleaning up after shim disconnected" id=af24a0e780089c35824a47036ce5b77ad87eda4698d476278c7f3b1f2dc51bdb namespace=k8s.io Jan 30 13:04:45.321714 containerd[1453]: time="2025-01-30T13:04:45.321530533Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:45.972496 kubelet[1757]: E0130 13:04:45.972447 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:46.203325 kubelet[1757]: E0130 13:04:46.202942 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:46.204851 containerd[1453]: time="2025-01-30T13:04:46.204612408Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jan 30 13:04:46.215418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount736738398.mount: Deactivated successfully. 
Jan 30 13:04:46.217125 containerd[1453]: time="2025-01-30T13:04:46.217089185Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338\"" Jan 30 13:04:46.217722 containerd[1453]: time="2025-01-30T13:04:46.217700037Z" level=info msg="StartContainer for \"04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338\"" Jan 30 13:04:46.246298 systemd[1]: Started cri-containerd-04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338.scope - libcontainer container 04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338. Jan 30 13:04:46.271708 containerd[1453]: time="2025-01-30T13:04:46.271669130Z" level=info msg="StartContainer for \"04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338\" returns successfully" Jan 30 13:04:46.273335 systemd[1]: cri-containerd-04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338.scope: Deactivated successfully. Jan 30 13:04:46.298343 containerd[1453]: time="2025-01-30T13:04:46.298281785Z" level=info msg="shim disconnected" id=04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338 namespace=k8s.io Jan 30 13:04:46.298343 containerd[1453]: time="2025-01-30T13:04:46.298338030Z" level=warning msg="cleaning up after shim disconnected" id=04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338 namespace=k8s.io Jan 30 13:04:46.298343 containerd[1453]: time="2025-01-30T13:04:46.298346511Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:46.681263 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04d7da1493f04dc789f3ffce05ddcd17839cf4fd69d98ff4fbb0911bfffe0338-rootfs.mount: Deactivated successfully. Jan 30 13:04:46.973420 kubelet[1757]: E0130 13:04:46.973299 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:47.021419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2257983060.mount: Deactivated successfully. Jan 30 13:04:47.208561 kubelet[1757]: E0130 13:04:47.208040 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:47.219272 containerd[1453]: time="2025-01-30T13:04:47.219225563Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jan 30 13:04:47.238153 containerd[1453]: time="2025-01-30T13:04:47.237860569Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63\"" Jan 30 13:04:47.238967 containerd[1453]: time="2025-01-30T13:04:47.238823208Z" level=info msg="StartContainer for \"2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63\"" Jan 30 13:04:47.266250 systemd[1]: Started cri-containerd-2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63.scope - libcontainer container 2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63. Jan 30 13:04:47.293582 systemd[1]: cri-containerd-2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63.scope: Deactivated successfully. 
Jan 30 13:04:47.296268 containerd[1453]: time="2025-01-30T13:04:47.296223789Z" level=info msg="StartContainer for \"2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63\" returns successfully" Jan 30 13:04:47.366576 containerd[1453]: time="2025-01-30T13:04:47.366245724Z" level=info msg="shim disconnected" id=2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63 namespace=k8s.io Jan 30 13:04:47.366576 containerd[1453]: time="2025-01-30T13:04:47.366298248Z" level=warning msg="cleaning up after shim disconnected" id=2f37ac1229206c880cc0b3797e67b86671c62a983390d67dca473f596d6eea63 namespace=k8s.io Jan 30 13:04:47.366576 containerd[1453]: time="2025-01-30T13:04:47.366306369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:04:47.413230 containerd[1453]: time="2025-01-30T13:04:47.413171567Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:47.413957 containerd[1453]: time="2025-01-30T13:04:47.413909067Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jan 30 13:04:47.414648 containerd[1453]: time="2025-01-30T13:04:47.414623846Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:04:47.415833 containerd[1453]: time="2025-01-30T13:04:47.415797942Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.494317494s" Jan 30 13:04:47.415879 containerd[1453]: time="2025-01-30T13:04:47.415833345Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jan 30 13:04:47.417883 containerd[1453]: time="2025-01-30T13:04:47.417855391Z" level=info msg="CreateContainer within sandbox \"914cda86fc26b512c172ab0fc06926604ff0919e58744c6a8e2c6b867b508af4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jan 30 13:04:47.443284 containerd[1453]: time="2025-01-30T13:04:47.443235589Z" level=info msg="CreateContainer within sandbox \"914cda86fc26b512c172ab0fc06926604ff0919e58744c6a8e2c6b867b508af4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"56ece6cdfbc1607dc336d33629a0e0ee993dacaf22e836f23b405f72403f6aa3\"" Jan 30 13:04:47.444081 containerd[1453]: time="2025-01-30T13:04:47.443973210Z" level=info msg="StartContainer for \"56ece6cdfbc1607dc336d33629a0e0ee993dacaf22e836f23b405f72403f6aa3\"" Jan 30 13:04:47.468236 systemd[1]: Started cri-containerd-56ece6cdfbc1607dc336d33629a0e0ee993dacaf22e836f23b405f72403f6aa3.scope - libcontainer container 56ece6cdfbc1607dc336d33629a0e0ee993dacaf22e836f23b405f72403f6aa3. 
Jan 30 13:04:47.508635 containerd[1453]: time="2025-01-30T13:04:47.508298638Z" level=info msg="StartContainer for \"56ece6cdfbc1607dc336d33629a0e0ee993dacaf22e836f23b405f72403f6aa3\" returns successfully" Jan 30 13:04:47.973984 kubelet[1757]: E0130 13:04:47.973935 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:48.211118 kubelet[1757]: E0130 13:04:48.210878 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:48.214893 kubelet[1757]: E0130 13:04:48.214859 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:48.217397 containerd[1453]: time="2025-01-30T13:04:48.217357261Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jan 30 13:04:48.225458 kubelet[1757]: I0130 13:04:48.225185 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-59dkj" podStartSLOduration=1.729610959 podStartE2EDuration="4.225165399s" podCreationTimestamp="2025-01-30 13:04:44 +0000 UTC" firstStartedPulling="2025-01-30 13:04:44.920979522 +0000 UTC m=+51.462108886" lastFinishedPulling="2025-01-30 13:04:47.416533962 +0000 UTC m=+53.957663326" observedRunningTime="2025-01-30 13:04:48.222767329 +0000 UTC m=+54.763896693" watchObservedRunningTime="2025-01-30 13:04:48.225165399 +0000 UTC m=+54.766294763" Jan 30 13:04:48.238859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount57363517.mount: Deactivated successfully. Jan 30 13:04:48.252277 containerd[1453]: time="2025-01-30T13:04:48.239872245Z" level=info msg="CreateContainer within sandbox \"8112d53469a7729aa9d0c288f5b584b7da72b5c85cb22cbf145c2cda56088be3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120\"" Jan 30 13:04:48.254937 containerd[1453]: time="2025-01-30T13:04:48.253037848Z" level=info msg="StartContainer for \"b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120\"" Jan 30 13:04:48.294173 systemd[1]: Started cri-containerd-b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120.scope - libcontainer container b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120. Jan 30 13:04:48.345694 containerd[1453]: time="2025-01-30T13:04:48.345630345Z" level=info msg="StartContainer for \"b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120\" returns successfully" Jan 30 13:04:48.638122 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jan 30 13:04:48.682975 systemd[1]: run-containerd-runc-k8s.io-b7bd317dc312132477efcad79da22500f6e3a5082b13cae99553a69fb268e120-runc.3it7qw.mount: Deactivated successfully. 
Jan 30 13:04:48.975174 kubelet[1757]: E0130 13:04:48.975012 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:49.220423 kubelet[1757]: E0130 13:04:49.220392 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:49.220749 kubelet[1757]: E0130 13:04:49.220713 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:49.975597 kubelet[1757]: E0130 13:04:49.975560 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:50.858180 kubelet[1757]: E0130 13:04:50.858044 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:50.975984 kubelet[1757]: E0130 13:04:50.975757 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:51.706719 systemd-networkd[1378]: lxc_health: Link UP Jan 30 13:04:51.718684 systemd-networkd[1378]: lxc_health: Gained carrier Jan 30 13:04:51.977277 kubelet[1757]: E0130 13:04:51.977157 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:52.784253 systemd-networkd[1378]: lxc_health: Gained IPv6LL Jan 30 13:04:52.862224 kubelet[1757]: E0130 13:04:52.862185 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:52.881271 kubelet[1757]: I0130 13:04:52.879399 1757 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zr5sd" podStartSLOduration=8.879378998 podStartE2EDuration="8.879378998s" podCreationTimestamp="2025-01-30 13:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:04:49.249909259 +0000 UTC m=+55.791038623" watchObservedRunningTime="2025-01-30 13:04:52.879378998 +0000 UTC m=+59.420508362" Jan 30 13:04:52.978420 kubelet[1757]: E0130 13:04:52.978368 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:53.229360 kubelet[1757]: E0130 13:04:53.229167 1757 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:04:53.940765 kubelet[1757]: E0130 13:04:53.940719 1757 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:53.963367 containerd[1453]: time="2025-01-30T13:04:53.963297153Z" level=info msg="StopPodSandbox for \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\"" Jan 30 13:04:53.963962 containerd[1453]: time="2025-01-30T13:04:53.963419521Z" level=info msg="TearDown network for sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" successfully" Jan 30 13:04:53.963962 containerd[1453]: time="2025-01-30T13:04:53.963430082Z" level=info msg="StopPodSandbox for 
\"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" returns successfully" Jan 30 13:04:53.963962 containerd[1453]: time="2025-01-30T13:04:53.963854791Z" level=info msg="RemovePodSandbox for \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\"" Jan 30 13:04:53.963962 containerd[1453]: time="2025-01-30T13:04:53.963881633Z" level=info msg="Forcibly stopping sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\"" Jan 30 13:04:53.963962 containerd[1453]: time="2025-01-30T13:04:53.963930676Z" level=info msg="TearDown network for sandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" successfully" Jan 30 13:04:53.979206 kubelet[1757]: E0130 13:04:53.979141 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:53.984357 containerd[1453]: time="2025-01-30T13:04:53.984280265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:04:53.984357 containerd[1453]: time="2025-01-30T13:04:53.984354390Z" level=info msg="RemovePodSandbox \"75e05d1a0a3e3e1432da6e42437bc284ac6d16143cbdcef1747981b3425be8dd\" returns successfully" Jan 30 13:04:54.980146 kubelet[1757]: E0130 13:04:54.980095 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:55.980788 kubelet[1757]: E0130 13:04:55.980729 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:56.982206 kubelet[1757]: E0130 13:04:56.982145 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:57.982819 kubelet[1757]: E0130 13:04:57.982741 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:58.983436 kubelet[1757]: E0130 13:04:58.983357 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:04:59.984543 kubelet[1757]: E0130 13:04:59.984480 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 30 13:05:00.985415 kubelet[1757]: E0130 13:05:00.985370 1757 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"