Feb 13 15:35:51.958352 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:35:51.958376 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 14:02:42 -00 2025
Feb 13 15:35:51.958386 kernel: KASLR enabled
Feb 13 15:35:51.958392 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:35:51.958398 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 
Feb 13 15:35:51.958404 kernel: random: crng init done
Feb 13 15:35:51.958411 kernel: secureboot: Secure boot disabled
Feb 13 15:35:51.958417 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:35:51.958432 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:35:51.958441 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb 13 15:35:51.958447 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958453 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958459 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958465 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958472 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958480 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958486 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958492 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958498 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:35:51.958505 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:35:51.958511 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:35:51.958517 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:51.958523 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:35:51.958529 kernel: Zone ranges:
Feb 13 15:35:51.958535 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:51.958543 kernel:   DMA32    empty
Feb 13 15:35:51.958549 kernel:   Normal   empty
Feb 13 15:35:51.958555 kernel: Movable zone start for each node
Feb 13 15:35:51.958561 kernel: Early memory node ranges
Feb 13 15:35:51.958568 kernel:   node   0: [mem 0x0000000040000000-0x00000000d967ffff]
Feb 13 15:35:51.958574 kernel:   node   0: [mem 0x00000000d9680000-0x00000000d968ffff]
Feb 13 15:35:51.958580 kernel:   node   0: [mem 0x00000000d9690000-0x00000000d976ffff]
Feb 13 15:35:51.958586 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:35:51.958592 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:35:51.958598 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:35:51.958604 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:35:51.958610 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:35:51.958618 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:35:51.958624 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:35:51.958631 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:35:51.958639 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:35:51.958646 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:35:51.958653 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:35:51.958660 kernel: psci: Trusted OS migration not required
Feb 13 15:35:51.958667 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:35:51.958723 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:35:51.958731 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:35:51.958738 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:35:51.958745 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb 13 15:35:51.958752 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:35:51.958759 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:35:51.958766 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:35:51.958772 kernel: CPU features: detected: Spectre-v4
Feb 13 15:35:51.958782 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:35:51.958789 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:35:51.958796 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:35:51.958802 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:35:51.958905 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:35:51.958913 kernel: alternatives: applying boot alternatives
Feb 13 15:35:51.958921 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:35:51.958928 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:35:51.958935 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:35:51.958942 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:35:51.958949 kernel: Fallback order for Node 0: 0 
Feb 13 15:35:51.958959 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb 13 15:35:51.958965 kernel: Policy zone: DMA
Feb 13 15:35:51.958972 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:35:51.958978 kernel: software IO TLB: area num 4.
Feb 13 15:35:51.958985 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:35:51.958992 kernel: Memory: 2385940K/2572288K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39936K init, 897K bss, 186348K reserved, 0K cma-reserved)
Feb 13 15:35:51.958998 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:35:51.959005 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:35:51.959060 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:35:51.959068 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:35:51.959075 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:35:51.959081 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:35:51.959091 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:35:51.959097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:35:51.959104 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:35:51.959110 kernel: GICv3: 256 SPIs implemented
Feb 13 15:35:51.959117 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:35:51.959123 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:35:51.959130 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:35:51.959136 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:35:51.959143 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:35:51.959150 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:35:51.959157 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:35:51.959164 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:35:51.959171 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:35:51.959178 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:35:51.959184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:51.959191 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:35:51.959198 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:35:51.959204 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:35:51.959211 kernel: arm-pv: using stolen time PV
Feb 13 15:35:51.959218 kernel: Console: colour dummy device 80x25
Feb 13 15:35:51.959225 kernel: ACPI: Core revision 20230628
Feb 13 15:35:51.959267 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:35:51.959277 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:35:51.959284 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:35:51.959291 kernel: landlock: Up and running.
Feb 13 15:35:51.959298 kernel: SELinux:  Initializing.
Feb 13 15:35:51.959304 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:51.959311 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:35:51.959318 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:51.959325 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:35:51.959332 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:35:51.959340 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:35:51.959347 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:35:51.959354 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:35:51.959360 kernel: Remapping and enabling EFI services.
Feb 13 15:35:51.959367 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:35:51.959374 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:35:51.959381 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:35:51.959388 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:35:51.959394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:51.959402 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:35:51.959410 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:35:51.959427 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:35:51.959481 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:35:51.959489 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:51.959499 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:35:51.959508 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:35:51.959518 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:35:51.959525 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:35:51.959535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:35:51.959542 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:35:51.959550 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:35:51.959557 kernel: SMP: Total of 4 processors activated.
Feb 13 15:35:51.959564 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:35:51.959573 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:35:51.959580 kernel: CPU features: detected: Common not Private translations
Feb 13 15:35:51.959587 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:35:51.959597 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:35:51.959604 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:35:51.959612 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:35:51.959622 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:35:51.959630 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:35:51.959642 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:35:51.959649 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:35:51.959686 kernel: alternatives: applying system-wide alternatives
Feb 13 15:35:51.959695 kernel: devtmpfs: initialized
Feb 13 15:35:51.959702 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:35:51.959712 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:35:51.959719 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:35:51.959726 kernel: SMBIOS 3.0.0 present.
Feb 13 15:35:51.959733 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:35:51.959740 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:35:51.959747 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:35:51.959755 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:35:51.959762 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:35:51.959770 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:35:51.959777 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:35:51.959784 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:35:51.959791 kernel: cpuidle: using governor menu
Feb 13 15:35:51.959798 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:35:51.959805 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:35:51.959812 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:35:51.959819 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:35:51.959826 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:35:51.959833 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:35:51.959892 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 15:35:51.959903 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:35:51.959910 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:35:51.959917 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:35:51.959924 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:35:51.959932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:35:51.959939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:35:51.960051 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:35:51.960097 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:35:51.960104 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:35:51.960112 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:35:51.960119 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:35:51.960126 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:35:51.960133 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:35:51.960140 kernel: ACPI: Interpreter enabled
Feb 13 15:35:51.960147 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:35:51.960154 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:35:51.960162 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:35:51.960170 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:35:51.960178 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:35:51.960506 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:35:51.961139 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:35:51.961246 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:35:51.961311 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:35:51.961372 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:35:51.961386 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb 13 15:35:51.961393 kernel: PCI host bridge to bus 0000:00
Feb 13 15:35:51.961480 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:51.961540 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 15:35:51.961598 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:51.961654 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:35:51.961733 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:35:51.961816 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:35:51.961883 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb 13 15:35:51.961948 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:35:51.962011 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:51.962090 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:35:51.962155 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:35:51.962220 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb 13 15:35:51.962282 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:35:51.962340 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 15:35:51.962396 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:35:51.962405 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:35:51.962413 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:35:51.962420 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:35:51.962436 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:35:51.962446 kernel: iommu: Default domain type: Translated
Feb 13 15:35:51.962453 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:35:51.962461 kernel: efivars: Registered efivars operations
Feb 13 15:35:51.962468 kernel: vgaarb: loaded
Feb 13 15:35:51.962475 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:35:51.962482 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:35:51.962489 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:35:51.962496 kernel: pnp: PnP ACPI init
Feb 13 15:35:51.962572 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:35:51.962584 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:35:51.962592 kernel: NET: Registered PF_INET protocol family
Feb 13 15:35:51.962599 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:35:51.962607 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:35:51.962614 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:35:51.962621 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:35:51.962628 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:35:51.962636 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:35:51.962643 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:51.962652 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:35:51.962659 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:35:51.962666 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:35:51.962673 kernel: kvm [1]: HYP mode not available
Feb 13 15:35:51.962680 kernel: Initialise system trusted keyrings
Feb 13 15:35:51.962687 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:35:51.962694 kernel: Key type asymmetric registered
Feb 13 15:35:51.962701 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:35:51.962708 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:35:51.962717 kernel: io scheduler mq-deadline registered
Feb 13 15:35:51.962724 kernel: io scheduler kyber registered
Feb 13 15:35:51.962731 kernel: io scheduler bfq registered
Feb 13 15:35:51.962739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:35:51.962746 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:35:51.962754 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:35:51.962832 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:35:51.962841 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:35:51.962848 kernel: thunder_xcv, ver 1.0
Feb 13 15:35:51.962857 kernel: thunder_bgx, ver 1.0
Feb 13 15:35:51.962864 kernel: nicpf, ver 1.0
Feb 13 15:35:51.962871 kernel: nicvf, ver 1.0
Feb 13 15:35:51.962945 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:35:51.963006 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:35:51 UTC (1739460951)
Feb 13 15:35:51.963037 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:35:51.963045 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:35:51.963053 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:35:51.963063 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:35:51.963070 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:35:51.963077 kernel: Segment Routing with IPv6
Feb 13 15:35:51.963084 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:35:51.963091 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:35:51.963098 kernel: Key type dns_resolver registered
Feb 13 15:35:51.963105 kernel: registered taskstats version 1
Feb 13 15:35:51.963113 kernel: Loading compiled-in X.509 certificates
Feb 13 15:35:51.963120 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 62d673f884efd54b6d6ef802a9b879413c8a346e'
Feb 13 15:35:51.963128 kernel: Key type .fscrypt registered
Feb 13 15:35:51.963135 kernel: Key type fscrypt-provisioning registered
Feb 13 15:35:51.963142 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:35:51.963150 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:35:51.963157 kernel: ima: No architecture policies found
Feb 13 15:35:51.963164 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:35:51.963171 kernel: clk: Disabling unused clocks
Feb 13 15:35:51.963178 kernel: Freeing unused kernel memory: 39936K
Feb 13 15:35:51.963185 kernel: Run /init as init process
Feb 13 15:35:51.963193 kernel:   with arguments:
Feb 13 15:35:51.963200 kernel:     /init
Feb 13 15:35:51.963207 kernel:   with environment:
Feb 13 15:35:51.963214 kernel:     HOME=/
Feb 13 15:35:51.963221 kernel:     TERM=linux
Feb 13 15:35:51.963228 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:35:51.963237 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:51.963246 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:51.963255 systemd[1]: Detected architecture arm64.
Feb 13 15:35:51.963262 systemd[1]: Running in initrd.
Feb 13 15:35:51.963270 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:35:51.963277 systemd[1]: Hostname set to <localhost>.
Feb 13 15:35:51.963285 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:35:51.963292 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:35:51.963300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:51.963308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:51.963317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:35:51.963325 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:51.963333 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:35:51.963341 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:35:51.963350 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:35:51.963358 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:35:51.963367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:51.963375 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:51.963382 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:35:51.963390 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:51.963398 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:51.963405 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:35:51.963413 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:51.963420 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:51.963437 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:35:51.963446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:35:51.963454 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:51.963462 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:51.963470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:51.963477 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:35:51.963485 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:35:51.963493 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:51.963501 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:35:51.963510 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:35:51.963518 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:51.963525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:51.963533 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:51.963541 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:51.963549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:51.963557 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:35:51.963585 systemd-journald[240]: Collecting audit messages is disabled.
Feb 13 15:35:51.963604 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:35:51.963614 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:51.963621 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:51.963629 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:35:51.963637 systemd-journald[240]: Journal started
Feb 13 15:35:51.963660 systemd-journald[240]: Runtime Journal (/run/log/journal/1dc2cefa35174ba4af1068f00c48fdb3) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:35:51.949166 systemd-modules-load[241]: Inserted module 'overlay'
Feb 13 15:35:51.966985 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:51.967627 systemd-modules-load[241]: Inserted module 'br_netfilter'
Feb 13 15:35:51.968661 kernel: Bridge firewalling registered
Feb 13 15:35:51.969133 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:51.971597 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:51.973402 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:51.976143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:35:51.979460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:35:51.983375 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:51.988264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:51.994108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:52.008219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:52.009470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:52.012924 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:35:52.027280 dracut-cmdline[281]: dracut-dracut-053
Feb 13 15:35:52.029855 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=685b18f1e2a119f561f35348e788538aade62ddb9fa889a87d9e00058aaa4b5a
Feb 13 15:35:52.039739 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 15:35:52.039756 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:35:52.039787 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:35:52.044342 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 15:35:52.045291 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:52.049331 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:52.102053 kernel: SCSI subsystem initialized
Feb 13 15:35:52.107038 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:35:52.114042 kernel: iscsi: registered transport (tcp)
Feb 13 15:35:52.127181 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:35:52.127208 kernel: QLogic iSCSI HBA Driver
Feb 13 15:35:52.170696 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:52.180156 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:35:52.197080 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:35:52.197150 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:35:52.197161 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:35:52.246062 kernel: raid6: neonx8   gen() 15755 MB/s
Feb 13 15:35:52.263059 kernel: raid6: neonx4   gen() 15773 MB/s
Feb 13 15:35:52.280073 kernel: raid6: neonx2   gen() 13186 MB/s
Feb 13 15:35:52.297049 kernel: raid6: neonx1   gen() 10482 MB/s
Feb 13 15:35:52.314040 kernel: raid6: int64x8  gen()  6780 MB/s
Feb 13 15:35:52.331063 kernel: raid6: int64x4  gen()  7324 MB/s
Feb 13 15:35:52.348061 kernel: raid6: int64x2  gen()  6093 MB/s
Feb 13 15:35:52.365223 kernel: raid6: int64x1  gen()  5041 MB/s
Feb 13 15:35:52.365264 kernel: raid6: using algorithm neonx4 gen() 15773 MB/s
Feb 13 15:35:52.383263 kernel: raid6: .... xor() 12380 MB/s, rmw enabled
Feb 13 15:35:52.383315 kernel: raid6: using neon recovery algorithm
Feb 13 15:35:52.388045 kernel: xor: measuring software checksum speed
Feb 13 15:35:52.389331 kernel:    8regs           : 19074 MB/sec
Feb 13 15:35:52.389355 kernel:    32regs          : 21681 MB/sec
Feb 13 15:35:52.390701 kernel:    arm64_neon      : 26373 MB/sec
Feb 13 15:35:52.390723 kernel: xor: using function: arm64_neon (26373 MB/sec)
Feb 13 15:35:52.444050 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:35:52.458187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:52.473210 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:52.486119 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Feb 13 15:35:52.489249 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:52.492655 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:35:52.510056 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Feb 13 15:35:52.546984 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:52.564384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:52.603740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:52.613514 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:35:52.627083 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:52.630668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:52.632276 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:52.633748 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:52.644238 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:35:52.654488 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:52.659469 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:35:52.668806 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:35:52.668913 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:35:52.668925 kernel: GPT:9289727 != 19775487
Feb 13 15:35:52.668940 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:35:52.668950 kernel: GPT:9289727 != 19775487
Feb 13 15:35:52.668960 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:35:52.668970 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:52.668436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:52.668591 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:52.671599 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:52.673065 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:52.673201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:52.680058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:52.700044 kernel: BTRFS: device fsid dbbe73f5-49db-4e16-b023-d47ce63b488f devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (511)
Feb 13 15:35:52.700079 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (508)
Feb 13 15:35:52.701910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:52.715288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:52.720219 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:35:52.724854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:35:52.728690 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:35:52.730048 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:35:52.735620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:35:52.749232 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:35:52.754184 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:35:52.771833 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:52.826657 disk-uuid[551]: Primary Header is updated.
Feb 13 15:35:52.826657 disk-uuid[551]: Secondary Entries is updated.
Feb 13 15:35:52.826657 disk-uuid[551]: Secondary Header is updated.
Feb 13 15:35:52.830040 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:53.842037 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:35:53.842978 disk-uuid[560]: The operation has completed successfully.
Feb 13 15:35:53.864366 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:35:53.864481 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:35:53.890186 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:35:53.893486 sh[572]: Success
Feb 13 15:35:53.910043 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:35:53.939604 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:35:53.958656 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:35:53.962057 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:35:53.974619 kernel: BTRFS info (device dm-0): first mount of filesystem dbbe73f5-49db-4e16-b023-d47ce63b488f
Feb 13 15:35:53.974655 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:53.974666 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:35:53.976711 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:35:53.976727 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:35:53.981874 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:35:53.982953 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:35:53.990220 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:35:53.992228 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:35:54.011260 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:54.011321 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:54.011332 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:54.014044 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:54.022477 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:35:54.024999 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:54.033037 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:35:54.040202 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:35:54.104921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:54.113273 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:35:54.146487 systemd-networkd[761]: lo: Link UP
Feb 13 15:35:54.146499 systemd-networkd[761]: lo: Gained carrier
Feb 13 15:35:54.148456 systemd-networkd[761]: Enumeration completed
Feb 13 15:35:54.148609 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:35:54.149047 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:54.149051 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:35:54.149994 systemd-networkd[761]: eth0: Link UP
Feb 13 15:35:54.149997 systemd-networkd[761]: eth0: Gained carrier
Feb 13 15:35:54.150004 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:54.150388 systemd[1]: Reached target network.target - Network.
Feb 13 15:35:54.159036 ignition[673]: Ignition 2.20.0
Feb 13 15:35:54.159043 ignition[673]: Stage: fetch-offline
Feb 13 15:35:54.159082 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:54.159091 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:54.159253 ignition[673]: parsed url from cmdline: ""
Feb 13 15:35:54.159259 ignition[673]: no config URL provided
Feb 13 15:35:54.159264 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:35:54.159271 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:35:54.159301 ignition[673]: op(1): [started]  loading QEMU firmware config module
Feb 13 15:35:54.159306 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:35:54.168622 ignition[673]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:35:54.170085 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:35:54.176438 ignition[673]: parsing config with SHA512: 71e0f93d8d2013cdfccb765ad955ca7b6de937faa4c124e86aadebbbee55da99fddc7e67b322bcdabdc8120414d7529877b87c396dd8ac0d032b92195d12125b
Feb 13 15:35:54.180603 unknown[673]: fetched base config from "system"
Feb 13 15:35:54.180615 unknown[673]: fetched user config from "qemu"
Feb 13 15:35:54.180899 ignition[673]: fetch-offline: fetch-offline passed
Feb 13 15:35:54.182767 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:54.180973 ignition[673]: Ignition finished successfully
Feb 13 15:35:54.184422 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:35:54.194202 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:35:54.208236 ignition[768]: Ignition 2.20.0
Feb 13 15:35:54.208246 ignition[768]: Stage: kargs
Feb 13 15:35:54.208428 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:54.208440 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:54.209164 ignition[768]: kargs: kargs passed
Feb 13 15:35:54.209213 ignition[768]: Ignition finished successfully
Feb 13 15:35:54.213726 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:54.223258 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:35:54.233324 ignition[777]: Ignition 2.20.0
Feb 13 15:35:54.233336 ignition[777]: Stage: disks
Feb 13 15:35:54.233519 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:54.233529 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:54.235782 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:35:54.234226 ignition[777]: disks: disks passed
Feb 13 15:35:54.238238 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:54.234274 ignition[777]: Ignition finished successfully
Feb 13 15:35:54.239455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:35:54.241531 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:54.243108 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:35:54.245275 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:35:54.255197 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:35:54.269866 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:35:54.275258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:35:54.289165 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:35:54.338049 kernel: EXT4-fs (vda9): mounted filesystem 469d244b-00c1-45f4-bce0-c1d88e98a895 r/w with ordered data mode. Quota mode: none.
Feb 13 15:35:54.338151 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:35:54.339511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:54.348126 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:54.350147 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:35:54.352515 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:35:54.352579 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:35:54.352606 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:54.360399 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (795)
Feb 13 15:35:54.360430 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:54.357042 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:35:54.364460 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:54.364483 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:54.360217 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:35:54.367367 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:54.369487 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:54.402206 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:35:54.406625 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:35:54.411141 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:35:54.415194 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:35:54.515061 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:54.527188 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:35:54.529926 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:54.535140 kernel: BTRFS info (device vda6): last unmount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:54.562117 ignition[908]: INFO     : Ignition 2.20.0
Feb 13 15:35:54.562117 ignition[908]: INFO     : Stage: mount
Feb 13 15:35:54.562117 ignition[908]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:54.562117 ignition[908]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:54.567568 ignition[908]: INFO     : mount: mount passed
Feb 13 15:35:54.567568 ignition[908]: INFO     : Ignition finished successfully
Feb 13 15:35:54.564571 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:35:54.575138 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:35:54.576316 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:54.972735 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:35:54.985223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:35:54.993192 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (922)
Feb 13 15:35:54.993233 kernel: BTRFS info (device vda6): first mount of filesystem f03a17c4-6ca2-4f02-a9a3-5e771d63df74
Feb 13 15:35:54.993243 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:35:54.994193 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:35:54.998038 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:35:54.998783 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:35:55.022097 ignition[939]: INFO     : Ignition 2.20.0
Feb 13 15:35:55.022097 ignition[939]: INFO     : Stage: files
Feb 13 15:35:55.023900 ignition[939]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:55.023900 ignition[939]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:55.023900 ignition[939]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:35:55.027760 ignition[939]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:35:55.027760 ignition[939]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:35:55.027760 ignition[939]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:35:55.027760 ignition[939]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:35:55.027760 ignition[939]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:35:55.026554 unknown[939]: wrote ssh authorized keys file for user: core
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:55.035876 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 15:35:55.253583 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:35:55.481741 ignition[939]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 15:35:55.481741 ignition[939]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Feb 13 15:35:55.485334 ignition[939]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:55.485334 ignition[939]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:35:55.485334 ignition[939]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 15:35:55.485334 ignition[939]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:55.519444 ignition[939]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:55.523938 ignition[939]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:35:55.525669 ignition[939]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:35:55.525669 ignition[939]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:55.525669 ignition[939]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:35:55.525669 ignition[939]: INFO     : files: files passed
Feb 13 15:35:55.525669 ignition[939]: INFO     : Ignition finished successfully
Feb 13 15:35:55.528726 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:35:55.539249 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:35:55.541924 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:35:55.543434 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:35:55.543527 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:35:55.548883 systemd-networkd[761]: eth0: Gained IPv6LL
Feb 13 15:35:55.551202 initrd-setup-root-after-ignition[968]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:35:55.554170 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:55.554170 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:55.557983 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:35:55.557576 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:55.559492 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:35:55.572537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:35:55.595221 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:35:55.596095 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:35:55.597690 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:55.599647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:35:55.601573 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:35:55.602492 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:35:55.619707 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:55.632262 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:35:55.642984 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:55.645336 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:55.646710 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:35:55.648535 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:35:55.648672 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:35:55.651171 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:35:55.653420 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:35:55.655056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:35:55.656853 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:35:55.658895 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:35:55.660994 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:35:55.662958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:35:55.665064 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:35:55.667140 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:35:55.668985 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:35:55.670584 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:35:55.670718 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:35:55.673064 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:55.675093 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:55.676949 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:35:55.678151 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:55.680194 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:35:55.680328 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:35:55.683145 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:35:55.683285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:35:55.685348 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:35:55.686982 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:35:55.692096 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:55.693532 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:35:55.695712 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:35:55.697402 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:35:55.697514 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:35:55.699171 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:35:55.699262 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:35:55.700946 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:35:55.701081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:35:55.702964 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:35:55.703094 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:35:55.716233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:35:55.718025 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:35:55.718898 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:35:55.719053 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:55.721067 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:35:55.721182 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:35:55.727692 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:35:55.727797 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:35:55.732375 ignition[994]: INFO     : Ignition 2.20.0
Feb 13 15:35:55.732375 ignition[994]: INFO     : Stage: umount
Feb 13 15:35:55.732375 ignition[994]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:35:55.732375 ignition[994]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:35:55.732375 ignition[994]: INFO     : umount: umount passed
Feb 13 15:35:55.732375 ignition[994]: INFO     : Ignition finished successfully
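The umount stage repeats the config lookup seen earlier: nothing in /usr/lib/ignition/base.d and no /usr/lib/ignition/base.platform.d/qemu. A small sketch that re-checks those two directories; note they belong to the initramfs environment, so on a running system they may simply be absent:

#!/usr/bin/env python3
# Sketch: report whether the Ignition base-config directories named in the
# log exist and what they contain. Paths are taken verbatim from the log.
from pathlib import Path

SEARCH_DIRS = [
    Path("/usr/lib/ignition/base.d"),
    Path("/usr/lib/ignition/base.platform.d/qemu"),
]

for d in SEARCH_DIRS:
    if d.is_dir():
        entries = sorted(p.name for p in d.iterdir())
        print(f"{d}: {entries or 'empty'}")
    else:
        print(f"{d}: not present")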
Feb 13 15:35:55.733118 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:35:55.733275 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:35:55.735310 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:35:55.735730 systemd[1]: Stopped target network.target - Network.
Feb 13 15:35:55.737003 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:35:55.737088 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:35:55.739132 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:35:55.739184 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:35:55.740884 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:35:55.740932 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:35:55.742634 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:35:55.742682 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:35:55.744753 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:35:55.746563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:55.753111 systemd-networkd[761]: eth0: DHCPv6 lease lost
Feb 13 15:35:55.754837 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:35:55.754970 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:35:55.757484 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:35:55.757542 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:55.770244 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:35:55.771156 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:35:55.771229 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:35:55.773349 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:55.776148 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:35:55.776820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:55.780596 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:35:55.780659 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:55.782599 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:35:55.782652 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:55.784701 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:35:55.784753 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:35:55.787985 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:35:55.789480 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:35:55.790765 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:35:55.790899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:55.792848 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:35:55.792928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:35:55.795941 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:35:55.795996 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:55.797473 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:35:55.797513 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:55.799242 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:35:55.799306 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:35:55.802339 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:35:55.802390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:35:55.805109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:35:55.805163 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:35:55.808282 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:35:55.808334 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:35:55.816176 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:35:55.817427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:35:55.817501 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:55.819532 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 15:35:55.819586 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:55.821699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:35:55.821772 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:55.823976 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:35:55.824040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:55.826278 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:35:55.828061 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:35:55.829881 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:35:55.832806 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:35:55.843914 systemd[1]: Switching root.
Feb 13 15:35:55.869398 systemd-journald[240]: Journal stopped
Feb 13 15:35:56.586604 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:35:56.586661 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:35:56.586674 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:35:56.586687 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:35:56.586696 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:35:56.586706 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:35:56.586715 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:35:56.586724 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:35:56.586782 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:35:56.586798 kernel: audit: type=1403 audit(1739460955.998:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:35:56.586809 systemd[1]: Successfully loaded SELinux policy in 33.425ms.
Feb 13 15:35:56.586830 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.863ms.
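The kernel lines above list the loaded policy's capabilities, and systemd reports the policy load and relabel times. The current enforcement state can be read from selinuxfs; a minimal sketch, assuming selinuxfs is mounted at the conventional /sys/fs/selinux:

#!/usr/bin/env python3
# Sketch: read SELinux enforcement state from selinuxfs
# (0 = permissive, 1 = enforcing). Assumes the usual mount point.
from pathlib import Path

ENFORCE = Path("/sys/fs/selinux/enforce")

if ENFORCE.is_file():
    mode = ENFORCE.read_text().strip()
    print("enforcing" if mode == "1" else "permissive")
else:
    print("selinuxfs not mounted; SELinux may be disabled")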
Feb 13 15:35:56.586842 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:35:56.586852 systemd[1]: Detected virtualization kvm.
Feb 13 15:35:56.586864 systemd[1]: Detected architecture arm64.
Feb 13 15:35:56.586894 systemd[1]: Detected first boot.
Feb 13 15:35:56.586907 systemd[1]: Initializing machine ID from VM UUID.
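systemd detects a first boot and derives the machine ID from the VM's UUID. A sketch that shows the two side by side, assuming the firmware exposes the UUID through the usual DMI path (it is not present on every platform, and reading it typically requires root):

#!/usr/bin/env python3
# Sketch: show /etc/machine-id next to the SMBIOS product UUID it was
# derived from. The DMI path is an assumption and may be absent or
# unreadable without root privileges.
from pathlib import Path

machine_id = Path("/etc/machine-id").read_text().strip()
uuid_path = Path("/sys/class/dmi/id/product_uuid")
try:
    product_uuid = uuid_path.read_text().strip()
except OSError:
    product_uuid = "unavailable"

print(f"machine-id   : {machine_id}")
print(f"product UUID : {product_uuid}")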
Feb 13 15:35:56.586922 zram_generator::config[1039]: No configuration found.
Feb 13 15:35:56.586933 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:35:56.586943 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:35:56.586953 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:35:56.586963 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:35:56.586975 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:35:56.586990 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:35:56.587000 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:35:56.587011 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:35:56.587031 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:35:56.587042 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:35:56.587052 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:35:56.587062 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:35:56.587075 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:35:56.587085 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:35:56.587096 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:35:56.587106 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:35:56.587116 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:35:56.587128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:35:56.587139 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:35:56.587149 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:35:56.587158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:35:56.587170 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:35:56.587180 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:35:56.587190 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:35:56.587201 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:35:56.587211 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:35:56.587221 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:35:56.587231 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:35:56.587241 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:35:56.587253 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:35:56.587263 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:35:56.587273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:35:56.587284 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:35:56.587294 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:35:56.587304 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:35:56.587314 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:35:56.587324 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:35:56.587334 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:35:56.587346 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:35:56.587357 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:35:56.587367 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:35:56.587377 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:35:56.587386 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:35:56.587397 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:35:56.587415 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:35:56.587426 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:35:56.587435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:35:56.587448 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:35:56.587458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:35:56.587468 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:35:56.587478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:35:56.587489 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:35:56.587499 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:35:56.587509 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:35:56.587519 kernel: fuse: init (API version 7.39)
Feb 13 15:35:56.587530 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:35:56.587540 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:35:56.587550 kernel: loop: module loaded
Feb 13 15:35:56.587559 kernel: ACPI: bus type drm_connector registered
Feb 13 15:35:56.587569 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:35:56.587579 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:35:56.587590 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:35:56.587600 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:35:56.587610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:35:56.587622 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:35:56.587652 systemd-journald[1113]: Collecting audit messages is disabled.
Feb 13 15:35:56.587673 systemd[1]: Stopped verity-setup.service.
Feb 13 15:35:56.587684 systemd-journald[1113]: Journal started
Feb 13 15:35:56.587709 systemd-journald[1113]: Runtime Journal (/run/log/journal/1dc2cefa35174ba4af1068f00c48fdb3) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:35:56.383958 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:35:56.397973 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:35:56.398344 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:35:56.589846 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:35:56.590494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:35:56.591695 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:35:56.592879 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:35:56.593988 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:35:56.595174 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:35:56.596363 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:35:56.599041 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:35:56.600440 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:35:56.601911 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:35:56.602065 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:35:56.603452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:35:56.603584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:35:56.605004 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:35:56.605181 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:35:56.606436 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:35:56.606570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:35:56.607986 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:35:56.608126 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:35:56.609497 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:35:56.609618 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:35:56.610937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:35:56.613099 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:35:56.614646 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:35:56.626185 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:35:56.639145 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:35:56.641358 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:35:56.642479 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:35:56.642516 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:35:56.644458 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:35:56.646670 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:35:56.648844 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:35:56.650006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:35:56.651388 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:35:56.653352 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:35:56.654602 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:35:56.658194 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:35:56.660545 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:35:56.663108 systemd-journald[1113]: Time spent on flushing to /var/log/journal/1dc2cefa35174ba4af1068f00c48fdb3 is 19.855ms for 840 entries.
Feb 13 15:35:56.663108 systemd-journald[1113]: System Journal (/var/log/journal/1dc2cefa35174ba4af1068f00c48fdb3) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:35:56.692515 systemd-journald[1113]: Received client request to flush runtime journal.
Feb 13 15:35:56.664207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:35:56.666552 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:35:56.668538 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:35:56.673044 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:35:56.674598 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:35:56.675907 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:35:56.677451 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:35:56.687077 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:35:56.690951 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:35:56.703303 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:35:56.704048 kernel: loop0: detected capacity change from 0 to 194096
Feb 13 15:35:56.706292 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:35:56.710687 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
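Up to this point the journal lived in /run (the Runtime Journal reported earlier); the flush service moves it to the persistent /var/log/journal location whose size limits are printed a few lines above. Current usage can be queried with journalctl; a small sketch wrapping that call:

#!/usr/bin/env python3
# Sketch: report journal disk usage via "journalctl --disk-usage".
import subprocess

result = subprocess.run(
    ["journalctl", "--disk-usage"],
    capture_output=True,
    text=True,
    check=False,
)
print(result.stdout.strip() or result.stderr.strip())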
Feb 13 15:35:56.712635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:35:56.719049 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:35:56.720550 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 15:35:56.720566 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 15:35:56.725275 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:35:56.725917 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:35:56.729503 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:35:56.732421 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:35:56.740241 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:35:56.747086 kernel: loop1: detected capacity change from 0 to 116784
Feb 13 15:35:56.768857 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:35:56.781274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:35:56.789175 kernel: loop2: detected capacity change from 0 to 113552
Feb 13 15:35:56.794076 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 15:35:56.794094 systemd-tmpfiles[1176]: ACLs are not supported, ignoring.
Feb 13 15:35:56.800699 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:35:56.826088 kernel: loop3: detected capacity change from 0 to 194096
Feb 13 15:35:56.834055 kernel: loop4: detected capacity change from 0 to 116784
Feb 13 15:35:56.840086 kernel: loop5: detected capacity change from 0 to 113552
Feb 13 15:35:56.844934 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:35:56.845478 (sd-merge)[1180]: Merged extensions into '/usr'.
Feb 13 15:35:56.851285 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:35:56.851301 systemd[1]: Reloading...
Feb 13 15:35:56.904040 zram_generator::config[1205]: No configuration found.
Feb 13 15:35:56.944418 ldconfig[1145]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:35:56.997651 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:35:57.033573 systemd[1]: Reloading finished in 181 ms.
Feb 13 15:35:57.066444 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:35:57.067939 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
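The sd-merge lines show three system extensions ('containerd-flatcar', 'docker-flatcar', 'kubernetes') being overlaid onto /usr, followed by a manager reload. The merge state can be listed with systemd-sysext; a sketch, assuming the tool is on PATH and that the listed directories are among its search paths (see systemd-sysext(8) for the authoritative list on your version):

#!/usr/bin/env python3
# Sketch: show which system extensions are currently merged, then list the
# usual extension image directories (an assumption; consult systemd-sysext(8)).
import subprocess
from pathlib import Path

subprocess.run(["systemd-sysext", "status"], check=False)

for d in (Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")):
    if d.is_dir():
        names = sorted(p.name for p in d.iterdir())
        print(f"{d}: {names or 'empty'}")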
Feb 13 15:35:57.089735 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:35:57.092369 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:35:57.102451 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:35:57.102467 systemd[1]: Reloading...
Feb 13 15:35:57.117059 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:35:57.117262 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:35:57.117884 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:35:57.118098 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 15:35:57.118145 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Feb 13 15:35:57.121636 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:35:57.121748 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 15:35:57.130347 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:35:57.130496 systemd-tmpfiles[1242]: Skipping /boot
Feb 13 15:35:57.149050 zram_generator::config[1270]: No configuration found.
Feb 13 15:35:57.232759 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:35:57.268307 systemd[1]: Reloading finished in 165 ms.
Feb 13 15:35:57.283319 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:35:57.294678 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
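The tmpfiles warnings above ("Duplicate line for path ...", "Skipping /boot") come from merging every tmpfiles.d fragment into a single configuration. That merged view can be printed without applying anything; a sketch using the --cat-config mode:

#!/usr/bin/env python3
# Sketch: print the merged tmpfiles.d configuration that produced the
# duplicate-line warnings above, without creating or modifying anything.
import subprocess

subprocess.run(["systemd-tmpfiles", "--cat-config"], check=False)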
Feb 13 15:35:57.300684 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:35:57.304091 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:35:57.306562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:35:57.310180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:35:57.313253 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:35:57.316216 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:35:57.324882 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:35:57.327549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:35:57.330291 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:35:57.333262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:35:57.335676 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:35:57.336827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:35:57.339916 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:35:57.341118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:35:57.341630 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:35:57.345557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:35:57.345697 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:35:57.352473 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:35:57.359094 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:35:57.362039 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:35:57.362250 systemd-udevd[1312]: Using default interface naming scheme 'v255'.
Feb 13 15:35:57.364068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:35:57.364284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:35:57.366137 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:35:57.366275 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:35:57.370611 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:35:57.377296 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:35:57.378865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:35:57.385808 augenrules[1344]: No rules
Feb 13 15:35:57.386278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:35:57.389218 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:35:57.390710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:35:57.390761 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:35:57.398156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:35:57.399281 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:35:57.399573 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:35:57.400755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:35:57.402233 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:35:57.402389 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:35:57.403692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:35:57.403833 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:35:57.405658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:35:57.405779 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:35:57.422184 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:35:57.423808 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:35:57.437044 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:35:57.467165 systemd-resolved[1309]: Positive Trust Anchors:
Feb 13 15:35:57.467181 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:35:57.467212 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:35:57.473601 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1358)
Feb 13 15:35:57.485931 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Feb 13 15:35:57.497933 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:35:57.499527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:35:57.500368 systemd-networkd[1378]: lo: Link UP
Feb 13 15:35:57.500608 systemd-networkd[1378]: lo: Gained carrier
Feb 13 15:35:57.501279 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:35:57.501690 systemd-networkd[1378]: Enumeration completed
Feb 13 15:35:57.502561 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:57.502636 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:35:57.502811 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:35:57.503541 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:57.503641 systemd-networkd[1378]: eth0: Link UP
Feb 13 15:35:57.503682 systemd-networkd[1378]: eth0: Gained carrier
Feb 13 15:35:57.503743 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:35:57.504414 systemd[1]: Reached target network.target - Network.
Feb 13 15:35:57.505550 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:35:57.514245 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:35:57.517132 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:35:57.517721 systemd-timesyncd[1357]: Network configuration changed, trying to establish connection.
Feb 13 15:35:57.519538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:35:57.519827 systemd-timesyncd[1357]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:35:57.519894 systemd-timesyncd[1357]: Initial clock synchronization to Thu 2025-02-13 15:35:57.469596 UTC.
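networkd has acquired 10.0.0.105/16 over DHCPv4 and timesyncd has synchronized against 10.0.0.1. Both states are queryable at runtime; a sketch that shells out to networkctl and timedatectl, which ship with systemd:

#!/usr/bin/env python3
# Sketch: show current link and time-sync state matching the log lines
# above (DHCPv4 lease on eth0, time server 10.0.0.1).
import subprocess

subprocess.run(["networkctl", "status", "eth0"], check=False)
subprocess.run(["timedatectl"], check=False)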
Feb 13 15:35:57.522329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:35:57.540066 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:35:57.573786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:35:57.579377 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:35:57.582493 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:35:57.599400 lvm[1397]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:35:57.611058 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:35:57.634959 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:35:57.637688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:35:57.638792 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:35:57.639917 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:35:57.641132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:35:57.642389 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:35:57.643514 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:35:57.644713 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:35:57.645933 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:35:57.645968 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:35:57.646846 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:35:57.648542 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:35:57.650736 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:35:57.658787 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:35:57.660873 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:35:57.662357 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:35:57.663508 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:35:57.664372 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:35:57.665220 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:35:57.665249 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:35:57.666075 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:35:57.667884 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:35:57.671121 lvm[1404]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:35:57.670756 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:35:57.673261 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:35:57.674296 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:35:57.677413 jq[1407]: false
Feb 13 15:35:57.676372 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:35:57.681229 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:35:57.687107 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:35:57.691258 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:35:57.692835 dbus-daemon[1406]: [system] SELinux support is enabled
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found loop3
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found loop4
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found loop5
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda1
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda2
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda3
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found usr
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda4
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda6
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda7
Feb 13 15:35:57.693300 extend-filesystems[1408]: Found vda9
Feb 13 15:35:57.693300 extend-filesystems[1408]: Checking size of /dev/vda9
Feb 13 15:35:57.718356 extend-filesystems[1408]: Resized partition /dev/vda9
Feb 13 15:35:57.694670 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:35:57.720049 extend-filesystems[1429]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:35:57.695102 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:35:57.722302 jq[1422]: true
Feb 13 15:35:57.729103 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:35:57.729131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1366)
Feb 13 15:35:57.695692 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:35:57.700155 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:35:57.702256 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:35:57.707738 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:35:57.712009 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:35:57.712175 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:35:57.714592 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:35:57.714749 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:35:57.716228 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:35:57.716372 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:35:57.737957 jq[1430]: true
Feb 13 15:35:57.746762 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:35:57.757767 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:35:57.767285 update_engine[1421]: I20250213 15:35:57.767135  1421 main.cc:92] Flatcar Update Engine starting
Feb 13 15:35:57.770207 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:35:57.770207 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:35:57.770207 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:35:57.777927 extend-filesystems[1408]: Resized filesystem in /dev/vda9
Feb 13 15:35:57.779687 update_engine[1421]: I20250213 15:35:57.773345  1421 update_check_scheduler.cc:74] Next update check in 7m20s
Feb 13 15:35:57.772531 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:35:57.772729 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
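extend-filesystems grew the root ext4 filesystem on /dev/vda9 in place (resize2fs 1.47.1, 553472 to 1864699 4k blocks) while it was mounted at /. The same on-line grow can be reproduced by hand; a sketch of the usual call, with the device name taken from the log (run as root; if the filesystem already fills the partition, resize2fs reports there is nothing to do):

#!/usr/bin/env python3
# Sketch: grow a mounted ext4 filesystem to fill its partition, as the
# extend-filesystems service did above. Device name comes from the log;
# adjust it for your system. Requires root; resize2fs with no size
# argument expands the filesystem to the full partition size.
import subprocess
import sys

DEVICE = "/dev/vda9"

proc = subprocess.run(["resize2fs", DEVICE], check=False)
sys.exit(proc.returncode)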
Feb 13 15:35:57.777906 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:35:57.779180 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:35:57.779202 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:35:57.782234 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:35:57.782258 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:35:57.792191 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:35:57.796003 systemd-logind[1415]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:35:57.796656 systemd-logind[1415]: New seat seat0.
Feb 13 15:35:57.797886 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:35:57.816759 bash[1458]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:35:57.818689 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:35:57.820804 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:35:57.841033 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
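update_engine schedules its first check (7m20s out) and locksmithd starts with the 'reboot' strategy, i.e. it will coordinate a reboot once a new image is applied. On Flatcar-derived images the engine's state can be polled from userspace; a sketch, assuming the update_engine_client utility is installed as it normally is there:

#!/usr/bin/env python3
# Sketch: query update_engine for its current operation, matching the
# UPDATE_STATUS_IDLE state locksmithd logs above. update_engine_client is
# assumed to be present (it ships with Flatcar).
import subprocess

subprocess.run(["update_engine_client", "-status"], check=False)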
Feb 13 15:35:57.935329 containerd[1433]: time="2025-02-13T15:35:57.935008640Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:35:57.961524 containerd[1433]: time="2025-02-13T15:35:57.961472640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.962836 containerd[1433]: time="2025-02-13T15:35:57.962787200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:57.962836 containerd[1433]: time="2025-02-13T15:35:57.962820320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:35:57.962836 containerd[1433]: time="2025-02-13T15:35:57.962837160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:35:57.963023 containerd[1433]: time="2025-02-13T15:35:57.962987080Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:35:57.963023 containerd[1433]: time="2025-02-13T15:35:57.963010000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963095 containerd[1433]: time="2025-02-13T15:35:57.963079800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963114 containerd[1433]: time="2025-02-13T15:35:57.963095800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963276 containerd[1433]: time="2025-02-13T15:35:57.963252440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963276 containerd[1433]: time="2025-02-13T15:35:57.963271560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963309 containerd[1433]: time="2025-02-13T15:35:57.963286280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963309 containerd[1433]: time="2025-02-13T15:35:57.963296080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963374 containerd[1433]: time="2025-02-13T15:35:57.963363360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963583 containerd[1433]: time="2025-02-13T15:35:57.963560560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963676 containerd[1433]: time="2025-02-13T15:35:57.963661240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:35:57.963700 containerd[1433]: time="2025-02-13T15:35:57.963677960Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:35:57.963763 containerd[1433]: time="2025-02-13T15:35:57.963751080Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:35:57.963804 containerd[1433]: time="2025-02-13T15:35:57.963794040Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:35:57.967551 containerd[1433]: time="2025-02-13T15:35:57.967519680Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:35:57.967595 containerd[1433]: time="2025-02-13T15:35:57.967569240Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:35:57.967595 containerd[1433]: time="2025-02-13T15:35:57.967583680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:35:57.967642 containerd[1433]: time="2025-02-13T15:35:57.967599640Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:35:57.967642 containerd[1433]: time="2025-02-13T15:35:57.967615440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:35:57.967769 containerd[1433]: time="2025-02-13T15:35:57.967741080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:35:57.967978 containerd[1433]: time="2025-02-13T15:35:57.967958960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:35:57.968083 containerd[1433]: time="2025-02-13T15:35:57.968064240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:35:57.968104 containerd[1433]: time="2025-02-13T15:35:57.968085440Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:35:57.968128 containerd[1433]: time="2025-02-13T15:35:57.968101480Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:35:57.968128 containerd[1433]: time="2025-02-13T15:35:57.968115040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968164 containerd[1433]: time="2025-02-13T15:35:57.968127240Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968164 containerd[1433]: time="2025-02-13T15:35:57.968138680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968164 containerd[1433]: time="2025-02-13T15:35:57.968152040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968212 containerd[1433]: time="2025-02-13T15:35:57.968165200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968212 containerd[1433]: time="2025-02-13T15:35:57.968177600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968212 containerd[1433]: time="2025-02-13T15:35:57.968190160Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968212 containerd[1433]: time="2025-02-13T15:35:57.968201920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:35:57.968270 containerd[1433]: time="2025-02-13T15:35:57.968222240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968270 containerd[1433]: time="2025-02-13T15:35:57.968236200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968270 containerd[1433]: time="2025-02-13T15:35:57.968247680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968270 containerd[1433]: time="2025-02-13T15:35:57.968259160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968270200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968283040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968293920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968305760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968317400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968336 containerd[1433]: time="2025-02-13T15:35:57.968331520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968341880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968353120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968365120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968383640Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968412160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968426160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968442 containerd[1433]: time="2025-02-13T15:35:57.968436720Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:35:57.968619 containerd[1433]: time="2025-02-13T15:35:57.968596640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:35:57.968644 containerd[1433]: time="2025-02-13T15:35:57.968617320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:35:57.968644 containerd[1433]: time="2025-02-13T15:35:57.968628920Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:35:57.968644 containerd[1433]: time="2025-02-13T15:35:57.968640320Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:35:57.968692 containerd[1433]: time="2025-02-13T15:35:57.968648960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968692 containerd[1433]: time="2025-02-13T15:35:57.968660640Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:35:57.968692 containerd[1433]: time="2025-02-13T15:35:57.968670960Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:35:57.968692 containerd[1433]: time="2025-02-13T15:35:57.968680240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:35:57.968991 containerd[1433]: time="2025-02-13T15:35:57.968942440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
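
The PluginConfig dump above is the parsed form of the cri section of containerd's configuration (normally /etc/containerd/config.toml). A minimal sketch that would produce the values visible in this line, assuming the stock layout for containerd 1.7 and reproducing only the fields shown in the dump:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

SystemdCgroup = true matches the systemd cgroup driver the kubelet reports later in this log.
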
Feb 13 15:35:57.968991 containerd[1433]: time="2025-02-13T15:35:57.968991440Z" level=info msg="Connect containerd service"
Feb 13 15:35:57.969113 containerd[1433]: time="2025-02-13T15:35:57.969031000Z" level=info msg="using legacy CRI server"
Feb 13 15:35:57.969113 containerd[1433]: time="2025-02-13T15:35:57.969038000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:35:57.969266 containerd[1433]: time="2025-02-13T15:35:57.969253440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:35:57.972417 containerd[1433]: time="2025-02-13T15:35:57.972373840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
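
This error is expected on a node whose CNI provider has not started yet: the CRI plugin scans /etc/cni/net.d (NetworkPluginConfDir above) and finds nothing. For reference, a minimal conflist that would satisfy the check looks roughly like the sketch below (file name, network name and subnet are hypothetical; on this node Cilium is expected to drop its own configuration later, see the "wait for other system components to drop the config" message further down), e.g. /etc/cni/net.d/10-containerd-net.conflist:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "ranges": [[{ "subnet": "10.88.0.0/16" }]] } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
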
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972683400Z" level=info msg="Start subscribing containerd event"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972730520Z" level=info msg="Start recovering state"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972788680Z" level=info msg="Start event monitor"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972809080Z" level=info msg="Start snapshots syncer"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972820480Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972827560Z" level=info msg="Start streaming server"
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972920640Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:35:57.973055 containerd[1433]: time="2025-02-13T15:35:57.972956320Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:35:57.973088 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:35:57.974130 containerd[1433]: time="2025-02-13T15:35:57.974088800Z" level=info msg="containerd successfully booted in 0.040242s"
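
With the runtime up and serving on /run/containerd/containerd.sock, its CRI side can be inspected from a shell. The commands below assume crictl and ctr are installed on the node and use the endpoint reported above:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
    ctr --address /run/containerd/containerd.sock version
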
Feb 13 15:35:58.344711 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:35:58.362178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
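
The sshd-keygen step above regenerates any host keys missing under /etc/ssh; the "generating new host keys: RSA ECDSA ED25519" message is the output you would get from running the equivalent command by hand:

    ssh-keygen -A
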
Feb 13 15:35:58.375268 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:35:58.380205 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:35:58.380384 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:35:58.382904 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:35:58.395752 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:35:58.398642 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:35:58.400661 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:35:58.401984 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:35:58.876185 systemd-networkd[1378]: eth0: Gained IPv6LL
Feb 13 15:35:58.878932 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:35:58.880639 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:35:58.898298 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:35:58.900814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:35:58.902851 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:35:58.917032 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:35:58.917252 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:35:58.918907 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:35:58.924728 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:35:59.386160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:35:59.387836 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:35:59.389622 (kubelet)[1512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:35:59.389837 systemd[1]: Startup finished in 592ms (kernel) + 4.268s (initrd) + 3.427s (userspace) = 8.288s.
Feb 13 15:35:59.403655 agetty[1488]: failed to open credentials directory
Feb 13 15:35:59.403705 agetty[1489]: failed to open credentials directory
Feb 13 15:35:59.861880 kubelet[1512]: E0213 15:35:59.861772    1512 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:35:59.864456 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:35:59.864602 systemd[1]: kubelet.service: Failed with result 'exit-code'.
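
This first kubelet start is expected to fail on a fresh node: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, and the unit only succeeds once that file exists (it comes back up at 15:36:07 below, after the node has been provisioned). For orientation, the missing file is a KubeletConfiguration document along these lines; the values shown are the usual kubeadm defaults and are only an assumption about what provisioning will write:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10
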
Feb 13 15:36:05.102476 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:36:05.103534 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:49734.service - OpenSSH per-connection server daemon (10.0.0.1:49734).
Feb 13 15:36:05.158573 sshd[1526]: Accepted publickey for core from 10.0.0.1 port 49734 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.160314 sshd-session[1526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.171772 systemd-logind[1415]: New session 1 of user core.
Feb 13 15:36:05.172760 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:36:05.183235 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:36:05.192184 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:36:05.195467 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:36:05.200948 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:36:05.272523 systemd[1530]: Queued start job for default target default.target.
Feb 13 15:36:05.282922 systemd[1530]: Created slice app.slice - User Application Slice.
Feb 13 15:36:05.282971 systemd[1530]: Reached target paths.target - Paths.
Feb 13 15:36:05.282984 systemd[1530]: Reached target timers.target - Timers.
Feb 13 15:36:05.284202 systemd[1530]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:36:05.293431 systemd[1530]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:36:05.293480 systemd[1530]: Reached target sockets.target - Sockets.
Feb 13 15:36:05.293492 systemd[1530]: Reached target basic.target - Basic System.
Feb 13 15:36:05.293524 systemd[1530]: Reached target default.target - Main User Target.
Feb 13 15:36:05.293548 systemd[1530]: Startup finished in 87ms.
Feb 13 15:36:05.293774 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:36:05.295221 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:36:05.352514 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:49736.service - OpenSSH per-connection server daemon (10.0.0.1:49736).
Feb 13 15:36:05.416306 sshd[1541]: Accepted publickey for core from 10.0.0.1 port 49736 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.417425 sshd-session[1541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.421065 systemd-logind[1415]: New session 2 of user core.
Feb 13 15:36:05.430156 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:36:05.480396 sshd[1543]: Connection closed by 10.0.0.1 port 49736
Feb 13 15:36:05.480839 sshd-session[1541]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:05.489973 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:49736.service: Deactivated successfully.
Feb 13 15:36:05.491202 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:36:05.493080 systemd-logind[1415]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:36:05.493469 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:49752.service - OpenSSH per-connection server daemon (10.0.0.1:49752).
Feb 13 15:36:05.494369 systemd-logind[1415]: Removed session 2.
Feb 13 15:36:05.532798 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 49752 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.533835 sshd-session[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.537603 systemd-logind[1415]: New session 3 of user core.
Feb 13 15:36:05.546138 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:36:05.594657 sshd[1550]: Connection closed by 10.0.0.1 port 49752
Feb 13 15:36:05.595231 sshd-session[1548]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:05.614242 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:49752.service: Deactivated successfully.
Feb 13 15:36:05.615492 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:36:05.617213 systemd-logind[1415]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:36:05.617750 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:49768.service - OpenSSH per-connection server daemon (10.0.0.1:49768).
Feb 13 15:36:05.618464 systemd-logind[1415]: Removed session 3.
Feb 13 15:36:05.657106 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 49768 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.658195 sshd-session[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.661925 systemd-logind[1415]: New session 4 of user core.
Feb 13 15:36:05.671200 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:36:05.721961 sshd[1557]: Connection closed by 10.0.0.1 port 49768
Feb 13 15:36:05.722428 sshd-session[1555]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:05.734244 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:49768.service: Deactivated successfully.
Feb 13 15:36:05.735593 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:36:05.738048 systemd-logind[1415]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:36:05.739124 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:49780.service - OpenSSH per-connection server daemon (10.0.0.1:49780).
Feb 13 15:36:05.740354 systemd-logind[1415]: Removed session 4.
Feb 13 15:36:05.779425 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 49780 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.780627 sshd-session[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.784509 systemd-logind[1415]: New session 5 of user core.
Feb 13 15:36:05.794149 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:36:05.852347 sudo[1565]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:36:05.852634 sudo[1565]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:36:05.863860 sudo[1565]: pam_unix(sudo:session): session closed for user root
Feb 13 15:36:05.868877 sshd[1564]: Connection closed by 10.0.0.1 port 49780
Feb 13 15:36:05.869384 sshd-session[1562]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:05.887365 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:49780.service: Deactivated successfully.
Feb 13 15:36:05.888773 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:36:05.891121 systemd-logind[1415]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:36:05.892352 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:49788.service - OpenSSH per-connection server daemon (10.0.0.1:49788).
Feb 13 15:36:05.893055 systemd-logind[1415]: Removed session 5.
Feb 13 15:36:05.932003 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 49788 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:05.933267 sshd-session[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:05.937049 systemd-logind[1415]: New session 6 of user core.
Feb 13 15:36:05.949190 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:36:06.000748 sudo[1574]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:36:06.001041 sudo[1574]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:36:06.004001 sudo[1574]: pam_unix(sudo:session): session closed for user root
Feb 13 15:36:06.008443 sudo[1573]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:36:06.008695 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:36:06.024297 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:36:06.046479 augenrules[1596]: No rules
Feb 13 15:36:06.047570 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:36:06.049053 systemd[1]: Finished audit-rules.service - Load Audit Rules.
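
The two sudo invocations above deleted the shipped audit rule files and restarted audit-rules; augenrules assembles /etc/audit/rules.d/*.rules into the kernel ruleset, so with that directory emptied it correctly reports "No rules". With standard auditd tooling on the node, the resulting state can be confirmed with:

    augenrules --check
    auditctl -l
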
Feb 13 15:36:06.050035 sudo[1573]: pam_unix(sudo:session): session closed for user root
Feb 13 15:36:06.051751 sshd[1572]: Connection closed by 10.0.0.1 port 49788
Feb 13 15:36:06.051646 sshd-session[1570]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:06.061432 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:49788.service: Deactivated successfully.
Feb 13 15:36:06.062849 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:36:06.063994 systemd-logind[1415]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:36:06.065155 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:49794.service - OpenSSH per-connection server daemon (10.0.0.1:49794).
Feb 13 15:36:06.065865 systemd-logind[1415]: Removed session 6.
Feb 13 15:36:06.106675 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 49794 ssh2: RSA SHA256:CeGxNR6ysFRdtgdjec6agQcEKsB5ZRoCP9SurKj0GwY
Feb 13 15:36:06.107681 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:36:06.111477 systemd-logind[1415]: New session 7 of user core.
Feb 13 15:36:06.117226 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:36:06.167256 sudo[1607]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:36:06.167823 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:36:06.191281 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:36:06.205400 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:36:06.207059 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:36:06.710325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:36:06.718248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:36:06.734222 systemd[1]: Reloading requested from client PID 1656 ('systemctl') (unit session-7.scope)...
Feb 13 15:36:06.734237 systemd[1]: Reloading...
Feb 13 15:36:06.801035 zram_generator::config[1694]: No configuration found.
Feb 13 15:36:06.979314 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:36:07.030856 systemd[1]: Reloading finished in 296 ms.
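
The ListenStream warning during this reload is systemd rewriting a legacy /var/run path on the fly; the unit keeps working, but the warning can be removed with a small drop-in for docker.socket (the path below is the conventional override location; the empty ListenStream= line clears the inherited list before re-adding the corrected path):

    # /etc/systemd/system/docker.socket.d/override.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

followed by systemctl daemon-reload.
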
Feb 13 15:36:07.076402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:36:07.079431 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:36:07.079610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:36:07.081037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:36:07.170205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:36:07.174152 (kubelet)[1741]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:36:07.209233 kubelet[1741]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:36:07.209233 kubelet[1741]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:36:07.209233 kubelet[1741]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:36:07.210182 kubelet[1741]: I0213 15:36:07.210134    1741 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
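
The deprecation warnings above are informational: kubelet v1.30 still accepts these flags, but they are meant to move into the config file it now loads. For example, --container-runtime-endpoint corresponds to the containerRuntimeEndpoint field of KubeletConfiguration:

    # fragment of /var/lib/kubelet/config.yaml
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock

The other flags have analogous config fields; --pod-infra-container-image is the exception called out in the line just above, where the sandbox image is expected to come from the CRI runtime's own configuration.
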
Feb 13 15:36:08.016811 kubelet[1741]: I0213 15:36:08.016679    1741 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 15:36:08.016811 kubelet[1741]: I0213 15:36:08.016708    1741 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:36:08.016955 kubelet[1741]: I0213 15:36:08.016893    1741 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 15:36:08.067667 kubelet[1741]: I0213 15:36:08.067541    1741 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:36:08.077749 kubelet[1741]: I0213 15:36:08.077697    1741 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:36:08.078933 kubelet[1741]: I0213 15:36:08.078859    1741 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:36:08.079132 kubelet[1741]: I0213 15:36:08.078917    1741 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.105","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 15:36:08.079242 kubelet[1741]: I0213 15:36:08.079206    1741 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:36:08.079242 kubelet[1741]: I0213 15:36:08.079215    1741 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 15:36:08.079865 kubelet[1741]: I0213 15:36:08.079837    1741 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:36:08.080760 kubelet[1741]: I0213 15:36:08.080731    1741 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 15:36:08.080760 kubelet[1741]: I0213 15:36:08.080757    1741 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:36:08.081228 kubelet[1741]: I0213 15:36:08.080902    1741 kubelet.go:312] "Adding apiserver pod source"
Feb 13 15:36:08.081228 kubelet[1741]: I0213 15:36:08.080980    1741 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:36:08.081228 kubelet[1741]: E0213 15:36:08.081119    1741 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:08.081322 kubelet[1741]: E0213 15:36:08.081234    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:08.082310 kubelet[1741]: I0213 15:36:08.082266    1741 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:36:08.082620 kubelet[1741]: I0213 15:36:08.082609    1741 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:36:08.082718 kubelet[1741]: W0213 15:36:08.082706    1741 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:36:08.083513 kubelet[1741]: I0213 15:36:08.083478    1741 server.go:1264] "Started kubelet"
Feb 13 15:36:08.084716 kubelet[1741]: I0213 15:36:08.084567    1741 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:36:08.085028 kubelet[1741]: I0213 15:36:08.084509    1741 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:36:08.085243 kubelet[1741]: I0213 15:36:08.085227    1741 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:36:08.085462 kubelet[1741]: I0213 15:36:08.084939    1741 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:36:08.086642 kubelet[1741]: I0213 15:36:08.086622    1741 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 15:36:08.091126 kubelet[1741]: I0213 15:36:08.091098    1741 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 15:36:08.093116 kubelet[1741]: I0213 15:36:08.091221    1741 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 15:36:08.093116 kubelet[1741]: I0213 15:36:08.091417    1741 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:36:08.093116 kubelet[1741]: I0213 15:36:08.092527    1741 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:36:08.093116 kubelet[1741]: I0213 15:36:08.092628    1741 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:36:08.093875 kubelet[1741]: I0213 15:36:08.093698    1741 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:36:08.093875 kubelet[1741]: E0213 15:36:08.093737    1741 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:36:08.099068 kubelet[1741]: E0213 15:36:08.098707    1741 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.105.1823ce88cf456a31  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.105,UID:10.0.0.105,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.105,},FirstTimestamp:2025-02-13 15:36:08.083450417 +0000 UTC m=+0.906207799,LastTimestamp:2025-02-13 15:36:08.083450417 +0000 UTC m=+0.906207799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.105,}"
Feb 13 15:36:08.099190 kubelet[1741]: E0213 15:36:08.099078    1741 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.105\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 15:36:08.099368 kubelet[1741]: W0213 15:36:08.099326    1741 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:36:08.099368 kubelet[1741]: E0213 15:36:08.099364    1741 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:36:08.099430 kubelet[1741]: W0213 15:36:08.099416    1741 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.105" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:36:08.099430 kubelet[1741]: E0213 15:36:08.099427    1741 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.105" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
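
The "system:anonymous" forbidden errors in this block are transient: client rotation is on and the kubelet is still performing its TLS bootstrap in the background (see the "will bootstrap in background" line above), so its first requests go out without a signed client certificate. They stop once the bootstrap completes, which the "Certificate rotation detected" message further down marks. Assuming the standard kubelet paths, the resulting client certificate can then be inspected on the node with:

    openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -subject -enddate
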
Feb 13 15:36:08.105667 kubelet[1741]: I0213 15:36:08.105643    1741 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:36:08.105667 kubelet[1741]: I0213 15:36:08.105657    1741 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:36:08.105667 kubelet[1741]: I0213 15:36:08.105675    1741 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:36:08.166259 kubelet[1741]: I0213 15:36:08.166222    1741 policy_none.go:49] "None policy: Start"
Feb 13 15:36:08.167103 kubelet[1741]: I0213 15:36:08.167078    1741 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:36:08.167103 kubelet[1741]: I0213 15:36:08.167104    1741 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:36:08.172505 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:36:08.185138 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:36:08.188828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:36:08.192621 kubelet[1741]: I0213 15:36:08.192434    1741 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.105"
Feb 13 15:36:08.195729 kubelet[1741]: I0213 15:36:08.195701    1741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:36:08.195896 kubelet[1741]: I0213 15:36:08.195875    1741 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:36:08.196151 kubelet[1741]: I0213 15:36:08.196120    1741 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:36:08.196238 kubelet[1741]: I0213 15:36:08.196226    1741 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:36:08.196645 kubelet[1741]: I0213 15:36:08.196617    1741 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:36:08.196743 kubelet[1741]: I0213 15:36:08.196706    1741 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:36:08.196743 kubelet[1741]: I0213 15:36:08.196729    1741 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 15:36:08.196788 kubelet[1741]: E0213 15:36:08.196768    1741 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:36:08.198505 kubelet[1741]: E0213 15:36:08.198342    1741 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.105\" not found"
Feb 13 15:36:08.199759 kubelet[1741]: I0213 15:36:08.199711    1741 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.105"
Feb 13 15:36:08.202756 kubelet[1741]: I0213 15:36:08.202727    1741 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:36:08.203190 containerd[1433]: time="2025-02-13T15:36:08.203148129Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:36:08.203528 kubelet[1741]: I0213 15:36:08.203340    1741 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
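
The CRI plugin can render a CNI config itself by substituting the pod CIDR into a template, but no template is configured here (NetworkPluginConfTemplate is empty in the config dump above), so containerd only records the new CIDR and waits for Cilium to write its own files into /etc/cni/net.d. The relevant containerd settings, shown with the values from the dump and conf_template deliberately left empty:

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      max_conf_num = 1
      conf_template = ""
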
Feb 13 15:36:08.211296 kubelet[1741]: E0213 15:36:08.211269    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.311702 kubelet[1741]: E0213 15:36:08.311581    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.412126 kubelet[1741]: E0213 15:36:08.412080    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.512617 kubelet[1741]: E0213 15:36:08.512573    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.613353 kubelet[1741]: E0213 15:36:08.613274    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.684669 sudo[1607]: pam_unix(sudo:session): session closed for user root
Feb 13 15:36:08.685950 sshd[1606]: Connection closed by 10.0.0.1 port 49794
Feb 13 15:36:08.686817 sshd-session[1604]: pam_unix(sshd:session): session closed for user core
Feb 13 15:36:08.690023 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:49794.service: Deactivated successfully.
Feb 13 15:36:08.692685 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:36:08.693480 systemd-logind[1415]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:36:08.694576 systemd-logind[1415]: Removed session 7.
Feb 13 15:36:08.714270 kubelet[1741]: E0213 15:36:08.714221    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.814765 kubelet[1741]: E0213 15:36:08.814723    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:08.915293 kubelet[1741]: E0213 15:36:08.915198    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:09.015751 kubelet[1741]: E0213 15:36:09.015706    1741 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.105\" not found"
Feb 13 15:36:09.018875 kubelet[1741]: I0213 15:36:09.018835    1741 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:36:09.019089 kubelet[1741]: W0213 15:36:09.018992    1741 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:36:09.019089 kubelet[1741]: W0213 15:36:09.019052    1741 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:36:09.081561 kubelet[1741]: I0213 15:36:09.081516    1741 apiserver.go:52] "Watching apiserver"
Feb 13 15:36:09.081670 kubelet[1741]: E0213 15:36:09.081563    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:09.089138 kubelet[1741]: I0213 15:36:09.089101    1741 topology_manager.go:215] "Topology Admit Handler" podUID="a67a9995-159f-4453-97f3-afbee008ae12" podNamespace="kube-system" podName="cilium-5kf95"
Feb 13 15:36:09.089275 kubelet[1741]: I0213 15:36:09.089251    1741 topology_manager.go:215] "Topology Admit Handler" podUID="11a69541-3b13-4363-831a-9179660c1881" podNamespace="kube-system" podName="kube-proxy-qwml4"
Feb 13 15:36:09.092282 kubelet[1741]: I0213 15:36:09.092027    1741 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 15:36:09.096748 systemd[1]: Created slice kubepods-burstable-poda67a9995_159f_4453_97f3_afbee008ae12.slice - libcontainer container kubepods-burstable-poda67a9995_159f_4453_97f3_afbee008ae12.slice.
Feb 13 15:36:09.097312 kubelet[1741]: I0213 15:36:09.097286    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-xtables-lock\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097366 kubelet[1741]: I0213 15:36:09.097322    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a67a9995-159f-4453-97f3-afbee008ae12-clustermesh-secrets\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097366 kubelet[1741]: I0213 15:36:09.097359    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-net\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097426 kubelet[1741]: I0213 15:36:09.097380    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11a69541-3b13-4363-831a-9179660c1881-xtables-lock\") pod \"kube-proxy-qwml4\" (UID: \"11a69541-3b13-4363-831a-9179660c1881\") " pod="kube-system/kube-proxy-qwml4"
Feb 13 15:36:09.097426 kubelet[1741]: I0213 15:36:09.097413    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11a69541-3b13-4363-831a-9179660c1881-lib-modules\") pod \"kube-proxy-qwml4\" (UID: \"11a69541-3b13-4363-831a-9179660c1881\") " pod="kube-system/kube-proxy-qwml4"
Feb 13 15:36:09.097466 kubelet[1741]: I0213 15:36:09.097427    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-run\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097466 kubelet[1741]: I0213 15:36:09.097442    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-hostproc\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097466 kubelet[1741]: I0213 15:36:09.097454    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-lib-modules\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097533 kubelet[1741]: I0213 15:36:09.097472    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szw2w\" (UniqueName: \"kubernetes.io/projected/11a69541-3b13-4363-831a-9179660c1881-kube-api-access-szw2w\") pod \"kube-proxy-qwml4\" (UID: \"11a69541-3b13-4363-831a-9179660c1881\") " pod="kube-system/kube-proxy-qwml4"
Feb 13 15:36:09.097533 kubelet[1741]: I0213 15:36:09.097493    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-bpf-maps\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097533 kubelet[1741]: I0213 15:36:09.097507    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-cgroup\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097601 kubelet[1741]: I0213 15:36:09.097540    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11a69541-3b13-4363-831a-9179660c1881-kube-proxy\") pod \"kube-proxy-qwml4\" (UID: \"11a69541-3b13-4363-831a-9179660c1881\") " pod="kube-system/kube-proxy-qwml4"
Feb 13 15:36:09.097601 kubelet[1741]: I0213 15:36:09.097556    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-hubble-tls\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097601 kubelet[1741]: I0213 15:36:09.097571    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cni-path\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097800 kubelet[1741]: I0213 15:36:09.097681    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a9995-159f-4453-97f3-afbee008ae12-cilium-config-path\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097836 kubelet[1741]: I0213 15:36:09.097809    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-kernel\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097856 kubelet[1741]: I0213 15:36:09.097830    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-etc-cni-netd\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.097856 kubelet[1741]: I0213 15:36:09.097851    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvw9f\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-kube-api-access-pvw9f\") pod \"cilium-5kf95\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") " pod="kube-system/cilium-5kf95"
Feb 13 15:36:09.112126 systemd[1]: Created slice kubepods-besteffort-pod11a69541_3b13_4363_831a_9179660c1881.slice - libcontainer container kubepods-besteffort-pod11a69541_3b13_4363_831a_9179660c1881.slice.
Feb 13 15:36:09.410585 kubelet[1741]: E0213 15:36:09.410555    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:09.411368 containerd[1433]: time="2025-02-13T15:36:09.411334273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kf95,Uid:a67a9995-159f-4453-97f3-afbee008ae12,Namespace:kube-system,Attempt:0,}"
Feb 13 15:36:09.424345 kubelet[1741]: E0213 15:36:09.424317    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:09.424674 containerd[1433]: time="2025-02-13T15:36:09.424639645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwml4,Uid:11a69541-3b13-4363-831a-9179660c1881,Namespace:kube-system,Attempt:0,}"
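
The recurring "Nameserver limits exceeded" warnings mean the node's /etc/resolv.conf lists more than the three nameservers Linux resolvers use, so the kubelet truncates the list inherited by pod sandboxes to 1.1.1.1, 1.0.0.1 and 8.8.8.8. One way to quiet this, if desired, is to point the kubelet at a trimmed copy via the resolvConf field of KubeletConfiguration (the file path below is hypothetical):

    # /etc/kubernetes/kubelet-resolv.conf
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8

    # in /var/lib/kubelet/config.yaml
    resolvConf: /etc/kubernetes/kubelet-resolv.conf
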
Feb 13 15:36:09.895998 containerd[1433]: time="2025-02-13T15:36:09.895933974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:36:09.899051 containerd[1433]: time="2025-02-13T15:36:09.898998665Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:36:09.899884 containerd[1433]: time="2025-02-13T15:36:09.899817362Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:36:09.900710 containerd[1433]: time="2025-02-13T15:36:09.900672047Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:36:09.901915 containerd[1433]: time="2025-02-13T15:36:09.901873471Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:36:09.905466 containerd[1433]: time="2025-02-13T15:36:09.905418388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:36:09.906292 containerd[1433]: time="2025-02-13T15:36:09.906259413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 481.538246ms"
Feb 13 15:36:09.906938 containerd[1433]: time="2025-02-13T15:36:09.906905759Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 495.488845ms"
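
Both sandboxes triggered a pull of registry.k8s.io/pause:3.8 because the sandbox image (SandboxImage in the CRI config above) was not yet in the content store; the roughly half-second per pull seen here can be avoided by pre-pulling it into the k8s.io namespace, e.g.:

    ctr --namespace k8s.io images pull registry.k8s.io/pause:3.8
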
Feb 13 15:36:10.018247 containerd[1433]: time="2025-02-13T15:36:10.018134916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:10.018247 containerd[1433]: time="2025-02-13T15:36:10.018205747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:10.018541 containerd[1433]: time="2025-02-13T15:36:10.018227759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:10.018541 containerd[1433]: time="2025-02-13T15:36:10.018301865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:10.019555 containerd[1433]: time="2025-02-13T15:36:10.019393525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:10.019555 containerd[1433]: time="2025-02-13T15:36:10.019466233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:10.020241 containerd[1433]: time="2025-02-13T15:36:10.020140101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:10.020333 containerd[1433]: time="2025-02-13T15:36:10.020239695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:10.081747 kubelet[1741]: E0213 15:36:10.081678    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:10.113218 systemd[1]: Started cri-containerd-0a00c72b243d53ad3b42956b2ce448ed2ae9697a7cc1946811c158867e0c1e7e.scope - libcontainer container 0a00c72b243d53ad3b42956b2ce448ed2ae9697a7cc1946811c158867e0c1e7e.
Feb 13 15:36:10.117289 systemd[1]: Started cri-containerd-459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9.scope - libcontainer container 459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9.
Feb 13 15:36:10.132868 containerd[1433]: time="2025-02-13T15:36:10.132745062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwml4,Uid:11a69541-3b13-4363-831a-9179660c1881,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a00c72b243d53ad3b42956b2ce448ed2ae9697a7cc1946811c158867e0c1e7e\""
Feb 13 15:36:10.141787 kubelet[1741]: E0213 15:36:10.138659    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:10.141787 kubelet[1741]: E0213 15:36:10.140147    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:10.141923 containerd[1433]: time="2025-02-13T15:36:10.139598038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5kf95,Uid:a67a9995-159f-4453-97f3-afbee008ae12,Namespace:kube-system,Attempt:0,} returns sandbox id \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\""
Feb 13 15:36:10.142878 containerd[1433]: time="2025-02-13T15:36:10.142850965Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 15:36:10.204406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1741899640.mount: Deactivated successfully.
Feb 13 15:36:11.082417 kubelet[1741]: E0213 15:36:11.082386    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:11.163701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135490350.mount: Deactivated successfully.
Feb 13 15:36:11.376129 containerd[1433]: time="2025-02-13T15:36:11.375891173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:11.376489 containerd[1433]: time="2025-02-13T15:36:11.376204866Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663372"
Feb 13 15:36:11.377080 containerd[1433]: time="2025-02-13T15:36:11.377049452Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:11.379019 containerd[1433]: time="2025-02-13T15:36:11.378976161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:11.379686 containerd[1433]: time="2025-02-13T15:36:11.379651294Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.23676665s"
Feb 13 15:36:11.379719 containerd[1433]: time="2025-02-13T15:36:11.379684378Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 15:36:11.381086 containerd[1433]: time="2025-02-13T15:36:11.381011070Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:36:11.381827 containerd[1433]: time="2025-02-13T15:36:11.381797400Z" level=info msg="CreateContainer within sandbox \"0a00c72b243d53ad3b42956b2ce448ed2ae9697a7cc1946811c158867e0c1e7e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:36:11.395946 containerd[1433]: time="2025-02-13T15:36:11.395881703Z" level=info msg="CreateContainer within sandbox \"0a00c72b243d53ad3b42956b2ce448ed2ae9697a7cc1946811c158867e0c1e7e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7f278589e168dd76d1cc6dadbd36fe310e23137dc073b283726254f834fc724c\""
Feb 13 15:36:11.396669 containerd[1433]: time="2025-02-13T15:36:11.396644499Z" level=info msg="StartContainer for \"7f278589e168dd76d1cc6dadbd36fe310e23137dc073b283726254f834fc724c\""
Feb 13 15:36:11.431208 systemd[1]: Started cri-containerd-7f278589e168dd76d1cc6dadbd36fe310e23137dc073b283726254f834fc724c.scope - libcontainer container 7f278589e168dd76d1cc6dadbd36fe310e23137dc073b283726254f834fc724c.
Feb 13 15:36:11.456611 containerd[1433]: time="2025-02-13T15:36:11.456560070Z" level=info msg="StartContainer for \"7f278589e168dd76d1cc6dadbd36fe310e23137dc073b283726254f834fc724c\" returns successfully"
Feb 13 15:36:12.082691 kubelet[1741]: E0213 15:36:12.082655    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:12.207961 kubelet[1741]: E0213 15:36:12.207935    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:13.083605 kubelet[1741]: E0213 15:36:13.083548    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:13.209244 kubelet[1741]: E0213 15:36:13.209219    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:14.083769 kubelet[1741]: E0213 15:36:14.083717    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:15.084094 kubelet[1741]: E0213 15:36:15.084042    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:16.085070 kubelet[1741]: E0213 15:36:16.085011    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:17.085761 kubelet[1741]: E0213 15:36:17.085707    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:18.086342 kubelet[1741]: E0213 15:36:18.086299    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:19.087398 kubelet[1741]: E0213 15:36:19.087333    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:20.087824 kubelet[1741]: E0213 15:36:20.087783    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:20.874356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070687278.mount: Deactivated successfully.
Feb 13 15:36:21.088558 kubelet[1741]: E0213 15:36:21.088519    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:22.074625 containerd[1433]: time="2025-02-13T15:36:22.074564874Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:22.075040 containerd[1433]: time="2025-02-13T15:36:22.075003522Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:36:22.075900 containerd[1433]: time="2025-02-13T15:36:22.075862024Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:22.078063 containerd[1433]: time="2025-02-13T15:36:22.078027953Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.696941989s"
Feb 13 15:36:22.078098 containerd[1433]: time="2025-02-13T15:36:22.078062784Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:36:22.080303 containerd[1433]: time="2025-02-13T15:36:22.080275421Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:36:22.088703 kubelet[1741]: E0213 15:36:22.088677    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:22.091518 containerd[1433]: time="2025-02-13T15:36:22.091474212Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\""
Feb 13 15:36:22.092155 containerd[1433]: time="2025-02-13T15:36:22.092123007Z" level=info msg="StartContainer for \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\""
Feb 13 15:36:22.120176 systemd[1]: Started cri-containerd-c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013.scope - libcontainer container c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013.
Feb 13 15:36:22.140789 containerd[1433]: time="2025-02-13T15:36:22.140744319Z" level=info msg="StartContainer for \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\" returns successfully"
Feb 13 15:36:22.179779 systemd[1]: cri-containerd-c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013.scope: Deactivated successfully.
Feb 13 15:36:22.223967 kubelet[1741]: E0213 15:36:22.223910    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:22.255774 kubelet[1741]: I0213 15:36:22.255686    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qwml4" podStartSLOduration=13.017673504 podStartE2EDuration="14.255670404s" podCreationTimestamp="2025-02-13 15:36:08 +0000 UTC" firstStartedPulling="2025-02-13 15:36:10.142437887 +0000 UTC m=+2.965195269" lastFinishedPulling="2025-02-13 15:36:11.380434788 +0000 UTC m=+4.203192169" observedRunningTime="2025-02-13 15:36:12.217217786 +0000 UTC m=+5.039975168" watchObservedRunningTime="2025-02-13 15:36:22.255670404 +0000 UTC m=+15.078427786"
Feb 13 15:36:22.318171 containerd[1433]: time="2025-02-13T15:36:22.318109800Z" level=info msg="shim disconnected" id=c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013 namespace=k8s.io
Feb 13 15:36:22.318171 containerd[1433]: time="2025-02-13T15:36:22.318165666Z" level=warning msg="cleaning up after shim disconnected" id=c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013 namespace=k8s.io
Feb 13 15:36:22.318171 containerd[1433]: time="2025-02-13T15:36:22.318174704Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:23.086905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013-rootfs.mount: Deactivated successfully.
Feb 13 15:36:23.088838 kubelet[1741]: E0213 15:36:23.088805    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:23.227440 kubelet[1741]: E0213 15:36:23.227190    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:23.229763 containerd[1433]: time="2025-02-13T15:36:23.229724439Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:36:23.280443 containerd[1433]: time="2025-02-13T15:36:23.280385483Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\""
Feb 13 15:36:23.280848 containerd[1433]: time="2025-02-13T15:36:23.280804310Z" level=info msg="StartContainer for \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\""
Feb 13 15:36:23.304174 systemd[1]: Started cri-containerd-ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6.scope - libcontainer container ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6.
Feb 13 15:36:23.323517 containerd[1433]: time="2025-02-13T15:36:23.323480771Z" level=info msg="StartContainer for \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\" returns successfully"
Feb 13 15:36:23.343578 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:36:23.343796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:23.343863 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:36:23.351348 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:36:23.352091 systemd[1]: cri-containerd-ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6.scope: Deactivated successfully.
Feb 13 15:36:23.361505 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:23.372872 containerd[1433]: time="2025-02-13T15:36:23.372820549Z" level=info msg="shim disconnected" id=ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6 namespace=k8s.io
Feb 13 15:36:23.373204 containerd[1433]: time="2025-02-13T15:36:23.373064015Z" level=warning msg="cleaning up after shim disconnected" id=ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6 namespace=k8s.io
Feb 13 15:36:23.373204 containerd[1433]: time="2025-02-13T15:36:23.373082531Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:24.086444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6-rootfs.mount: Deactivated successfully.
Feb 13 15:36:24.089606 kubelet[1741]: E0213 15:36:24.089568    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:24.230888 kubelet[1741]: E0213 15:36:24.230822    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:24.234573 containerd[1433]: time="2025-02-13T15:36:24.234531929Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:36:24.272398 containerd[1433]: time="2025-02-13T15:36:24.272293695Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\""
Feb 13 15:36:24.272908 containerd[1433]: time="2025-02-13T15:36:24.272776121Z" level=info msg="StartContainer for \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\""
Feb 13 15:36:24.303252 systemd[1]: Started cri-containerd-594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4.scope - libcontainer container 594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4.
Feb 13 15:36:24.343693 systemd[1]: cri-containerd-594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4.scope: Deactivated successfully.
Feb 13 15:36:24.347667 containerd[1433]: time="2025-02-13T15:36:24.347620385Z" level=info msg="StartContainer for \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\" returns successfully"
Feb 13 15:36:24.363433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4-rootfs.mount: Deactivated successfully.
Feb 13 15:36:24.372378 containerd[1433]: time="2025-02-13T15:36:24.372317256Z" level=info msg="shim disconnected" id=594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4 namespace=k8s.io
Feb 13 15:36:24.372378 containerd[1433]: time="2025-02-13T15:36:24.372372525Z" level=warning msg="cleaning up after shim disconnected" id=594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4 namespace=k8s.io
Feb 13 15:36:24.372378 containerd[1433]: time="2025-02-13T15:36:24.372382043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:25.090127 kubelet[1741]: E0213 15:36:25.090094    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:25.237831 kubelet[1741]: E0213 15:36:25.236593    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:25.239899 containerd[1433]: time="2025-02-13T15:36:25.239858029Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:36:25.251464 containerd[1433]: time="2025-02-13T15:36:25.251410181Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\""
Feb 13 15:36:25.252005 containerd[1433]: time="2025-02-13T15:36:25.251918694Z" level=info msg="StartContainer for \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\""
Feb 13 15:36:25.281282 systemd[1]: Started cri-containerd-a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5.scope - libcontainer container a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5.
Feb 13 15:36:25.301615 systemd[1]: cri-containerd-a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5.scope: Deactivated successfully.
Feb 13 15:36:25.304476 containerd[1433]: time="2025-02-13T15:36:25.304357079Z" level=info msg="StartContainer for \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\" returns successfully"
Feb 13 15:36:25.326116 containerd[1433]: time="2025-02-13T15:36:25.326052462Z" level=info msg="shim disconnected" id=a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5 namespace=k8s.io
Feb 13 15:36:25.326303 containerd[1433]: time="2025-02-13T15:36:25.326128769Z" level=warning msg="cleaning up after shim disconnected" id=a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5 namespace=k8s.io
Feb 13 15:36:25.326303 containerd[1433]: time="2025-02-13T15:36:25.326140567Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:36:26.090325 kubelet[1741]: E0213 15:36:26.090274    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:26.237449 kubelet[1741]: E0213 15:36:26.237419    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:26.243078 containerd[1433]: time="2025-02-13T15:36:26.239833431Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:36:26.247788 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5-rootfs.mount: Deactivated successfully.
Feb 13 15:36:26.262149 containerd[1433]: time="2025-02-13T15:36:26.262101511Z" level=info msg="CreateContainer within sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\""
Feb 13 15:36:26.263361 containerd[1433]: time="2025-02-13T15:36:26.263281735Z" level=info msg="StartContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\""
Feb 13 15:36:26.298228 systemd[1]: Started cri-containerd-e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960.scope - libcontainer container e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960.
Feb 13 15:36:26.335096 containerd[1433]: time="2025-02-13T15:36:26.335029197Z" level=info msg="StartContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" returns successfully"
Feb 13 15:36:26.443973 kubelet[1741]: I0213 15:36:26.443833    1741 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 15:36:26.893084 kernel: Initializing XFRM netlink socket
Feb 13 15:36:27.091375 kubelet[1741]: E0213 15:36:27.091336    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:27.242169 kubelet[1741]: E0213 15:36:27.242057    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:28.081826 kubelet[1741]: E0213 15:36:28.081779    1741 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:28.092056 kubelet[1741]: E0213 15:36:28.092003    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:28.132634 systemd-networkd[1378]: cilium_host: Link UP
Feb 13 15:36:28.133539 systemd-networkd[1378]: cilium_net: Link UP
Feb 13 15:36:28.133758 systemd-networkd[1378]: cilium_net: Gained carrier
Feb 13 15:36:28.133887 systemd-networkd[1378]: cilium_host: Gained carrier
Feb 13 15:36:28.233405 systemd-networkd[1378]: cilium_vxlan: Link UP
Feb 13 15:36:28.233413 systemd-networkd[1378]: cilium_vxlan: Gained carrier
Feb 13 15:36:28.243909 kubelet[1741]: E0213 15:36:28.243875    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:28.396200 systemd-networkd[1378]: cilium_net: Gained IPv6LL
Feb 13 15:36:28.505416 systemd-networkd[1378]: cilium_host: Gained IPv6LL
Feb 13 15:36:28.556318 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:36:29.092398 kubelet[1741]: E0213 15:36:29.092360    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:29.146258 systemd-networkd[1378]: lxc_health: Link UP
Feb 13 15:36:29.146799 systemd-networkd[1378]: lxc_health: Gained carrier
Feb 13 15:36:29.245541 kubelet[1741]: E0213 15:36:29.245501    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:29.533184 systemd-networkd[1378]: cilium_vxlan: Gained IPv6LL
Feb 13 15:36:30.004202 kubelet[1741]: I0213 15:36:30.004094    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5kf95" podStartSLOduration=10.068750821 podStartE2EDuration="22.004070142s" podCreationTimestamp="2025-02-13 15:36:08 +0000 UTC" firstStartedPulling="2025-02-13 15:36:10.143296282 +0000 UTC m=+2.966053664" lastFinishedPulling="2025-02-13 15:36:22.078615603 +0000 UTC m=+14.901372985" observedRunningTime="2025-02-13 15:36:27.258081603 +0000 UTC m=+20.080838985" watchObservedRunningTime="2025-02-13 15:36:30.004070142 +0000 UTC m=+22.826827524"
Feb 13 15:36:30.004432 kubelet[1741]: I0213 15:36:30.004411    1741 topology_manager.go:215] "Topology Admit Handler" podUID="5e75c95f-e609-450a-92d2-bda0ba26c890" podNamespace="default" podName="nginx-deployment-85f456d6dd-vbn9c"
Feb 13 15:36:30.012402 systemd[1]: Created slice kubepods-besteffort-pod5e75c95f_e609_450a_92d2_bda0ba26c890.slice - libcontainer container kubepods-besteffort-pod5e75c95f_e609_450a_92d2_bda0ba26c890.slice.
Feb 13 15:36:30.027951 kubelet[1741]: I0213 15:36:30.027904    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrnmj\" (UniqueName: \"kubernetes.io/projected/5e75c95f-e609-450a-92d2-bda0ba26c890-kube-api-access-jrnmj\") pod \"nginx-deployment-85f456d6dd-vbn9c\" (UID: \"5e75c95f-e609-450a-92d2-bda0ba26c890\") " pod="default/nginx-deployment-85f456d6dd-vbn9c"
Feb 13 15:36:30.093304 kubelet[1741]: E0213 15:36:30.093253    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:30.247474 kubelet[1741]: E0213 15:36:30.247422    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:30.315786 containerd[1433]: time="2025-02-13T15:36:30.315669430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vbn9c,Uid:5e75c95f-e609-450a-92d2-bda0ba26c890,Namespace:default,Attempt:0,}"
Feb 13 15:36:30.387908 systemd-networkd[1378]: lxcf2c13508bbd8: Link UP
Feb 13 15:36:30.402730 kernel: eth0: renamed from tmp3d7ab
Feb 13 15:36:30.407070 systemd-networkd[1378]: lxcf2c13508bbd8: Gained carrier
Feb 13 15:36:31.094426 kubelet[1741]: E0213 15:36:31.094368    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:31.132444 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Feb 13 15:36:31.248991 kubelet[1741]: E0213 15:36:31.248783    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:32.028181 systemd-networkd[1378]: lxcf2c13508bbd8: Gained IPv6LL
Feb 13 15:36:32.094580 kubelet[1741]: E0213 15:36:32.094524    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:32.249972 kubelet[1741]: E0213 15:36:32.249928    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:36:33.094717 kubelet[1741]: E0213 15:36:33.094673    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:33.783834 containerd[1433]: time="2025-02-13T15:36:33.783745077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:33.784358 containerd[1433]: time="2025-02-13T15:36:33.783811353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:33.784358 containerd[1433]: time="2025-02-13T15:36:33.783822792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:33.784358 containerd[1433]: time="2025-02-13T15:36:33.783905827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:33.811276 systemd[1]: Started cri-containerd-3d7ab66af25788a81081da970d685fc1d6228d5abc93286fa053cb47ac115e8e.scope - libcontainer container 3d7ab66af25788a81081da970d685fc1d6228d5abc93286fa053cb47ac115e8e.
Feb 13 15:36:33.822272 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:36:33.838571 containerd[1433]: time="2025-02-13T15:36:33.838533621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-vbn9c,Uid:5e75c95f-e609-450a-92d2-bda0ba26c890,Namespace:default,Attempt:0,} returns sandbox id \"3d7ab66af25788a81081da970d685fc1d6228d5abc93286fa053cb47ac115e8e\""
Feb 13 15:36:33.840465 containerd[1433]: time="2025-02-13T15:36:33.840386025Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:36:34.095056 kubelet[1741]: E0213 15:36:34.094914    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:35.095918 kubelet[1741]: E0213 15:36:35.095869    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:35.747195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798951471.mount: Deactivated successfully.
Feb 13 15:36:36.097061 kubelet[1741]: E0213 15:36:36.096936    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:36.572164 containerd[1433]: time="2025-02-13T15:36:36.572104393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:36.572551 containerd[1433]: time="2025-02-13T15:36:36.572504732Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086"
Feb 13 15:36:36.573364 containerd[1433]: time="2025-02-13T15:36:36.573296610Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:36.576001 containerd[1433]: time="2025-02-13T15:36:36.575963510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:36.577059 containerd[1433]: time="2025-02-13T15:36:36.577028573Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.736557113s"
Feb 13 15:36:36.577059 containerd[1433]: time="2025-02-13T15:36:36.577059812Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 15:36:36.581846 containerd[1433]: time="2025-02-13T15:36:36.579603557Z" level=info msg="CreateContainer within sandbox \"3d7ab66af25788a81081da970d685fc1d6228d5abc93286fa053cb47ac115e8e\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:36:36.593124 containerd[1433]: time="2025-02-13T15:36:36.593059167Z" level=info msg="CreateContainer within sandbox \"3d7ab66af25788a81081da970d685fc1d6228d5abc93286fa053cb47ac115e8e\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe\""
Feb 13 15:36:36.593579 containerd[1433]: time="2025-02-13T15:36:36.593538181Z" level=info msg="StartContainer for \"5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe\""
Feb 13 15:36:36.611580 systemd[1]: run-containerd-runc-k8s.io-5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe-runc.28i27A.mount: Deactivated successfully.
Feb 13 15:36:36.624219 systemd[1]: Started cri-containerd-5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe.scope - libcontainer container 5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe.
Feb 13 15:36:36.648473 containerd[1433]: time="2025-02-13T15:36:36.648042302Z" level=info msg="StartContainer for \"5a9fca3962b3bae0cc2d489022df7f12fb1e72b347c7f3f452d8d062401ac5fe\" returns successfully"
Feb 13 15:36:37.097207 kubelet[1741]: E0213 15:36:37.097155    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:38.097550 kubelet[1741]: E0213 15:36:38.097518    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:39.097955 kubelet[1741]: E0213 15:36:39.097890    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:40.098769 kubelet[1741]: E0213 15:36:40.098721    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:41.099302 kubelet[1741]: E0213 15:36:41.099262    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:42.102798 kubelet[1741]: E0213 15:36:42.100027    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:42.263375 kubelet[1741]: I0213 15:36:42.263292    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-vbn9c" podStartSLOduration=10.525498249 podStartE2EDuration="13.263273934s" podCreationTimestamp="2025-02-13 15:36:29 +0000 UTC" firstStartedPulling="2025-02-13 15:36:33.840048446 +0000 UTC m=+26.662805828" lastFinishedPulling="2025-02-13 15:36:36.577824171 +0000 UTC m=+29.400581513" observedRunningTime="2025-02-13 15:36:37.269778327 +0000 UTC m=+30.092535709" watchObservedRunningTime="2025-02-13 15:36:42.263273934 +0000 UTC m=+35.086031316"
Feb 13 15:36:42.263540 kubelet[1741]: I0213 15:36:42.263412    1741 topology_manager.go:215] "Topology Admit Handler" podUID="32e260b6-6e25-4706-99f1-8b026529a063" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 15:36:42.268464 systemd[1]: Created slice kubepods-besteffort-pod32e260b6_6e25_4706_99f1_8b026529a063.slice - libcontainer container kubepods-besteffort-pod32e260b6_6e25_4706_99f1_8b026529a063.slice.
Feb 13 15:36:42.280342 kubelet[1741]: I0213 15:36:42.280297    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxlz2\" (UniqueName: \"kubernetes.io/projected/32e260b6-6e25-4706-99f1-8b026529a063-kube-api-access-hxlz2\") pod \"nfs-server-provisioner-0\" (UID: \"32e260b6-6e25-4706-99f1-8b026529a063\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:36:42.280342 kubelet[1741]: I0213 15:36:42.280347    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/32e260b6-6e25-4706-99f1-8b026529a063-data\") pod \"nfs-server-provisioner-0\" (UID: \"32e260b6-6e25-4706-99f1-8b026529a063\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:36:42.571988 containerd[1433]: time="2025-02-13T15:36:42.571936731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32e260b6-6e25-4706-99f1-8b026529a063,Namespace:default,Attempt:0,}"
Feb 13 15:36:42.625466 systemd-networkd[1378]: lxc752b34fe8ed8: Link UP
Feb 13 15:36:42.639082 kernel: eth0: renamed from tmpd1506
Feb 13 15:36:42.655834 systemd-networkd[1378]: lxc752b34fe8ed8: Gained carrier
Feb 13 15:36:42.814376 containerd[1433]: time="2025-02-13T15:36:42.814009611Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:42.814376 containerd[1433]: time="2025-02-13T15:36:42.814085608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:42.814376 containerd[1433]: time="2025-02-13T15:36:42.814101328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:42.814376 containerd[1433]: time="2025-02-13T15:36:42.814185765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:42.834250 systemd[1]: Started cri-containerd-d150605e6ccf6c441705fede908e179621eab38f17c50ee3c79e46f27f4621fc.scope - libcontainer container d150605e6ccf6c441705fede908e179621eab38f17c50ee3c79e46f27f4621fc.
Feb 13 15:36:42.847055 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:36:42.864918 containerd[1433]: time="2025-02-13T15:36:42.864877213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:32e260b6-6e25-4706-99f1-8b026529a063,Namespace:default,Attempt:0,} returns sandbox id \"d150605e6ccf6c441705fede908e179621eab38f17c50ee3c79e46f27f4621fc\""
Feb 13 15:36:42.866561 containerd[1433]: time="2025-02-13T15:36:42.866528830Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:36:43.100275 kubelet[1741]: E0213 15:36:43.100166    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:43.102064 update_engine[1421]: I20250213 15:36:43.101629  1421 update_attempter.cc:509] Updating boot flags...
Feb 13 15:36:43.125378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2976)
Feb 13 15:36:43.150131 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2976)
Feb 13 15:36:44.101122 kubelet[1741]: E0213 15:36:44.101063    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:44.444214 systemd-networkd[1378]: lxc752b34fe8ed8: Gained IPv6LL
Feb 13 15:36:45.102241 kubelet[1741]: E0213 15:36:45.102140    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:45.302159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134767170.mount: Deactivated successfully.
Feb 13 15:36:46.103223 kubelet[1741]: E0213 15:36:46.103178    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:46.652143 containerd[1433]: time="2025-02-13T15:36:46.651771753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:46.652642 containerd[1433]: time="2025-02-13T15:36:46.652572408Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Feb 13 15:36:46.652989 containerd[1433]: time="2025-02-13T15:36:46.652963475Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:46.668337 containerd[1433]: time="2025-02-13T15:36:46.668251312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:46.669436 containerd[1433]: time="2025-02-13T15:36:46.669285799Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.802701972s"
Feb 13 15:36:46.669436 containerd[1433]: time="2025-02-13T15:36:46.669322718Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 13 15:36:46.672692 containerd[1433]: time="2025-02-13T15:36:46.672661293Z" level=info msg="CreateContainer within sandbox \"d150605e6ccf6c441705fede908e179621eab38f17c50ee3c79e46f27f4621fc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:36:46.696005 containerd[1433]: time="2025-02-13T15:36:46.695940477Z" level=info msg="CreateContainer within sandbox \"d150605e6ccf6c441705fede908e179621eab38f17c50ee3c79e46f27f4621fc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a2c5ab00a181da8fad98cd56409b3730c8ff0cd486ff4aadbba2e3902ab9caa2\""
Feb 13 15:36:46.697730 containerd[1433]: time="2025-02-13T15:36:46.696764291Z" level=info msg="StartContainer for \"a2c5ab00a181da8fad98cd56409b3730c8ff0cd486ff4aadbba2e3902ab9caa2\""
Feb 13 15:36:46.783226 systemd[1]: Started cri-containerd-a2c5ab00a181da8fad98cd56409b3730c8ff0cd486ff4aadbba2e3902ab9caa2.scope - libcontainer container a2c5ab00a181da8fad98cd56409b3730c8ff0cd486ff4aadbba2e3902ab9caa2.
Feb 13 15:36:46.833520 containerd[1433]: time="2025-02-13T15:36:46.833348134Z" level=info msg="StartContainer for \"a2c5ab00a181da8fad98cd56409b3730c8ff0cd486ff4aadbba2e3902ab9caa2\" returns successfully"
Feb 13 15:36:47.103443 kubelet[1741]: E0213 15:36:47.103404    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:48.081772 kubelet[1741]: E0213 15:36:48.081729    1741 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:48.104003 kubelet[1741]: E0213 15:36:48.103971    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:49.104765 kubelet[1741]: E0213 15:36:49.104693    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:50.105175 kubelet[1741]: E0213 15:36:50.105129    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:51.105810 kubelet[1741]: E0213 15:36:51.105766    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:52.106509 kubelet[1741]: E0213 15:36:52.106460    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:53.107334 kubelet[1741]: E0213 15:36:53.107270    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:54.107507 kubelet[1741]: E0213 15:36:54.107454    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:55.108387 kubelet[1741]: E0213 15:36:55.108344    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:56.109254 kubelet[1741]: E0213 15:36:56.109204    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:56.972759 kubelet[1741]: I0213 15:36:56.972656    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.167996786 podStartE2EDuration="14.972637174s" podCreationTimestamp="2025-02-13 15:36:42 +0000 UTC" firstStartedPulling="2025-02-13 15:36:42.866120285 +0000 UTC m=+35.688877667" lastFinishedPulling="2025-02-13 15:36:46.670760673 +0000 UTC m=+39.493518055" observedRunningTime="2025-02-13 15:36:47.289488979 +0000 UTC m=+40.112246361" watchObservedRunningTime="2025-02-13 15:36:56.972637174 +0000 UTC m=+49.795394556"
Feb 13 15:36:56.973029 kubelet[1741]: I0213 15:36:56.972983    1741 topology_manager.go:215] "Topology Admit Handler" podUID="146acaa6-7025-45f6-85c7-c53d43283e91" podNamespace="default" podName="test-pod-1"
Feb 13 15:36:56.978154 systemd[1]: Created slice kubepods-besteffort-pod146acaa6_7025_45f6_85c7_c53d43283e91.slice - libcontainer container kubepods-besteffort-pod146acaa6_7025_45f6_85c7_c53d43283e91.slice.
Feb 13 15:36:57.110371 kubelet[1741]: E0213 15:36:57.110329    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:57.151593 kubelet[1741]: I0213 15:36:57.151555    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qkz\" (UniqueName: \"kubernetes.io/projected/146acaa6-7025-45f6-85c7-c53d43283e91-kube-api-access-42qkz\") pod \"test-pod-1\" (UID: \"146acaa6-7025-45f6-85c7-c53d43283e91\") " pod="default/test-pod-1"
Feb 13 15:36:57.151593 kubelet[1741]: I0213 15:36:57.151594    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8af721c1-0f88-482e-a180-49dcb28876a1\" (UniqueName: \"kubernetes.io/nfs/146acaa6-7025-45f6-85c7-c53d43283e91-pvc-8af721c1-0f88-482e-a180-49dcb28876a1\") pod \"test-pod-1\" (UID: \"146acaa6-7025-45f6-85c7-c53d43283e91\") " pod="default/test-pod-1"
Feb 13 15:36:57.274043 kernel: FS-Cache: Loaded
Feb 13 15:36:57.297301 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:36:57.297395 kernel: RPC: Registered udp transport module.
Feb 13 15:36:57.297412 kernel: RPC: Registered tcp transport module.
Feb 13 15:36:57.298429 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:36:57.298455 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:36:57.461059 kernel: NFS: Registering the id_resolver key type
Feb 13 15:36:57.461163 kernel: Key type id_resolver registered
Feb 13 15:36:57.462141 kernel: Key type id_legacy registered
Feb 13 15:36:57.490202 nfsidmap[3148]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:36:57.494063 nfsidmap[3151]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:36:57.581690 containerd[1433]: time="2025-02-13T15:36:57.581275633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:146acaa6-7025-45f6-85c7-c53d43283e91,Namespace:default,Attempt:0,}"
Feb 13 15:36:57.615598 systemd-networkd[1378]: lxca0583c0cfdb3: Link UP
Feb 13 15:36:57.623048 kernel: eth0: renamed from tmp6cd73
Feb 13 15:36:57.627458 systemd-networkd[1378]: lxca0583c0cfdb3: Gained carrier
Feb 13 15:36:57.762083 containerd[1433]: time="2025-02-13T15:36:57.761966711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:36:57.762083 containerd[1433]: time="2025-02-13T15:36:57.762077708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:36:57.762234 containerd[1433]: time="2025-02-13T15:36:57.762093108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:57.762603 containerd[1433]: time="2025-02-13T15:36:57.762570099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:36:57.787223 systemd[1]: Started cri-containerd-6cd73b3891c717d8111189f516917ad242a98361e32d2ab1179f5d1be71eddad.scope - libcontainer container 6cd73b3891c717d8111189f516917ad242a98361e32d2ab1179f5d1be71eddad.
Feb 13 15:36:57.797053 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:36:57.814905 containerd[1433]: time="2025-02-13T15:36:57.814684231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:146acaa6-7025-45f6-85c7-c53d43283e91,Namespace:default,Attempt:0,} returns sandbox id \"6cd73b3891c717d8111189f516917ad242a98361e32d2ab1179f5d1be71eddad\""
Feb 13 15:36:57.816619 containerd[1433]: time="2025-02-13T15:36:57.816311159Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:36:58.096536 containerd[1433]: time="2025-02-13T15:36:58.096487864Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:36:58.097215 containerd[1433]: time="2025-02-13T15:36:58.097047094Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:36:58.100102 containerd[1433]: time="2025-02-13T15:36:58.100071796Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 283.729357ms"
Feb 13 15:36:58.100292 containerd[1433]: time="2025-02-13T15:36:58.100185554Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 15:36:58.102105 containerd[1433]: time="2025-02-13T15:36:58.102071078Z" level=info msg="CreateContainer within sandbox \"6cd73b3891c717d8111189f516917ad242a98361e32d2ab1179f5d1be71eddad\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:36:58.111389 kubelet[1741]: E0213 15:36:58.111349    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:58.111896 containerd[1433]: time="2025-02-13T15:36:58.111847532Z" level=info msg="CreateContainer within sandbox \"6cd73b3891c717d8111189f516917ad242a98361e32d2ab1179f5d1be71eddad\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"266b80706310b7fd1789e5abcd58c9e34d993b4c6a2096c429d0d021b2dd7e6e\""
Feb 13 15:36:58.112344 containerd[1433]: time="2025-02-13T15:36:58.112310364Z" level=info msg="StartContainer for \"266b80706310b7fd1789e5abcd58c9e34d993b4c6a2096c429d0d021b2dd7e6e\""
Feb 13 15:36:58.134175 systemd[1]: Started cri-containerd-266b80706310b7fd1789e5abcd58c9e34d993b4c6a2096c429d0d021b2dd7e6e.scope - libcontainer container 266b80706310b7fd1789e5abcd58c9e34d993b4c6a2096c429d0d021b2dd7e6e.
Feb 13 15:36:58.157610 containerd[1433]: time="2025-02-13T15:36:58.157489745Z" level=info msg="StartContainer for \"266b80706310b7fd1789e5abcd58c9e34d993b4c6a2096c429d0d021b2dd7e6e\" returns successfully"
Feb 13 15:36:58.311026 kubelet[1741]: I0213 15:36:58.310954    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.026176413 podStartE2EDuration="16.31093919s" podCreationTimestamp="2025-02-13 15:36:42 +0000 UTC" firstStartedPulling="2025-02-13 15:36:57.816093044 +0000 UTC m=+50.638850426" lastFinishedPulling="2025-02-13 15:36:58.100855821 +0000 UTC m=+50.923613203" observedRunningTime="2025-02-13 15:36:58.310864632 +0000 UTC m=+51.133622014" watchObservedRunningTime="2025-02-13 15:36:58.31093919 +0000 UTC m=+51.133696572"
Feb 13 15:36:59.112384 kubelet[1741]: E0213 15:36:59.112331    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:36:59.676214 systemd-networkd[1378]: lxca0583c0cfdb3: Gained IPv6LL
Feb 13 15:37:00.113239 kubelet[1741]: E0213 15:37:00.113175    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:01.113705 kubelet[1741]: E0213 15:37:01.113652    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:02.114784 kubelet[1741]: E0213 15:37:02.114735    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:03.115228 kubelet[1741]: E0213 15:37:03.115183    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:04.115696 kubelet[1741]: E0213 15:37:04.115641    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:05.052829 containerd[1433]: time="2025-02-13T15:37:05.052776754Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:37:05.061707 containerd[1433]: time="2025-02-13T15:37:05.061656740Z" level=info msg="StopContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" with timeout 2 (s)"
Feb 13 15:37:05.063473 containerd[1433]: time="2025-02-13T15:37:05.063431553Z" level=info msg="Stop container \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" with signal terminated"
Feb 13 15:37:05.069145 systemd-networkd[1378]: lxc_health: Link DOWN
Feb 13 15:37:05.069154 systemd-networkd[1378]: lxc_health: Lost carrier
Feb 13 15:37:05.093578 systemd[1]: cri-containerd-e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960.scope: Deactivated successfully.
Feb 13 15:37:05.094140 systemd[1]: cri-containerd-e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960.scope: Consumed 6.771s CPU time.
Feb 13 15:37:05.113423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960-rootfs.mount: Deactivated successfully.
Feb 13 15:37:05.116382 kubelet[1741]: E0213 15:37:05.116347    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:05.123682 containerd[1433]: time="2025-02-13T15:37:05.123628886Z" level=info msg="shim disconnected" id=e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960 namespace=k8s.io
Feb 13 15:37:05.124047 containerd[1433]: time="2025-02-13T15:37:05.123856842Z" level=warning msg="cleaning up after shim disconnected" id=e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960 namespace=k8s.io
Feb 13 15:37:05.124047 containerd[1433]: time="2025-02-13T15:37:05.123872242Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:05.136982 containerd[1433]: time="2025-02-13T15:37:05.136861046Z" level=info msg="StopContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" returns successfully"
Feb 13 15:37:05.137521 containerd[1433]: time="2025-02-13T15:37:05.137497317Z" level=info msg="StopPodSandbox for \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\""
Feb 13 15:37:05.137564 containerd[1433]: time="2025-02-13T15:37:05.137534196Z" level=info msg="Container to stop \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:37:05.137564 containerd[1433]: time="2025-02-13T15:37:05.137545076Z" level=info msg="Container to stop \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:37:05.137564 containerd[1433]: time="2025-02-13T15:37:05.137553636Z" level=info msg="Container to stop \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:37:05.137564 containerd[1433]: time="2025-02-13T15:37:05.137561556Z" level=info msg="Container to stop \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:37:05.137672 containerd[1433]: time="2025-02-13T15:37:05.137569756Z" level=info msg="Container to stop \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:37:05.139004 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9-shm.mount: Deactivated successfully.
Feb 13 15:37:05.144420 systemd[1]: cri-containerd-459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9.scope: Deactivated successfully.
Feb 13 15:37:05.164839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9-rootfs.mount: Deactivated successfully.
Feb 13 15:37:05.168847 containerd[1433]: time="2025-02-13T15:37:05.168785205Z" level=info msg="shim disconnected" id=459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9 namespace=k8s.io
Feb 13 15:37:05.168847 containerd[1433]: time="2025-02-13T15:37:05.168840644Z" level=warning msg="cleaning up after shim disconnected" id=459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9 namespace=k8s.io
Feb 13 15:37:05.169139 containerd[1433]: time="2025-02-13T15:37:05.168849244Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:05.183157 containerd[1433]: time="2025-02-13T15:37:05.183056270Z" level=info msg="TearDown network for sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" successfully"
Feb 13 15:37:05.183157 containerd[1433]: time="2025-02-13T15:37:05.183101069Z" level=info msg="StopPodSandbox for \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" returns successfully"
Feb 13 15:37:05.293044 kubelet[1741]: I0213 15:37:05.292991    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-kernel\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293044 kubelet[1741]: I0213 15:37:05.293042    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-bpf-maps\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293243 kubelet[1741]: I0213 15:37:05.293066    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-hubble-tls\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293243 kubelet[1741]: I0213 15:37:05.293097    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvw9f\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-kube-api-access-pvw9f\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293243 kubelet[1741]: I0213 15:37:05.293103    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.293243 kubelet[1741]: I0213 15:37:05.293137    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.293243 kubelet[1741]: I0213 15:37:05.293103    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293115    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-net\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293186    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-cgroup\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293209    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a9995-159f-4453-97f3-afbee008ae12-cilium-config-path\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293230    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-etc-cni-netd\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293246    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-xtables-lock\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293366 kubelet[1741]: I0213 15:37:05.293265    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a67a9995-159f-4453-97f3-afbee008ae12-clustermesh-secrets\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293280    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-lib-modules\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293296    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-run\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293310    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-hostproc\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293325    1741 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cni-path\") pod \"a67a9995-159f-4453-97f3-afbee008ae12\" (UID: \"a67a9995-159f-4453-97f3-afbee008ae12\") "
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293353    1741 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-bpf-maps\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293364    1741 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-kernel\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.293511 kubelet[1741]: I0213 15:37:05.293372    1741 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-host-proc-sys-net\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.293655 kubelet[1741]: I0213 15:37:05.293397    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cni-path" (OuterVolumeSpecName: "cni-path") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.293655 kubelet[1741]: I0213 15:37:05.293413    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.295430 kubelet[1741]: I0213 15:37:05.295224    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a67a9995-159f-4453-97f3-afbee008ae12-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:37:05.295430 kubelet[1741]: I0213 15:37:05.295302    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.295430 kubelet[1741]: I0213 15:37:05.295322    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.295430 kubelet[1741]: I0213 15:37:05.295340    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-hostproc" (OuterVolumeSpecName: "hostproc") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.295430 kubelet[1741]: I0213 15:37:05.295360    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.295675 kubelet[1741]: I0213 15:37:05.295379    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:37:05.297799 kubelet[1741]: I0213 15:37:05.297759    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-kube-api-access-pvw9f" (OuterVolumeSpecName: "kube-api-access-pvw9f") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "kube-api-access-pvw9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:37:05.298106 systemd[1]: var-lib-kubelet-pods-a67a9995\x2d159f\x2d4453\x2d97f3\x2dafbee008ae12-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpvw9f.mount: Deactivated successfully.
Feb 13 15:37:05.298208 systemd[1]: var-lib-kubelet-pods-a67a9995\x2d159f\x2d4453\x2d97f3\x2dafbee008ae12-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:37:05.298611 kubelet[1741]: I0213 15:37:05.298558    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a67a9995-159f-4453-97f3-afbee008ae12-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:37:05.298611 kubelet[1741]: I0213 15:37:05.298577    1741 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a67a9995-159f-4453-97f3-afbee008ae12" (UID: "a67a9995-159f-4453-97f3-afbee008ae12"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:37:05.319478 kubelet[1741]: I0213 15:37:05.319353    1741 scope.go:117] "RemoveContainer" containerID="e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960"
Feb 13 15:37:05.321661 containerd[1433]: time="2025-02-13T15:37:05.321625061Z" level=info msg="RemoveContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\""
Feb 13 15:37:05.325197 systemd[1]: Removed slice kubepods-burstable-poda67a9995_159f_4453_97f3_afbee008ae12.slice - libcontainer container kubepods-burstable-poda67a9995_159f_4453_97f3_afbee008ae12.slice.
Feb 13 15:37:05.325287 systemd[1]: kubepods-burstable-poda67a9995_159f_4453_97f3_afbee008ae12.slice: Consumed 6.897s CPU time.
Feb 13 15:37:05.326763 containerd[1433]: time="2025-02-13T15:37:05.326730544Z" level=info msg="RemoveContainer for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" returns successfully"
Feb 13 15:37:05.326989 kubelet[1741]: I0213 15:37:05.326966    1741 scope.go:117] "RemoveContainer" containerID="a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5"
Feb 13 15:37:05.328496 containerd[1433]: time="2025-02-13T15:37:05.328363319Z" level=info msg="RemoveContainer for \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\""
Feb 13 15:37:05.331480 containerd[1433]: time="2025-02-13T15:37:05.331448713Z" level=info msg="RemoveContainer for \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\" returns successfully"
Feb 13 15:37:05.331708 kubelet[1741]: I0213 15:37:05.331678    1741 scope.go:117] "RemoveContainer" containerID="594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4"
Feb 13 15:37:05.332852 containerd[1433]: time="2025-02-13T15:37:05.332823212Z" level=info msg="RemoveContainer for \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\""
Feb 13 15:37:05.335813 containerd[1433]: time="2025-02-13T15:37:05.335766328Z" level=info msg="RemoveContainer for \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\" returns successfully"
Feb 13 15:37:05.336038 kubelet[1741]: I0213 15:37:05.336005    1741 scope.go:117] "RemoveContainer" containerID="ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6"
Feb 13 15:37:05.336999 containerd[1433]: time="2025-02-13T15:37:05.336976109Z" level=info msg="RemoveContainer for \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\""
Feb 13 15:37:05.339827 containerd[1433]: time="2025-02-13T15:37:05.339753948Z" level=info msg="RemoveContainer for \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\" returns successfully"
Feb 13 15:37:05.339998 kubelet[1741]: I0213 15:37:05.339972    1741 scope.go:117] "RemoveContainer" containerID="c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013"
Feb 13 15:37:05.341356 containerd[1433]: time="2025-02-13T15:37:05.341107007Z" level=info msg="RemoveContainer for \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\""
Feb 13 15:37:05.343423 containerd[1433]: time="2025-02-13T15:37:05.343391853Z" level=info msg="RemoveContainer for \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\" returns successfully"
Feb 13 15:37:05.343733 kubelet[1741]: I0213 15:37:05.343701    1741 scope.go:117] "RemoveContainer" containerID="e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960"
Feb 13 15:37:05.344067 containerd[1433]: time="2025-02-13T15:37:05.344019203Z" level=error msg="ContainerStatus for \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\": not found"
Feb 13 15:37:05.344199 kubelet[1741]: E0213 15:37:05.344177    1741 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\": not found" containerID="e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960"
Feb 13 15:37:05.344280 kubelet[1741]: I0213 15:37:05.344206    1741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960"} err="failed to get container status \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\": rpc error: code = NotFound desc = an error occurred when try to find container \"e09ac3e6147ea09d690d039c012edb64843392a92cbece36d2eb582b46a13960\": not found"
Feb 13 15:37:05.344317 kubelet[1741]: I0213 15:37:05.344281    1741 scope.go:117] "RemoveContainer" containerID="a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5"
Feb 13 15:37:05.344454 containerd[1433]: time="2025-02-13T15:37:05.344429077Z" level=error msg="ContainerStatus for \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\": not found"
Feb 13 15:37:05.344571 kubelet[1741]: E0213 15:37:05.344548    1741 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\": not found" containerID="a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5"
Feb 13 15:37:05.344663 kubelet[1741]: I0213 15:37:05.344579    1741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5"} err="failed to get container status \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8882f4fae780f404ee3d5b497d1c2c9f9ed948b68944d1167720bbb65a7d6c5\": not found"
Feb 13 15:37:05.344663 kubelet[1741]: I0213 15:37:05.344599    1741 scope.go:117] "RemoveContainer" containerID="594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4"
Feb 13 15:37:05.345030 containerd[1433]: time="2025-02-13T15:37:05.344890670Z" level=error msg="ContainerStatus for \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\": not found"
Feb 13 15:37:05.345114 kubelet[1741]: E0213 15:37:05.345031    1741 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\": not found" containerID="594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4"
Feb 13 15:37:05.345114 kubelet[1741]: I0213 15:37:05.345053    1741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4"} err="failed to get container status \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\": rpc error: code = NotFound desc = an error occurred when try to find container \"594daa148b87f52414bf82fa391061d17542954dd193a37608c306a08d2330f4\": not found"
Feb 13 15:37:05.345114 kubelet[1741]: I0213 15:37:05.345070    1741 scope.go:117] "RemoveContainer" containerID="ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6"
Feb 13 15:37:05.345584 containerd[1433]: time="2025-02-13T15:37:05.345325344Z" level=error msg="ContainerStatus for \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\": not found"
Feb 13 15:37:05.345636 kubelet[1741]: E0213 15:37:05.345451    1741 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\": not found" containerID="ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6"
Feb 13 15:37:05.345636 kubelet[1741]: I0213 15:37:05.345477    1741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6"} err="failed to get container status \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\": rpc error: code = NotFound desc = an error occurred when try to find container \"ff1e5a799bc6ee2efa587ac440245e8a63daac55ea1cfda5d9c3b279fd9d4cb6\": not found"
Feb 13 15:37:05.345636 kubelet[1741]: I0213 15:37:05.345497    1741 scope.go:117] "RemoveContainer" containerID="c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013"
Feb 13 15:37:05.345732 containerd[1433]: time="2025-02-13T15:37:05.345685498Z" level=error msg="ContainerStatus for \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\": not found"
Feb 13 15:37:05.345881 kubelet[1741]: E0213 15:37:05.345813    1741 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\": not found" containerID="c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013"
Feb 13 15:37:05.345881 kubelet[1741]: I0213 15:37:05.345846    1741 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013"} err="failed to get container status \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\": rpc error: code = NotFound desc = an error occurred when try to find container \"c94d17e5546222424407ffdcad6f28101a787e58b7b3464ca552d62a616e5013\": not found"
Feb 13 15:37:05.394416 kubelet[1741]: I0213 15:37:05.394374    1741 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-run\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394416 kubelet[1741]: I0213 15:37:05.394404    1741 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-hostproc\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394416 kubelet[1741]: I0213 15:37:05.394414    1741 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cni-path\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394416 kubelet[1741]: I0213 15:37:05.394423    1741 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-hubble-tls\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394432    1741 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pvw9f\" (UniqueName: \"kubernetes.io/projected/a67a9995-159f-4453-97f3-afbee008ae12-kube-api-access-pvw9f\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394442    1741 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-cilium-cgroup\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394450    1741 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a67a9995-159f-4453-97f3-afbee008ae12-cilium-config-path\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394458    1741 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-etc-cni-netd\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394465    1741 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-xtables-lock\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394473    1741 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a67a9995-159f-4453-97f3-afbee008ae12-clustermesh-secrets\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:05.394601 kubelet[1741]: I0213 15:37:05.394481    1741 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a67a9995-159f-4453-97f3-afbee008ae12-lib-modules\") on node \"10.0.0.105\" DevicePath \"\""
Feb 13 15:37:06.040822 systemd[1]: var-lib-kubelet-pods-a67a9995\x2d159f\x2d4453\x2d97f3\x2dafbee008ae12-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:37:06.116831 kubelet[1741]: E0213 15:37:06.116786    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:06.199784 kubelet[1741]: I0213 15:37:06.199740    1741 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a67a9995-159f-4453-97f3-afbee008ae12" path="/var/lib/kubelet/pods/a67a9995-159f-4453-97f3-afbee008ae12/volumes"
Feb 13 15:37:07.116977 kubelet[1741]: E0213 15:37:07.116915    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:08.081394 kubelet[1741]: E0213 15:37:08.081323    1741 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:08.095188 containerd[1433]: time="2025-02-13T15:37:08.095156415Z" level=info msg="StopPodSandbox for \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\""
Feb 13 15:37:08.095474 containerd[1433]: time="2025-02-13T15:37:08.095241094Z" level=info msg="TearDown network for sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" successfully"
Feb 13 15:37:08.095474 containerd[1433]: time="2025-02-13T15:37:08.095252454Z" level=info msg="StopPodSandbox for \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" returns successfully"
Feb 13 15:37:08.104667 containerd[1433]: time="2025-02-13T15:37:08.104507406Z" level=info msg="RemovePodSandbox for \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\""
Feb 13 15:37:08.104667 containerd[1433]: time="2025-02-13T15:37:08.104551485Z" level=info msg="Forcibly stopping sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\""
Feb 13 15:37:08.104667 containerd[1433]: time="2025-02-13T15:37:08.104603164Z" level=info msg="TearDown network for sandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" successfully"
Feb 13 15:37:08.107009 containerd[1433]: time="2025-02-13T15:37:08.106973891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:37:08.107086 containerd[1433]: time="2025-02-13T15:37:08.107037970Z" level=info msg="RemovePodSandbox \"459c03bfbdc4a4e0b7165fe7f67ef2d54e7831d69882983002393c94c9fedae9\" returns successfully"
Feb 13 15:37:08.117853 kubelet[1741]: E0213 15:37:08.117824    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:08.209124 kubelet[1741]: E0213 15:37:08.209063    1741 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:37:08.386707 kubelet[1741]: I0213 15:37:08.386578    1741 topology_manager.go:215] "Topology Admit Handler" podUID="50a6729e-e1dc-496e-943f-a3b23b5ffac2" podNamespace="kube-system" podName="cilium-h9tm2"
Feb 13 15:37:08.386707 kubelet[1741]: E0213 15:37:08.386634    1741 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="mount-cgroup"
Feb 13 15:37:08.386707 kubelet[1741]: E0213 15:37:08.386643    1741 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="apply-sysctl-overwrites"
Feb 13 15:37:08.386707 kubelet[1741]: E0213 15:37:08.386650    1741 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="mount-bpf-fs"
Feb 13 15:37:08.386707 kubelet[1741]: E0213 15:37:08.386657    1741 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="clean-cilium-state"
Feb 13 15:37:08.386707 kubelet[1741]: E0213 15:37:08.386664    1741 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="cilium-agent"
Feb 13 15:37:08.386707 kubelet[1741]: I0213 15:37:08.386686    1741 memory_manager.go:354] "RemoveStaleState removing state" podUID="a67a9995-159f-4453-97f3-afbee008ae12" containerName="cilium-agent"
Feb 13 15:37:08.388902 kubelet[1741]: I0213 15:37:08.388868    1741 topology_manager.go:215] "Topology Admit Handler" podUID="5c55c50b-2faa-4835-b198-da38c0417ade" podNamespace="kube-system" podName="cilium-operator-599987898-jqkgd"
Feb 13 15:37:08.392418 systemd[1]: Created slice kubepods-burstable-pod50a6729e_e1dc_496e_943f_a3b23b5ffac2.slice - libcontainer container kubepods-burstable-pod50a6729e_e1dc_496e_943f_a3b23b5ffac2.slice.
Feb 13 15:37:08.419312 systemd[1]: Created slice kubepods-besteffort-pod5c55c50b_2faa_4835_b198_da38c0417ade.slice - libcontainer container kubepods-besteffort-pod5c55c50b_2faa_4835_b198_da38c0417ade.slice.
Feb 13 15:37:08.510632 kubelet[1741]: I0213 15:37:08.510591    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-xtables-lock\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510632 kubelet[1741]: I0213 15:37:08.510630    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-bpf-maps\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510652    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-etc-cni-netd\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510673    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-cilium-cgroup\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510689    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-host-proc-sys-kernel\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510704    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-cilium-run\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510718    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-hostproc\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510797 kubelet[1741]: I0213 15:37:08.510734    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/50a6729e-e1dc-496e-943f-a3b23b5ffac2-cilium-ipsec-secrets\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510920 kubelet[1741]: I0213 15:37:08.510754    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5c55c50b-2faa-4835-b198-da38c0417ade-cilium-config-path\") pod \"cilium-operator-599987898-jqkgd\" (UID: \"5c55c50b-2faa-4835-b198-da38c0417ade\") " pod="kube-system/cilium-operator-599987898-jqkgd"
Feb 13 15:37:08.510920 kubelet[1741]: I0213 15:37:08.510773    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zfrg\" (UniqueName: \"kubernetes.io/projected/5c55c50b-2faa-4835-b198-da38c0417ade-kube-api-access-8zfrg\") pod \"cilium-operator-599987898-jqkgd\" (UID: \"5c55c50b-2faa-4835-b198-da38c0417ade\") " pod="kube-system/cilium-operator-599987898-jqkgd"
Feb 13 15:37:08.510920 kubelet[1741]: I0213 15:37:08.510788    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-lib-modules\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510920 kubelet[1741]: I0213 15:37:08.510802    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50a6729e-e1dc-496e-943f-a3b23b5ffac2-clustermesh-secrets\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.510920 kubelet[1741]: I0213 15:37:08.510818    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-host-proc-sys-net\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.511056 kubelet[1741]: I0213 15:37:08.510832    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50a6729e-e1dc-496e-943f-a3b23b5ffac2-hubble-tls\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.511056 kubelet[1741]: I0213 15:37:08.510850    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5pjq\" (UniqueName: \"kubernetes.io/projected/50a6729e-e1dc-496e-943f-a3b23b5ffac2-kube-api-access-v5pjq\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.511056 kubelet[1741]: I0213 15:37:08.510865    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50a6729e-e1dc-496e-943f-a3b23b5ffac2-cni-path\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.511056 kubelet[1741]: I0213 15:37:08.510881    1741 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50a6729e-e1dc-496e-943f-a3b23b5ffac2-cilium-config-path\") pod \"cilium-h9tm2\" (UID: \"50a6729e-e1dc-496e-943f-a3b23b5ffac2\") " pod="kube-system/cilium-h9tm2"
Feb 13 15:37:08.717405 kubelet[1741]: E0213 15:37:08.717261    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:08.718355 containerd[1433]: time="2025-02-13T15:37:08.718304333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9tm2,Uid:50a6729e-e1dc-496e-943f-a3b23b5ffac2,Namespace:kube-system,Attempt:0,}"
Feb 13 15:37:08.721430 kubelet[1741]: E0213 15:37:08.721402    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:08.721824 containerd[1433]: time="2025-02-13T15:37:08.721791685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jqkgd,Uid:5c55c50b-2faa-4835-b198-da38c0417ade,Namespace:kube-system,Attempt:0,}"
Feb 13 15:37:08.750803 containerd[1433]: time="2025-02-13T15:37:08.750533366Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:08.750803 containerd[1433]: time="2025-02-13T15:37:08.750590766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:08.750803 containerd[1433]: time="2025-02-13T15:37:08.750605925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:08.750803 containerd[1433]: time="2025-02-13T15:37:08.750682084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:08.752590 containerd[1433]: time="2025-02-13T15:37:08.752350941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:08.752590 containerd[1433]: time="2025-02-13T15:37:08.752410620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:08.752590 containerd[1433]: time="2025-02-13T15:37:08.752423700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:08.752590 containerd[1433]: time="2025-02-13T15:37:08.752564898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:08.770210 systemd[1]: Started cri-containerd-4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30.scope - libcontainer container 4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30.
Feb 13 15:37:08.772954 systemd[1]: Started cri-containerd-fc8cda6ae9381513e984f9ff08dd91e3f0afe30502856c338cb9ae23eecd72e1.scope - libcontainer container fc8cda6ae9381513e984f9ff08dd91e3f0afe30502856c338cb9ae23eecd72e1.
Feb 13 15:37:08.792338 containerd[1433]: time="2025-02-13T15:37:08.792293547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-h9tm2,Uid:50a6729e-e1dc-496e-943f-a3b23b5ffac2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\""
Feb 13 15:37:08.792971 kubelet[1741]: E0213 15:37:08.792928    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:08.794827 containerd[1433]: time="2025-02-13T15:37:08.794796633Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:37:08.805432 containerd[1433]: time="2025-02-13T15:37:08.805362206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-jqkgd,Uid:5c55c50b-2faa-4835-b198-da38c0417ade,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8cda6ae9381513e984f9ff08dd91e3f0afe30502856c338cb9ae23eecd72e1\""
Feb 13 15:37:08.806256 kubelet[1741]: E0213 15:37:08.806163    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:08.807044 containerd[1433]: time="2025-02-13T15:37:08.807001543Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:37:08.807720 containerd[1433]: time="2025-02-13T15:37:08.807681054Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f\""
Feb 13 15:37:08.808093 containerd[1433]: time="2025-02-13T15:37:08.808070609Z" level=info msg="StartContainer for \"d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f\""
Feb 13 15:37:08.831178 systemd[1]: Started cri-containerd-d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f.scope - libcontainer container d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f.
Feb 13 15:37:08.851261 containerd[1433]: time="2025-02-13T15:37:08.851212090Z" level=info msg="StartContainer for \"d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f\" returns successfully"
Feb 13 15:37:08.906894 systemd[1]: cri-containerd-d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f.scope: Deactivated successfully.
Feb 13 15:37:08.932588 containerd[1433]: time="2025-02-13T15:37:08.932515203Z" level=info msg="shim disconnected" id=d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f namespace=k8s.io
Feb 13 15:37:08.932588 containerd[1433]: time="2025-02-13T15:37:08.932577242Z" level=warning msg="cleaning up after shim disconnected" id=d2b6f1d6f817c73c3e5ec9f70520ea90fee04f8f25d94161d60b8728ef1d683f namespace=k8s.io
Feb 13 15:37:08.932588 containerd[1433]: time="2025-02-13T15:37:08.932585802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:09.075063 kubelet[1741]: I0213 15:37:09.074967    1741 setters.go:580] "Node became not ready" node="10.0.0.105" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:37:09Z","lastTransitionTime":"2025-02-13T15:37:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:37:09.118454 kubelet[1741]: E0213 15:37:09.118409    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:09.338499 kubelet[1741]: E0213 15:37:09.338358    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:09.340308 containerd[1433]: time="2025-02-13T15:37:09.340271788Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:37:09.349089 containerd[1433]: time="2025-02-13T15:37:09.349038149Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463\""
Feb 13 15:37:09.349818 containerd[1433]: time="2025-02-13T15:37:09.349763820Z" level=info msg="StartContainer for \"605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463\""
Feb 13 15:37:09.379186 systemd[1]: Started cri-containerd-605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463.scope - libcontainer container 605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463.
Feb 13 15:37:09.404277 containerd[1433]: time="2025-02-13T15:37:09.402529626Z" level=info msg="StartContainer for \"605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463\" returns successfully"
Feb 13 15:37:09.427622 systemd[1]: cri-containerd-605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463.scope: Deactivated successfully.
Feb 13 15:37:09.457433 containerd[1433]: time="2025-02-13T15:37:09.457275567Z" level=info msg="shim disconnected" id=605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463 namespace=k8s.io
Feb 13 15:37:09.457433 containerd[1433]: time="2025-02-13T15:37:09.457330566Z" level=warning msg="cleaning up after shim disconnected" id=605d15cedf7d483c31cb5b38db21f710a20cc6435994139589fdf0dd2e142463 namespace=k8s.io
Feb 13 15:37:09.457433 containerd[1433]: time="2025-02-13T15:37:09.457338806Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:10.119001 kubelet[1741]: E0213 15:37:10.118954    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:10.341479 kubelet[1741]: E0213 15:37:10.341373    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:10.343333 containerd[1433]: time="2025-02-13T15:37:10.343299665Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:37:10.354898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095653191.mount: Deactivated successfully.
Feb 13 15:37:10.356769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount284986590.mount: Deactivated successfully.
Feb 13 15:37:10.361047 containerd[1433]: time="2025-02-13T15:37:10.360945593Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2\""
Feb 13 15:37:10.361486 containerd[1433]: time="2025-02-13T15:37:10.361447706Z" level=info msg="StartContainer for \"498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2\""
Feb 13 15:37:10.407173 systemd[1]: Started cri-containerd-498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2.scope - libcontainer container 498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2.
Feb 13 15:37:10.429041 systemd[1]: cri-containerd-498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2.scope: Deactivated successfully.
Feb 13 15:37:10.429687 containerd[1433]: time="2025-02-13T15:37:10.429248172Z" level=info msg="StartContainer for \"498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2\" returns successfully"
Feb 13 15:37:10.452606 containerd[1433]: time="2025-02-13T15:37:10.452498546Z" level=info msg="shim disconnected" id=498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2 namespace=k8s.io
Feb 13 15:37:10.452606 containerd[1433]: time="2025-02-13T15:37:10.452552345Z" level=warning msg="cleaning up after shim disconnected" id=498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2 namespace=k8s.io
Feb 13 15:37:10.452606 containerd[1433]: time="2025-02-13T15:37:10.452562305Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:10.615267 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-498deb8fe1faad037793a83a3f9ce43ab634b8aadfbdddebd1fa92a1656561d2-rootfs.mount: Deactivated successfully.
Feb 13 15:37:10.711932 containerd[1433]: time="2025-02-13T15:37:10.711821687Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:10.712919 containerd[1433]: time="2025-02-13T15:37:10.712877113Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:37:10.713767 containerd[1433]: time="2025-02-13T15:37:10.713604824Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:37:10.715583 containerd[1433]: time="2025-02-13T15:37:10.715471479Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.908420096s"
Feb 13 15:37:10.715583 containerd[1433]: time="2025-02-13T15:37:10.715504639Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:37:10.717604 containerd[1433]: time="2025-02-13T15:37:10.717578531Z" level=info msg="CreateContainer within sandbox \"fc8cda6ae9381513e984f9ff08dd91e3f0afe30502856c338cb9ae23eecd72e1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:37:10.729465 containerd[1433]: time="2025-02-13T15:37:10.729415095Z" level=info msg="CreateContainer within sandbox \"fc8cda6ae9381513e984f9ff08dd91e3f0afe30502856c338cb9ae23eecd72e1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d\""
Feb 13 15:37:10.729907 containerd[1433]: time="2025-02-13T15:37:10.729853529Z" level=info msg="StartContainer for \"8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d\""
Feb 13 15:37:10.761192 systemd[1]: Started cri-containerd-8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d.scope - libcontainer container 8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d.
Feb 13 15:37:10.786340 containerd[1433]: time="2025-02-13T15:37:10.786297305Z" level=info msg="StartContainer for \"8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d\" returns successfully"
Feb 13 15:37:11.119586 kubelet[1741]: E0213 15:37:11.119532    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:11.345722 kubelet[1741]: E0213 15:37:11.345582    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:11.347882 containerd[1433]: time="2025-02-13T15:37:11.347847169Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:37:11.348702 kubelet[1741]: E0213 15:37:11.348683    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:11.360408 containerd[1433]: time="2025-02-13T15:37:11.360356568Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387\""
Feb 13 15:37:11.360853 containerd[1433]: time="2025-02-13T15:37:11.360770243Z" level=info msg="StartContainer for \"7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387\""
Feb 13 15:37:11.378612 kubelet[1741]: I0213 15:37:11.377782    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-jqkgd" podStartSLOduration=1.468359581 podStartE2EDuration="3.377764144s" podCreationTimestamp="2025-02-13 15:37:08 +0000 UTC" firstStartedPulling="2025-02-13 15:37:08.806759027 +0000 UTC m=+61.629516369" lastFinishedPulling="2025-02-13 15:37:10.71616355 +0000 UTC m=+63.538920932" observedRunningTime="2025-02-13 15:37:11.376730597 +0000 UTC m=+64.199487939" watchObservedRunningTime="2025-02-13 15:37:11.377764144 +0000 UTC m=+64.200521526"
Feb 13 15:37:11.400213 systemd[1]: Started cri-containerd-7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387.scope - libcontainer container 7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387.
Feb 13 15:37:11.427252 systemd[1]: cri-containerd-7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387.scope: Deactivated successfully.
Feb 13 15:37:11.430729 containerd[1433]: time="2025-02-13T15:37:11.430506145Z" level=info msg="StartContainer for \"7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387\" returns successfully"
Feb 13 15:37:11.520926 containerd[1433]: time="2025-02-13T15:37:11.520714024Z" level=info msg="shim disconnected" id=7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387 namespace=k8s.io
Feb 13 15:37:11.520926 containerd[1433]: time="2025-02-13T15:37:11.520767183Z" level=warning msg="cleaning up after shim disconnected" id=7449157ed8bb43df691bea66aff3302d23fc6702d0bd98e83e0eeae1754e3387 namespace=k8s.io
Feb 13 15:37:11.520926 containerd[1433]: time="2025-02-13T15:37:11.520778503Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:11.615339 systemd[1]: run-containerd-runc-k8s.io-8d83e2cde2af4ce8089980970fd4c3e5478445a51a470271526ed9add6b93a0d-runc.OIJtD3.mount: Deactivated successfully.
Feb 13 15:37:12.120091 kubelet[1741]: E0213 15:37:12.120040    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:12.354936 kubelet[1741]: E0213 15:37:12.354896    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:12.354936 kubelet[1741]: E0213 15:37:12.354937    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:12.357513 containerd[1433]: time="2025-02-13T15:37:12.357470513Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:37:12.385661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1289491797.mount: Deactivated successfully.
Feb 13 15:37:12.386840 containerd[1433]: time="2025-02-13T15:37:12.386799944Z" level=info msg="CreateContainer within sandbox \"4263bbfc13b0a7d0151461cad5dcd008cb0108288a2a1dca2171380eb85cfa30\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9\""
Feb 13 15:37:12.387617 containerd[1433]: time="2025-02-13T15:37:12.387587094Z" level=info msg="StartContainer for \"c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9\""
Feb 13 15:37:12.422232 systemd[1]: Started cri-containerd-c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9.scope - libcontainer container c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9.
Feb 13 15:37:12.452506 containerd[1433]: time="2025-02-13T15:37:12.452455878Z" level=info msg="StartContainer for \"c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9\" returns successfully"
Feb 13 15:37:12.724052 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:37:13.120624 kubelet[1741]: E0213 15:37:13.120577    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:13.359392 kubelet[1741]: E0213 15:37:13.359353    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:14.121797 kubelet[1741]: E0213 15:37:14.121753    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:14.718374 kubelet[1741]: E0213 15:37:14.718331    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:15.122421 kubelet[1741]: E0213 15:37:15.122382    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:15.197381 kubelet[1741]: E0213 15:37:15.197344    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:15.541866 systemd-networkd[1378]: lxc_health: Link UP
Feb 13 15:37:15.548141 systemd-networkd[1378]: lxc_health: Gained carrier
Feb 13 15:37:16.123392 kubelet[1741]: E0213 15:37:16.123333    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:16.719644 kubelet[1741]: E0213 15:37:16.719614    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:16.736381 kubelet[1741]: I0213 15:37:16.736327    1741 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-h9tm2" podStartSLOduration=8.736308721 podStartE2EDuration="8.736308721s" podCreationTimestamp="2025-02-13 15:37:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:37:13.374991091 +0000 UTC m=+66.197748473" watchObservedRunningTime="2025-02-13 15:37:16.736308721 +0000 UTC m=+69.559066103"
Feb 13 15:37:17.124228 kubelet[1741]: E0213 15:37:17.124190    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:17.370110 kubelet[1741]: E0213 15:37:17.370003    1741 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.596283 systemd-networkd[1378]: lxc_health: Gained IPv6LL
Feb 13 15:37:18.124423 kubelet[1741]: E0213 15:37:18.124356    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:19.125047 kubelet[1741]: E0213 15:37:19.124997    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:19.202892 systemd[1]: run-containerd-runc-k8s.io-c4cbb294efa1abcdbb995ef8b55e1add4cd4a296e2df9f07dc3b68aa909ba3d9-runc.OzNWKN.mount: Deactivated successfully.
Feb 13 15:37:20.127150 kubelet[1741]: E0213 15:37:20.125333    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:21.125979 kubelet[1741]: E0213 15:37:21.125917    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:22.127078 kubelet[1741]: E0213 15:37:22.127006    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:23.127508 kubelet[1741]: E0213 15:37:23.127459    1741 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"