Feb 13 15:36:59.893487 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 13 15:36:59.893508 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 13:57:00 -00 2025
Feb 13 15:36:59.893518 kernel: KASLR enabled
Feb 13 15:36:59.893524 kernel: efi: EFI v2.7 by EDK II
Feb 13 15:36:59.893529 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 
Feb 13 15:36:59.893535 kernel: random: crng init done
Feb 13 15:36:59.893542 kernel: secureboot: Secure boot disabled
Feb 13 15:36:59.893548 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:36:59.893553 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Feb 13 15:36:59.893561 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Feb 13 15:36:59.893567 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893573 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893579 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893585 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893592 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893600 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893606 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893612 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893619 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Feb 13 15:36:59.893625 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Feb 13 15:36:59.893631 kernel: NUMA: Failed to initialise from firmware
Feb 13 15:36:59.893637 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:59.893644 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Feb 13 15:36:59.893650 kernel: Zone ranges:
Feb 13 15:36:59.893656 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:59.893663 kernel:   DMA32    empty
Feb 13 15:36:59.893669 kernel:   Normal   empty
Feb 13 15:36:59.893675 kernel: Movable zone start for each node
Feb 13 15:36:59.893681 kernel: Early memory node ranges
Feb 13 15:36:59.893688 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Feb 13 15:36:59.893694 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Feb 13 15:36:59.893701 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Feb 13 15:36:59.893707 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Feb 13 15:36:59.893713 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Feb 13 15:36:59.893719 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Feb 13 15:36:59.893725 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Feb 13 15:36:59.893731 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Feb 13 15:36:59.893739 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Feb 13 15:36:59.893745 kernel: psci: probing for conduit method from ACPI.
Feb 13 15:36:59.893751 kernel: psci: PSCIv1.1 detected in firmware.
Feb 13 15:36:59.893760 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 15:36:59.893774 kernel: psci: Trusted OS migration not required
Feb 13 15:36:59.893781 kernel: psci: SMC Calling Convention v1.1
Feb 13 15:36:59.893790 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Feb 13 15:36:59.893796 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 15:36:59.893803 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 15:36:59.893809 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Feb 13 15:36:59.893816 kernel: Detected PIPT I-cache on CPU0
Feb 13 15:36:59.893822 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 15:36:59.893829 kernel: CPU features: detected: Hardware dirty bit management
Feb 13 15:36:59.893836 kernel: CPU features: detected: Spectre-v4
Feb 13 15:36:59.893842 kernel: CPU features: detected: Spectre-BHB
Feb 13 15:36:59.893849 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 13 15:36:59.893856 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 13 15:36:59.893863 kernel: CPU features: detected: ARM erratum 1418040
Feb 13 15:36:59.893870 kernel: CPU features: detected: SSBS not fully self-synchronizing
Feb 13 15:36:59.893876 kernel: alternatives: applying boot alternatives
Feb 13 15:36:59.893884 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:36:59.893891 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:36:59.893897 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 15:36:59.893904 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:36:59.893911 kernel: Fallback order for Node 0: 0 
Feb 13 15:36:59.893917 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Feb 13 15:36:59.893924 kernel: Policy zone: DMA
Feb 13 15:36:59.893931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:36:59.893938 kernel: software IO TLB: area num 4.
Feb 13 15:36:59.893945 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Feb 13 15:36:59.893952 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Feb 13 15:36:59.893958 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 15:36:59.893965 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:36:59.893972 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:36:59.893979 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 15:36:59.893986 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:36:59.893992 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:36:59.893999 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:36:59.894005 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 15:36:59.894013 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 15:36:59.894020 kernel: GICv3: 256 SPIs implemented
Feb 13 15:36:59.894026 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 15:36:59.894033 kernel: Root IRQ handler: gic_handle_irq
Feb 13 15:36:59.894039 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Feb 13 15:36:59.894046 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Feb 13 15:36:59.894053 kernel: ITS [mem 0x08080000-0x0809ffff]
Feb 13 15:36:59.894059 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 15:36:59.894066 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 15:36:59.894082 kernel: GICv3: using LPI property table @0x00000000400f0000
Feb 13 15:36:59.894093 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Feb 13 15:36:59.894104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:36:59.894111 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:59.894117 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 13 15:36:59.894124 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 13 15:36:59.894131 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 13 15:36:59.894138 kernel: arm-pv: using stolen time PV
Feb 13 15:36:59.894144 kernel: Console: colour dummy device 80x25
Feb 13 15:36:59.894151 kernel: ACPI: Core revision 20230628
Feb 13 15:36:59.894158 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 13 15:36:59.894165 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:36:59.894174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:36:59.894180 kernel: landlock: Up and running.
Feb 13 15:36:59.894187 kernel: SELinux:  Initializing.
Feb 13 15:36:59.894194 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:36:59.894201 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 15:36:59.894207 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:36:59.894215 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 15:36:59.894221 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:36:59.894228 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:36:59.894236 kernel: Platform MSI: ITS@0x8080000 domain created
Feb 13 15:36:59.894243 kernel: PCI/MSI: ITS@0x8080000 domain created
Feb 13 15:36:59.894250 kernel: Remapping and enabling EFI services.
Feb 13 15:36:59.894257 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:36:59.894263 kernel: Detected PIPT I-cache on CPU1
Feb 13 15:36:59.894270 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Feb 13 15:36:59.894277 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Feb 13 15:36:59.894284 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:59.894290 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 13 15:36:59.894297 kernel: Detected PIPT I-cache on CPU2
Feb 13 15:36:59.894305 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Feb 13 15:36:59.894314 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Feb 13 15:36:59.894330 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:59.894338 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Feb 13 15:36:59.894345 kernel: Detected PIPT I-cache on CPU3
Feb 13 15:36:59.894352 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Feb 13 15:36:59.894359 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Feb 13 15:36:59.894366 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 13 15:36:59.894373 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Feb 13 15:36:59.894382 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 15:36:59.894389 kernel: SMP: Total of 4 processors activated.
Feb 13 15:36:59.894396 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 15:36:59.894403 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 13 15:36:59.894410 kernel: CPU features: detected: Common not Private translations
Feb 13 15:36:59.894417 kernel: CPU features: detected: CRC32 instructions
Feb 13 15:36:59.894424 kernel: CPU features: detected: Enhanced Virtualization Traps
Feb 13 15:36:59.894431 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 13 15:36:59.894440 kernel: CPU features: detected: LSE atomic instructions
Feb 13 15:36:59.894447 kernel: CPU features: detected: Privileged Access Never
Feb 13 15:36:59.894454 kernel: CPU features: detected: RAS Extension Support
Feb 13 15:36:59.894462 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Feb 13 15:36:59.894469 kernel: CPU: All CPU(s) started at EL1
Feb 13 15:36:59.894476 kernel: alternatives: applying system-wide alternatives
Feb 13 15:36:59.894483 kernel: devtmpfs: initialized
Feb 13 15:36:59.894491 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:36:59.894498 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 15:36:59.894506 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:36:59.894513 kernel: SMBIOS 3.0.0 present.
Feb 13 15:36:59.894520 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Feb 13 15:36:59.894528 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:36:59.894535 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 15:36:59.894542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 15:36:59.894550 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 15:36:59.894557 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:36:59.894564 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Feb 13 15:36:59.894572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:36:59.894580 kernel: cpuidle: using governor menu
Feb 13 15:36:59.894587 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 15:36:59.894594 kernel: ASID allocator initialised with 32768 entries
Feb 13 15:36:59.894601 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:36:59.894608 kernel: Serial: AMBA PL011 UART driver
Feb 13 15:36:59.894615 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Feb 13 15:36:59.894622 kernel: Modules: 0 pages in range for non-PLT usage
Feb 13 15:36:59.894629 kernel: Modules: 508960 pages in range for PLT usage
Feb 13 15:36:59.894638 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:36:59.894645 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:36:59.894653 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 15:36:59.894660 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 15:36:59.894667 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:36:59.894674 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:36:59.894681 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 15:36:59.894689 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 15:36:59.894695 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:36:59.894704 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:36:59.894711 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:36:59.894718 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:36:59.894725 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 15:36:59.894732 kernel: ACPI: Interpreter enabled
Feb 13 15:36:59.894739 kernel: ACPI: Using GIC for interrupt routing
Feb 13 15:36:59.894746 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 15:36:59.894754 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Feb 13 15:36:59.894764 kernel: printk: console [ttyAMA0] enabled
Feb 13 15:36:59.894774 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:36:59.894930 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:36:59.895004 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 15:36:59.895066 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 15:36:59.895153 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Feb 13 15:36:59.895215 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Feb 13 15:36:59.895224 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Feb 13 15:36:59.895235 kernel: PCI host bridge to bus 0000:00
Feb 13 15:36:59.895307 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Feb 13 15:36:59.895364 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 15:36:59.895422 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Feb 13 15:36:59.895482 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:36:59.895562 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Feb 13 15:36:59.895637 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 15:36:59.895706 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Feb 13 15:36:59.895781 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Feb 13 15:36:59.895851 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:36:59.895932 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Feb 13 15:36:59.895998 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Feb 13 15:36:59.896064 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Feb 13 15:36:59.896136 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Feb 13 15:36:59.896196 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 15:36:59.896253 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Feb 13 15:36:59.896262 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 15:36:59.896270 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 15:36:59.896277 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 15:36:59.896284 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 15:36:59.896291 kernel: iommu: Default domain type: Translated
Feb 13 15:36:59.896299 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 15:36:59.896308 kernel: efivars: Registered efivars operations
Feb 13 15:36:59.896315 kernel: vgaarb: loaded
Feb 13 15:36:59.896322 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 15:36:59.896329 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:36:59.896336 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:36:59.896343 kernel: pnp: PnP ACPI init
Feb 13 15:36:59.896417 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Feb 13 15:36:59.896428 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 15:36:59.896438 kernel: NET: Registered PF_INET protocol family
Feb 13 15:36:59.896445 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 15:36:59.896453 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 15:36:59.896460 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:36:59.896467 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:36:59.896475 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 15:36:59.896482 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 15:36:59.896489 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:36:59.896497 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 15:36:59.896505 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:36:59.896512 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:36:59.896519 kernel: kvm [1]: HYP mode not available
Feb 13 15:36:59.896526 kernel: Initialise system trusted keyrings
Feb 13 15:36:59.896533 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 15:36:59.896541 kernel: Key type asymmetric registered
Feb 13 15:36:59.896547 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:36:59.896555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 15:36:59.896562 kernel: io scheduler mq-deadline registered
Feb 13 15:36:59.896570 kernel: io scheduler kyber registered
Feb 13 15:36:59.896578 kernel: io scheduler bfq registered
Feb 13 15:36:59.896586 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 15:36:59.896593 kernel: ACPI: button: Power Button [PWRB]
Feb 13 15:36:59.896601 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 15:36:59.896666 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Feb 13 15:36:59.896676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:36:59.896683 kernel: thunder_xcv, ver 1.0
Feb 13 15:36:59.896690 kernel: thunder_bgx, ver 1.0
Feb 13 15:36:59.896699 kernel: nicpf, ver 1.0
Feb 13 15:36:59.896706 kernel: nicvf, ver 1.0
Feb 13 15:36:59.896790 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 15:36:59.896856 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T15:36:59 UTC (1739461019)
Feb 13 15:36:59.896866 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 15:36:59.896873 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Feb 13 15:36:59.896881 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 15:36:59.896888 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 15:36:59.896897 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:36:59.896904 kernel: Segment Routing with IPv6
Feb 13 15:36:59.896911 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:36:59.896918 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:36:59.896926 kernel: Key type dns_resolver registered
Feb 13 15:36:59.896933 kernel: registered taskstats version 1
Feb 13 15:36:59.896940 kernel: Loading compiled-in X.509 certificates
Feb 13 15:36:59.896948 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 4531cdb19689f90a81e7969ac7d8e25a95254f51'
Feb 13 15:36:59.896955 kernel: Key type .fscrypt registered
Feb 13 15:36:59.896964 kernel: Key type fscrypt-provisioning registered
Feb 13 15:36:59.896971 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:36:59.896979 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:36:59.896986 kernel: ima: No architecture policies found
Feb 13 15:36:59.896994 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 15:36:59.897001 kernel: clk: Disabling unused clocks
Feb 13 15:36:59.897008 kernel: Freeing unused kernel memory: 39680K
Feb 13 15:36:59.897016 kernel: Run /init as init process
Feb 13 15:36:59.897023 kernel:   with arguments:
Feb 13 15:36:59.897032 kernel:     /init
Feb 13 15:36:59.897039 kernel:   with environment:
Feb 13 15:36:59.897047 kernel:     HOME=/
Feb 13 15:36:59.897054 kernel:     TERM=linux
Feb 13 15:36:59.897061 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:36:59.897165 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:36:59.897177 systemd[1]: Detected virtualization kvm.
Feb 13 15:36:59.897185 systemd[1]: Detected architecture arm64.
Feb 13 15:36:59.897195 systemd[1]: Running in initrd.
Feb 13 15:36:59.897203 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:36:59.897211 systemd[1]: Hostname set to <localhost>.
Feb 13 15:36:59.897219 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:36:59.897226 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:36:59.897234 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:36:59.897256 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:36:59.897264 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:36:59.897275 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:36:59.897283 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:36:59.897291 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:36:59.897301 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:36:59.897309 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:36:59.897317 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:36:59.897326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:36:59.897334 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:36:59.897342 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:36:59.897350 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:36:59.897358 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:36:59.897366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:36:59.897374 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:36:59.897382 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:36:59.897391 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 15:36:59.897401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:36:59.897409 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:36:59.897417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:36:59.897425 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:36:59.897433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:36:59.897443 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:36:59.897456 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:36:59.897465 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:36:59.897473 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:36:59.897486 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:36:59.897495 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:36:59.897503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:36:59.897512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:36:59.897520 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:36:59.897528 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:36:59.897538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:36:59.897569 systemd-journald[239]: Collecting audit messages is disabled.
Feb 13 15:36:59.897591 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:36:59.897600 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:36:59.897609 systemd-journald[239]: Journal started
Feb 13 15:36:59.897628 systemd-journald[239]: Runtime Journal (/run/log/journal/6a147e74707940a0ad1a8bb471933f46) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:36:59.884290 systemd-modules-load[240]: Inserted module 'overlay'
Feb 13 15:36:59.904164 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:36:59.904218 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:36:59.906658 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:36:59.907170 systemd-modules-load[240]: Inserted module 'br_netfilter'
Feb 13 15:36:59.908122 kernel: Bridge firewalling registered
Feb 13 15:36:59.908491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:36:59.911185 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:36:59.912627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:36:59.917936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:36:59.922361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:36:59.923579 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:36:59.925331 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:36:59.935215 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:36:59.937256 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:36:59.946806 dracut-cmdline[277]: dracut-dracut-053
Feb 13 15:36:59.949220 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=07e9b8867aadd0b2e77ba5338d18cdd10706c658e0d745a78e129bcae9a0e4c6
Feb 13 15:36:59.963784 systemd-resolved[279]: Positive Trust Anchors:
Feb 13 15:36:59.963861 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:36:59.963892 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:36:59.968653 systemd-resolved[279]: Defaulting to hostname 'linux'.
Feb 13 15:36:59.970153 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:36:59.971016 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:00.024100 kernel: SCSI subsystem initialized
Feb 13 15:37:00.029093 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:37:00.036096 kernel: iscsi: registered transport (tcp)
Feb 13 15:37:00.051101 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:37:00.051117 kernel: QLogic iSCSI HBA Driver
Feb 13 15:37:00.093643 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:37:00.102259 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:37:00.119458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:37:00.119519 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:37:00.119558 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:37:00.168125 kernel: raid6: neonx8   gen() 15744 MB/s
Feb 13 15:37:00.185097 kernel: raid6: neonx4   gen() 15511 MB/s
Feb 13 15:37:00.202088 kernel: raid6: neonx2   gen() 13132 MB/s
Feb 13 15:37:00.219094 kernel: raid6: neonx1   gen() 10435 MB/s
Feb 13 15:37:00.236088 kernel: raid6: int64x8  gen()  6927 MB/s
Feb 13 15:37:00.253085 kernel: raid6: int64x4  gen()  7322 MB/s
Feb 13 15:37:00.270086 kernel: raid6: int64x2  gen()  6117 MB/s
Feb 13 15:37:00.287085 kernel: raid6: int64x1  gen()  5058 MB/s
Feb 13 15:37:00.287098 kernel: raid6: using algorithm neonx8 gen() 15744 MB/s
Feb 13 15:37:00.304093 kernel: raid6: .... xor() 11895 MB/s, rmw enabled
Feb 13 15:37:00.304109 kernel: raid6: using neon recovery algorithm
Feb 13 15:37:00.309242 kernel: xor: measuring software checksum speed
Feb 13 15:37:00.309258 kernel:    8regs           : 19331 MB/sec
Feb 13 15:37:00.310285 kernel:    32regs          : 19688 MB/sec
Feb 13 15:37:00.310299 kernel:    arm64_neon      : 26963 MB/sec
Feb 13 15:37:00.310323 kernel: xor: using function: arm64_neon (26963 MB/sec)
Feb 13 15:37:00.365093 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:37:00.378413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:37:00.394292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:00.407251 systemd-udevd[463]: Using default interface naming scheme 'v255'.
Feb 13 15:37:00.410347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:00.416209 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:37:00.429085 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Feb 13 15:37:00.456915 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:37:00.468372 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:37:00.513492 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:00.521257 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:37:00.534016 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:37:00.537114 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:37:00.538004 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:00.539714 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:37:00.545224 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:37:00.556931 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:37:00.569418 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:37:00.569541 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:00.571971 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:37:00.573895 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Feb 13 15:37:00.575066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:37:00.575208 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:00.577664 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:00.579937 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 15:37:00.580055 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:37:00.580128 kernel: GPT:9289727 != 19775487
Feb 13 15:37:00.580140 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:37:00.580149 kernel: GPT:9289727 != 19775487
Feb 13 15:37:00.580159 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:37:00.580169 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:37:00.592103 kernel: BTRFS: device fsid 27ad543d-6fdb-4ace-b8f1-8f50b124bd06 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (506)
Feb 13 15:37:00.592146 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (517)
Feb 13 15:37:00.595367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:00.607601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:00.616773 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 15:37:00.621417 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 15:37:00.625892 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:37:00.629696 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 15:37:00.630771 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 15:37:00.651242 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:37:00.652932 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:37:00.656614 disk-uuid[552]: Primary Header is updated.
Feb 13 15:37:00.656614 disk-uuid[552]: Secondary Entries is updated.
Feb 13 15:37:00.656614 disk-uuid[552]: Secondary Header is updated.
Feb 13 15:37:00.660110 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:37:00.682107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:01.670215 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 15:37:01.671255 disk-uuid[553]: The operation has completed successfully.
Feb 13 15:37:01.692798 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:37:01.692925 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:37:01.715228 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:37:01.718200 sh[572]: Success
Feb 13 15:37:01.735138 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 15:37:01.763669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:37:01.775524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:37:01.777837 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:37:01.786521 kernel: BTRFS info (device dm-0): first mount of filesystem 27ad543d-6fdb-4ace-b8f1-8f50b124bd06
Feb 13 15:37:01.786558 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:37:01.786569 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:37:01.787320 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:37:01.788338 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:37:01.792426 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:37:01.793548 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:37:01.804226 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:37:01.805558 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:37:01.814154 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:37:01.814198 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:37:01.815082 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:37:01.817091 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:37:01.824778 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:37:01.825571 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:37:01.831360 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:37:01.838245 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:37:01.908030 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:37:01.916230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:37:01.939893 systemd-networkd[766]: lo: Link UP
Feb 13 15:37:01.939906 systemd-networkd[766]: lo: Gained carrier
Feb 13 15:37:01.940677 systemd-networkd[766]: Enumeration completed
Feb 13 15:37:01.940750 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:37:01.941134 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:01.941137 systemd-networkd[766]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:37:01.942020 systemd-networkd[766]: eth0: Link UP
Feb 13 15:37:01.942023 systemd-networkd[766]: eth0: Gained carrier
Feb 13 15:37:01.942029 systemd-networkd[766]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:01.942268 systemd[1]: Reached target network.target - Network.
Feb 13 15:37:01.946700 ignition[663]: Ignition 2.20.0
Feb 13 15:37:01.946706 ignition[663]: Stage: fetch-offline
Feb 13 15:37:01.946739 ignition[663]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:01.946746 ignition[663]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:01.946924 ignition[663]: parsed url from cmdline: ""
Feb 13 15:37:01.946927 ignition[663]: no config URL provided
Feb 13 15:37:01.946932 ignition[663]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:37:01.946938 ignition[663]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:37:01.946962 ignition[663]: op(1): [started]  loading QEMU firmware config module
Feb 13 15:37:01.946966 ignition[663]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 15:37:01.958886 ignition[663]: op(1): [finished] loading QEMU firmware config module
Feb 13 15:37:01.960122 systemd-networkd[766]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:37:01.966330 ignition[663]: parsing config with SHA512: a17abd8f11bad3a13b41a5992cc7fda460143ba28b64acadd44d5be50fb54058ce81c5ebabef2b8e90d7447aa1832f7c997a23e709b70610427aec0bb08cea9b
Feb 13 15:37:01.969778 unknown[663]: fetched base config from "system"
Feb 13 15:37:01.969789 unknown[663]: fetched user config from "qemu"
Feb 13 15:37:01.970182 ignition[663]: fetch-offline: fetch-offline passed
Feb 13 15:37:01.970259 ignition[663]: Ignition finished successfully
Feb 13 15:37:01.972546 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:37:01.974189 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 15:37:01.979220 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:37:01.989474 ignition[773]: Ignition 2.20.0
Feb 13 15:37:01.989483 ignition[773]: Stage: kargs
Feb 13 15:37:01.989643 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:01.989652 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:01.990335 ignition[773]: kargs: kargs passed
Feb 13 15:37:01.990374 ignition[773]: Ignition finished successfully
Feb 13 15:37:01.992349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:37:01.994876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:37:02.007807 ignition[782]: Ignition 2.20.0
Feb 13 15:37:02.007818 ignition[782]: Stage: disks
Feb 13 15:37:02.007976 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:02.007986 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:02.008645 ignition[782]: disks: disks passed
Feb 13 15:37:02.008686 ignition[782]: Ignition finished successfully
Feb 13 15:37:02.010510 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:37:02.011934 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:37:02.012949 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:37:02.014335 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:37:02.015441 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:37:02.016737 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:37:02.033284 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:37:02.044261 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:37:02.048651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:37:02.050956 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:37:02.095100 kernel: EXT4-fs (vda9): mounted filesystem b8d8a7c2-9667-48db-9266-035fd118dfdf r/w with ordered data mode. Quota mode: none.
Feb 13 15:37:02.095632 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:37:02.096805 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:37:02.111192 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:37:02.112788 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:37:02.113574 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:37:02.113611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:37:02.113634 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:37:02.119127 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (801)
Feb 13 15:37:02.119306 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:37:02.123539 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:37:02.124002 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:37:02.124022 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:37:02.124032 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:37:02.124042 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:37:02.126083 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:37:02.168800 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:37:02.172763 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:37:02.176952 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:37:02.180045 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:37:02.248012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:37:02.264173 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:37:02.265501 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:37:02.270157 kernel: BTRFS info (device vda6): last unmount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:37:02.286131 ignition[914]: INFO     : Ignition 2.20.0
Feb 13 15:37:02.289093 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:37:02.289163 ignition[914]: INFO     : Stage: mount
Feb 13 15:37:02.289163 ignition[914]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:02.289163 ignition[914]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:02.289163 ignition[914]: INFO     : mount: mount passed
Feb 13 15:37:02.289163 ignition[914]: INFO     : Ignition finished successfully
Feb 13 15:37:02.289971 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:37:02.297184 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:37:02.785934 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:37:02.800273 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:37:02.805082 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (929)
Feb 13 15:37:02.806665 kernel: BTRFS info (device vda6): first mount of filesystem e9f4fc6e-82c5-478d-829e-7273b573b643
Feb 13 15:37:02.806679 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 15:37:02.807168 kernel: BTRFS info (device vda6): using free space tree
Feb 13 15:37:02.809097 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 15:37:02.810168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:37:02.824757 ignition[947]: INFO     : Ignition 2.20.0
Feb 13 15:37:02.824757 ignition[947]: INFO     : Stage: files
Feb 13 15:37:02.825959 ignition[947]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:02.825959 ignition[947]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:02.825959 ignition[947]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:37:02.828590 ignition[947]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:37:02.828590 ignition[947]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:37:02.828590 ignition[947]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:37:02.828590 ignition[947]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:37:02.828657 unknown[947]: wrote ssh authorized keys file for user: core
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:37:02.833132 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Feb 13 15:37:03.105303 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:37:03.338603 ignition[947]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Feb 13 15:37:03.338603 ignition[947]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Feb 13 15:37:03.341263 ignition[947]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:37:03.341263 ignition[947]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 15:37:03.341263 ignition[947]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Feb 13 15:37:03.341263 ignition[947]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Feb 13 15:37:03.359602 ignition[947]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:37:03.362982 ignition[947]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 15:37:03.364156 ignition[947]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 15:37:03.364156 ignition[947]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:37:03.364156 ignition[947]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:37:03.364156 ignition[947]: INFO     : files: files passed
Feb 13 15:37:03.364156 ignition[947]: INFO     : Ignition finished successfully
Feb 13 15:37:03.364765 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:37:03.372198 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:37:03.374232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:37:03.377508 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:37:03.377600 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:37:03.380135 initrd-setup-root-after-ignition[975]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 15:37:03.382624 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:03.382624 initrd-setup-root-after-ignition[977]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:03.385624 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:37:03.386222 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:37:03.387622 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:37:03.400198 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:37:03.417680 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:37:03.417788 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:37:03.419396 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:37:03.420778 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:37:03.422038 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:37:03.422749 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:37:03.437552 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:37:03.444224 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:37:03.448162 systemd-networkd[766]: eth0: Gained IPv6LL
Feb 13 15:37:03.451993 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:03.453043 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:03.454566 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:37:03.455804 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:37:03.455917 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:37:03.457688 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:37:03.459194 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:37:03.460353 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:37:03.461573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:37:03.462934 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:37:03.464351 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:37:03.465666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:37:03.467150 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:37:03.468591 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:37:03.469824 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:37:03.470911 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:37:03.471029 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:37:03.472794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:37:03.474219 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:37:03.475583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:37:03.476954 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:37:03.477906 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:37:03.478014 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:37:03.480016 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:37:03.480142 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:37:03.481578 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:37:03.482739 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:37:03.486115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:37:03.488049 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:37:03.488769 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:37:03.489919 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:37:03.490005 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:37:03.491126 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:37:03.491204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:37:03.492333 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:37:03.492441 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:37:03.493703 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:37:03.493808 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:37:03.510290 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:37:03.510973 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:37:03.511117 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:37:03.513873 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:37:03.514536 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:37:03.514647 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:03.515971 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:37:03.516151 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:37:03.521925 ignition[1001]: INFO     : Ignition 2.20.0
Feb 13 15:37:03.521925 ignition[1001]: INFO     : Stage: umount
Feb 13 15:37:03.523808 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:37:03.523891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:37:03.526357 ignition[1001]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:37:03.526357 ignition[1001]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 15:37:03.526357 ignition[1001]: INFO     : umount: umount passed
Feb 13 15:37:03.526357 ignition[1001]: INFO     : Ignition finished successfully
Feb 13 15:37:03.527599 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:37:03.528188 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:37:03.528275 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:37:03.529912 systemd[1]: Stopped target network.target - Network.
Feb 13 15:37:03.530851 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:37:03.530902 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:37:03.532263 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:37:03.532306 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:37:03.533736 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:37:03.533784 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:37:03.534939 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:37:03.534977 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:37:03.536606 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:37:03.537806 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:37:03.544628 systemd-networkd[766]: eth0: DHCPv6 lease lost
Feb 13 15:37:03.546946 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:37:03.547060 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:37:03.548732 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:37:03.548776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:37:03.562243 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:37:03.562906 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:37:03.562964 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:37:03.564671 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:03.567394 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:37:03.567616 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:37:03.570655 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:37:03.570705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:03.571566 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:37:03.571607 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:37:03.573256 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:37:03.573299 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:37:03.577019 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:37:03.577123 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:37:03.580292 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:37:03.580419 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:37:03.581996 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:37:03.582115 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:37:03.584677 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:37:03.584848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:03.588264 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:37:03.588310 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:37:03.589740 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:37:03.589779 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:37:03.591019 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:37:03.591057 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:37:03.593043 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:37:03.593087 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:37:03.594967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:37:03.595005 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:37:03.604193 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:37:03.604936 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:37:03.604984 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:37:03.606539 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:37:03.606575 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:03.611285 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:37:03.611377 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:37:03.613048 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:37:03.616510 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:37:03.625341 systemd[1]: Switching root.
Feb 13 15:37:03.649905 systemd-journald[239]: Journal stopped
Feb 13 15:37:04.297622 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:37:04.297672 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:37:04.297684 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:37:04.297694 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:37:04.297703 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:37:04.297719 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:37:04.297729 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:37:04.297738 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:37:04.297758 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:37:04.297768 kernel: audit: type=1403 audit(1739461023.770:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:37:04.297778 systemd[1]: Successfully loaded SELinux policy in 31.952ms.
Feb 13 15:37:04.297797 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.146ms.
Feb 13 15:37:04.297808 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 15:37:04.297819 systemd[1]: Detected virtualization kvm.
Feb 13 15:37:04.297831 systemd[1]: Detected architecture arm64.
Feb 13 15:37:04.297841 systemd[1]: Detected first boot.
Feb 13 15:37:04.297857 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:37:04.297868 zram_generator::config[1048]: No configuration found.
Feb 13 15:37:04.297879 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:37:04.297890 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:37:04.297900 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:37:04.297913 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:37:04.297924 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:37:04.297935 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:37:04.297945 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:37:04.297956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:37:04.297966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:37:04.297978 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:37:04.297989 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:37:04.298000 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:37:04.298010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:37:04.298022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:37:04.298032 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:37:04.298042 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:37:04.298053 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:37:04.298063 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:37:04.299966 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Feb 13 15:37:04.299987 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:37:04.300004 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:37:04.300016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:37:04.301138 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:37:04.301167 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:37:04.301178 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:37:04.301189 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:37:04.301205 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:37:04.301216 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:37:04.301226 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:37:04.301237 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:37:04.301247 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:37:04.301259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:37:04.301270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:37:04.301280 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:37:04.301291 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:37:04.301304 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:37:04.301314 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:37:04.301325 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:37:04.301335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:37:04.301346 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:37:04.301358 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:37:04.301369 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:37:04.301379 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:37:04.301391 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:04.301402 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:37:04.301413 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:37:04.301423 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:04.301433 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:37:04.301444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:04.301455 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:37:04.301465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:04.301477 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:37:04.301489 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:37:04.301500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:37:04.301511 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:37:04.301528 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:37:04.301539 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:37:04.301550 kernel: loop: module loaded
Feb 13 15:37:04.301560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:37:04.301571 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:37:04.301581 kernel: fuse: init (API version 7.39)
Feb 13 15:37:04.301593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:37:04.301603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:37:04.301614 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:37:04.301625 systemd[1]: Stopped verity-setup.service.
Feb 13 15:37:04.301635 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:37:04.301646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:37:04.301656 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:37:04.301668 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:37:04.301679 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:37:04.301689 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:37:04.301699 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:37:04.301710 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:37:04.301720 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:37:04.301732 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:04.301749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:04.301760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:04.301799 systemd-journald[1115]: Collecting audit messages is disabled.
Feb 13 15:37:04.301819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:04.301831 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:37:04.301840 kernel: ACPI: bus type drm_connector registered
Feb 13 15:37:04.301850 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:37:04.301863 systemd-journald[1115]: Journal started
Feb 13 15:37:04.301883 systemd-journald[1115]: Runtime Journal (/run/log/journal/6a147e74707940a0ad1a8bb471933f46) is 5.9M, max 47.3M, 41.4M free.
Feb 13 15:37:04.103365 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:37:04.119641 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 15:37:04.119981 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:37:04.304153 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:37:04.304633 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:37:04.304769 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:37:04.305794 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:04.305919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:04.306934 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:37:04.307972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:37:04.309196 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:37:04.320182 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:37:04.326177 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:37:04.327918 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:37:04.328795 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:37:04.328828 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:37:04.330430 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 15:37:04.332191 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:37:04.333879 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:37:04.334753 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:04.338596 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:37:04.340215 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:37:04.341083 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:37:04.343242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:37:04.344125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:37:04.346242 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:37:04.350264 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:37:04.354164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:37:04.356146 systemd-journald[1115]: Time spent on flushing to /var/log/journal/6a147e74707940a0ad1a8bb471933f46 is 19.375ms for 838 entries.
Feb 13 15:37:04.356146 systemd-journald[1115]: System Journal (/var/log/journal/6a147e74707940a0ad1a8bb471933f46) is 8.0M, max 195.6M, 187.6M free.
Feb 13 15:37:04.440690 systemd-journald[1115]: Received client request to flush runtime journal.
Feb 13 15:37:04.440770 kernel: loop0: detected capacity change from 0 to 189592
Feb 13 15:37:04.440801 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:37:04.440818 kernel: loop1: detected capacity change from 0 to 116808
Feb 13 15:37:04.355286 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:37:04.357096 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:37:04.358007 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:37:04.359219 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:37:04.363692 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:37:04.365289 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:37:04.381634 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 15:37:04.393823 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:04.401595 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:37:04.403670 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:37:04.416671 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 15:37:04.419099 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:37:04.421218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:37:04.442736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:37:04.447729 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:37:04.448843 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 15:37:04.458893 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:37:04.458947 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 15:37:04.463091 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:37:04.469345 kernel: loop2: detected capacity change from 0 to 113536
Feb 13 15:37:04.509106 kernel: loop3: detected capacity change from 0 to 189592
Feb 13 15:37:04.514134 kernel: loop4: detected capacity change from 0 to 116808
Feb 13 15:37:04.519258 kernel: loop5: detected capacity change from 0 to 113536
Feb 13 15:37:04.522049 (sd-merge)[1183]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 15:37:04.522932 (sd-merge)[1183]: Merged extensions into '/usr'.
Feb 13 15:37:04.527149 systemd[1]: Reloading requested from client PID 1158 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:37:04.527164 systemd[1]: Reloading...
Feb 13 15:37:04.589100 zram_generator::config[1207]: No configuration found.
Feb 13 15:37:04.644134 ldconfig[1153]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:37:04.688779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:37:04.723653 systemd[1]: Reloading finished in 196 ms.
Feb 13 15:37:04.752356 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:37:04.753440 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:37:04.764228 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:37:04.766007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:37:04.773707 systemd[1]: Reloading requested from client PID 1244 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:37:04.773724 systemd[1]: Reloading...
Feb 13 15:37:04.782696 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:37:04.782959 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:37:04.783599 systemd-tmpfiles[1245]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:37:04.783816 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 15:37:04.783861 systemd-tmpfiles[1245]: ACLs are not supported, ignoring.
Feb 13 15:37:04.786346 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:37:04.786452 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 15:37:04.793584 systemd-tmpfiles[1245]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:37:04.793679 systemd-tmpfiles[1245]: Skipping /boot
Feb 13 15:37:04.820097 zram_generator::config[1272]: No configuration found.
Feb 13 15:37:04.897202 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:37:04.931925 systemd[1]: Reloading finished in 157 ms.
Feb 13 15:37:04.949119 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:37:04.960436 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:37:04.966659 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:37:04.968852 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:37:04.970834 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:37:04.973300 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:37:04.977847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:37:04.980401 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:37:04.984237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:04.986838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:04.990114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:04.992533 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:04.993546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:04.994821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:04.994968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:05.004035 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:05.012312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:05.013236 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:05.022306 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:37:05.023868 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:37:05.025283 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:37:05.027213 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Feb 13 15:37:05.028045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:05.029192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:05.030430 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:05.030540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:05.031855 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:05.031968 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:05.039210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:37:05.046598 augenrules[1346]: No rules
Feb 13 15:37:05.049321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:37:05.051774 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:37:05.056281 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:37:05.059663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:37:05.060554 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:37:05.061781 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:37:05.064254 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:37:05.065570 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:37:05.066885 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:37:05.067066 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:37:05.068263 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:37:05.069537 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:37:05.069652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:37:05.070880 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:37:05.070988 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:37:05.072201 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:37:05.072306 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:37:05.073793 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:37:05.075107 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:37:05.083162 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:37:05.088735 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:37:05.106129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1385)
Feb 13 15:37:05.110245 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:37:05.111951 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:37:05.112020 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:37:05.113950 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 15:37:05.115773 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:37:05.118326 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Feb 13 15:37:05.144819 systemd-resolved[1311]: Positive Trust Anchors:
Feb 13 15:37:05.145256 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:37:05.146445 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:37:05.153122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 15:37:05.156704 systemd-resolved[1311]: Defaulting to hostname 'linux'.
Feb 13 15:37:05.163387 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:37:05.164445 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:37:05.165436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:37:05.171951 systemd-networkd[1388]: lo: Link UP
Feb 13 15:37:05.171959 systemd-networkd[1388]: lo: Gained carrier
Feb 13 15:37:05.174406 systemd-networkd[1388]: Enumeration completed
Feb 13 15:37:05.174507 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:37:05.175257 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:05.175266 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:37:05.175486 systemd[1]: Reached target network.target - Network.
Feb 13 15:37:05.175872 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:05.175899 systemd-networkd[1388]: eth0: Link UP
Feb 13 15:37:05.175902 systemd-networkd[1388]: eth0: Gained carrier
Feb 13 15:37:05.175910 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:37:05.188221 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:37:05.189325 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:37:05.191130 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.136/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 15:37:05.200175 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 15:37:05.201196 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:37:05.202672 systemd-timesyncd[1389]: Network configuration changed, trying to establish connection.
Feb 13 15:37:05.203896 systemd-timesyncd[1389]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 15:37:05.204014 systemd-timesyncd[1389]: Initial clock synchronization to Thu 2025-02-13 15:37:04.993536 UTC.
Feb 13 15:37:05.222386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:37:05.241636 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:37:05.257769 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:37:05.268469 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:37:05.281963 lvm[1408]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:37:05.316646 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:37:05.317841 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:37:05.318750 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:37:05.319781 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:37:05.320715 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:37:05.321839 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:37:05.322783 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:37:05.323719 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:37:05.324628 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:37:05.324659 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:37:05.325375 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:37:05.326709 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:37:05.329061 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:37:05.341267 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:37:05.343675 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:37:05.345132 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:37:05.346003 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:37:05.346799 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:37:05.347544 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:37:05.347572 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:37:05.348491 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:37:05.350204 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:37:05.351196 lvm[1415]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:37:05.353366 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:37:05.356915 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:37:05.362149 jq[1418]: false
Feb 13 15:37:05.360343 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:37:05.361564 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:37:05.363539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:37:05.366519 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:37:05.371251 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:37:05.372857 extend-filesystems[1419]: Found loop3
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found loop4
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found loop5
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda1
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda2
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda3
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found usr
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda4
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda6
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda7
Feb 13 15:37:05.378927 extend-filesystems[1419]: Found vda9
Feb 13 15:37:05.378927 extend-filesystems[1419]: Checking size of /dev/vda9
Feb 13 15:37:05.373035 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:37:05.404995 extend-filesystems[1419]: Resized partition /dev/vda9
Feb 13 15:37:05.373490 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:37:05.378832 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:37:05.383695 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:37:05.406020 jq[1431]: true
Feb 13 15:37:05.387815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:37:05.406525 dbus-daemon[1417]: [system] SELinux support is enabled
Feb 13 15:37:05.412937 extend-filesystems[1441]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:37:05.391898 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:37:05.392061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:37:05.392347 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:37:05.392478 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:37:05.393603 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:37:05.393749 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:37:05.409318 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:37:05.418129 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 15:37:05.419837 jq[1440]: true
Feb 13 15:37:05.426723 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:37:05.426772 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:37:05.427846 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:37:05.427861 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:37:05.429125 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1368)
Feb 13 15:37:05.435842 update_engine[1425]: I20250213 15:37:05.435676  1425 main.cc:92] Flatcar Update Engine starting
Feb 13 15:37:05.437821 (ntainerd)[1448]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:37:05.438717 update_engine[1425]: I20250213 15:37:05.438674  1425 update_check_scheduler.cc:74] Next update check in 5m23s
Feb 13 15:37:05.439054 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:37:05.441096 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 15:37:05.442846 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:37:05.459143 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 15:37:05.460440 systemd-logind[1424]: New seat seat0.
Feb 13 15:37:05.463096 extend-filesystems[1441]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 15:37:05.463096 extend-filesystems[1441]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:37:05.463096 extend-filesystems[1441]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 15:37:05.464214 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:37:05.467879 extend-filesystems[1419]: Resized filesystem in /dev/vda9
Feb 13 15:37:05.465301 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:37:05.467125 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:37:05.496826 bash[1468]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:37:05.498327 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:37:05.499896 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 15:37:05.515505 locksmithd[1457]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:37:05.632670 containerd[1448]: time="2025-02-13T15:37:05.632591600Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:37:05.664191 containerd[1448]: time="2025-02-13T15:37:05.663885720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.665314 containerd[1448]: time="2025-02-13T15:37:05.665277160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:05.665446 containerd[1448]: time="2025-02-13T15:37:05.665428560Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:37:05.665505 containerd[1448]: time="2025-02-13T15:37:05.665492280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:37:05.665701 containerd[1448]: time="2025-02-13T15:37:05.665680840Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:37:05.665785 containerd[1448]: time="2025-02-13T15:37:05.665768640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.665897 containerd[1448]: time="2025-02-13T15:37:05.665876840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.665945440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666150240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666166440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666180280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666189480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666259840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666449520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666551360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666564200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666633240Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:37:05.666860 containerd[1448]: time="2025-02-13T15:37:05.666675920Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670172000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670233080Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670248520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670263520Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670277920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:37:05.670579 containerd[1448]: time="2025-02-13T15:37:05.670403520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:37:05.670718 containerd[1448]: time="2025-02-13T15:37:05.670655360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:37:05.670824 containerd[1448]: time="2025-02-13T15:37:05.670801360Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:37:05.670848 containerd[1448]: time="2025-02-13T15:37:05.670824360Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:37:05.670848 containerd[1448]: time="2025-02-13T15:37:05.670839440Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:37:05.670886 containerd[1448]: time="2025-02-13T15:37:05.670853200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670886 containerd[1448]: time="2025-02-13T15:37:05.670865760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670886 containerd[1448]: time="2025-02-13T15:37:05.670878200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670933 containerd[1448]: time="2025-02-13T15:37:05.670891040Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670933 containerd[1448]: time="2025-02-13T15:37:05.670905880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670933 containerd[1448]: time="2025-02-13T15:37:05.670918680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670933 containerd[1448]: time="2025-02-13T15:37:05.670930440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670998 containerd[1448]: time="2025-02-13T15:37:05.670941680Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:37:05.670998 containerd[1448]: time="2025-02-13T15:37:05.670961600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.670998 containerd[1448]: time="2025-02-13T15:37:05.670974520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.670998 containerd[1448]: time="2025-02-13T15:37:05.670985920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.670998840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.671010000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.671026480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.671038080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.671050320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671067 containerd[1448]: time="2025-02-13T15:37:05.671062760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671100200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671112840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671123720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671136280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671150120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:37:05.671177 containerd[1448]: time="2025-02-13T15:37:05.671169160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671279 containerd[1448]: time="2025-02-13T15:37:05.671182080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671279 containerd[1448]: time="2025-02-13T15:37:05.671193520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:37:05.671371 containerd[1448]: time="2025-02-13T15:37:05.671358160Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:37:05.671391 containerd[1448]: time="2025-02-13T15:37:05.671378320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:37:05.671414 containerd[1448]: time="2025-02-13T15:37:05.671388360Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:37:05.671414 containerd[1448]: time="2025-02-13T15:37:05.671400480Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:37:05.671414 containerd[1448]: time="2025-02-13T15:37:05.671409720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671466 containerd[1448]: time="2025-02-13T15:37:05.671421240Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:37:05.671466 containerd[1448]: time="2025-02-13T15:37:05.671430680Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:37:05.671466 containerd[1448]: time="2025-02-13T15:37:05.671441280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:37:05.671811 containerd[1448]: time="2025-02-13T15:37:05.671768000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:37:05.671911 containerd[1448]: time="2025-02-13T15:37:05.671819720Z" level=info msg="Connect containerd service"
Feb 13 15:37:05.671911 containerd[1448]: time="2025-02-13T15:37:05.671855280Z" level=info msg="using legacy CRI server"
Feb 13 15:37:05.671911 containerd[1448]: time="2025-02-13T15:37:05.671862640Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:37:05.672114 containerd[1448]: time="2025-02-13T15:37:05.672098760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:37:05.672710 containerd[1448]: time="2025-02-13T15:37:05.672672120Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:37:05.673252 containerd[1448]: time="2025-02-13T15:37:05.673224520Z" level=info msg="Start subscribing containerd event"
Feb 13 15:37:05.673289 containerd[1448]: time="2025-02-13T15:37:05.673270520Z" level=info msg="Start recovering state"
Feb 13 15:37:05.673375 containerd[1448]: time="2025-02-13T15:37:05.673328960Z" level=info msg="Start event monitor"
Feb 13 15:37:05.673375 containerd[1448]: time="2025-02-13T15:37:05.673361080Z" level=info msg="Start snapshots syncer"
Feb 13 15:37:05.673375 containerd[1448]: time="2025-02-13T15:37:05.673374160Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:37:05.673440 containerd[1448]: time="2025-02-13T15:37:05.673381280Z" level=info msg="Start streaming server"
Feb 13 15:37:05.673741 containerd[1448]: time="2025-02-13T15:37:05.673717320Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:37:05.673782 containerd[1448]: time="2025-02-13T15:37:05.673772800Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:37:05.673955 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:37:05.676624 containerd[1448]: time="2025-02-13T15:37:05.675047880Z" level=info msg="containerd successfully booted in 0.043815s"
Feb 13 15:37:06.252204 sshd_keygen[1437]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:37:06.271221 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:37:06.281398 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:37:06.286438 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:37:06.286618 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:37:06.288765 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:37:06.300687 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:37:06.302883 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:37:06.304599 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Feb 13 15:37:06.305566 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:37:06.903258 systemd-networkd[1388]: eth0: Gained IPv6LL
Feb 13 15:37:06.906378 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:37:06.907970 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:37:06.923300 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:37:06.925434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:06.927182 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:37:06.941112 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:37:06.942140 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:37:06.943504 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:37:06.943944 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:37:07.403227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:07.404481 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:37:07.407168 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:37:07.410511 systemd[1]: Startup finished in 538ms (kernel) + 4.072s (initrd) + 3.671s (userspace) = 8.282s.
Feb 13 15:37:07.875065 kubelet[1523]: E0213 15:37:07.874964    1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:37:07.877230 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:37:07.877372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:37:12.140856 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:37:12.142041 systemd[1]: Started sshd@0-10.0.0.136:22-10.0.0.1:37586.service - OpenSSH per-connection server daemon (10.0.0.1:37586).
Feb 13 15:37:12.202989 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 37586 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.205136 sshd-session[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.213956 systemd-logind[1424]: New session 1 of user core.
Feb 13 15:37:12.214945 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:37:12.237968 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:37:12.253112 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:37:12.265501 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:37:12.267951 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:37:12.342781 systemd[1540]: Queued start job for default target default.target.
Feb 13 15:37:12.354238 systemd[1540]: Created slice app.slice - User Application Slice.
Feb 13 15:37:12.354284 systemd[1540]: Reached target paths.target - Paths.
Feb 13 15:37:12.354296 systemd[1540]: Reached target timers.target - Timers.
Feb 13 15:37:12.355469 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:37:12.364568 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:37:12.364635 systemd[1540]: Reached target sockets.target - Sockets.
Feb 13 15:37:12.364646 systemd[1540]: Reached target basic.target - Basic System.
Feb 13 15:37:12.364680 systemd[1540]: Reached target default.target - Main User Target.
Feb 13 15:37:12.364705 systemd[1540]: Startup finished in 89ms.
Feb 13 15:37:12.365022 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:37:12.366420 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:37:12.435032 systemd[1]: Started sshd@1-10.0.0.136:22-10.0.0.1:37588.service - OpenSSH per-connection server daemon (10.0.0.1:37588).
Feb 13 15:37:12.475888 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.477274 sshd-session[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.481309 systemd-logind[1424]: New session 2 of user core.
Feb 13 15:37:12.492261 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:37:12.544973 sshd[1553]: Connection closed by 10.0.0.1 port 37588
Feb 13 15:37:12.545607 sshd-session[1551]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.555535 systemd[1]: sshd@1-10.0.0.136:22-10.0.0.1:37588.service: Deactivated successfully.
Feb 13 15:37:12.556963 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:37:12.559716 systemd-logind[1424]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:37:12.565423 systemd[1]: Started sshd@2-10.0.0.136:22-10.0.0.1:56568.service - OpenSSH per-connection server daemon (10.0.0.1:56568).
Feb 13 15:37:12.567384 systemd-logind[1424]: Removed session 2.
Feb 13 15:37:12.605966 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 56568 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.607295 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.611268 systemd-logind[1424]: New session 3 of user core.
Feb 13 15:37:12.628247 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:37:12.678031 sshd[1560]: Connection closed by 10.0.0.1 port 56568
Feb 13 15:37:12.678484 sshd-session[1558]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.687186 systemd[1]: sshd@2-10.0.0.136:22-10.0.0.1:56568.service: Deactivated successfully.
Feb 13 15:37:12.689508 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:37:12.690661 systemd-logind[1424]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:37:12.698397 systemd[1]: Started sshd@3-10.0.0.136:22-10.0.0.1:56574.service - OpenSSH per-connection server daemon (10.0.0.1:56574).
Feb 13 15:37:12.699488 systemd-logind[1424]: Removed session 3.
Feb 13 15:37:12.735203 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 56574 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.736491 sshd-session[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.740825 systemd-logind[1424]: New session 4 of user core.
Feb 13 15:37:12.749220 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:37:12.802908 sshd[1567]: Connection closed by 10.0.0.1 port 56574
Feb 13 15:37:12.802776 sshd-session[1565]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.814463 systemd[1]: sshd@3-10.0.0.136:22-10.0.0.1:56574.service: Deactivated successfully.
Feb 13 15:37:12.815810 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:37:12.816903 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:37:12.818022 systemd[1]: Started sshd@4-10.0.0.136:22-10.0.0.1:56586.service - OpenSSH per-connection server daemon (10.0.0.1:56586).
Feb 13 15:37:12.818775 systemd-logind[1424]: Removed session 4.
Feb 13 15:37:12.858637 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 56586 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:12.859789 sshd-session[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:12.863900 systemd-logind[1424]: New session 5 of user core.
Feb 13 15:37:12.877271 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:37:12.937592 sudo[1575]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:37:12.937867 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:12.949814 sudo[1575]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:12.951246 sshd[1574]: Connection closed by 10.0.0.1 port 56586
Feb 13 15:37:12.951715 sshd-session[1572]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:12.960298 systemd[1]: sshd@4-10.0.0.136:22-10.0.0.1:56586.service: Deactivated successfully.
Feb 13 15:37:12.961646 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:37:12.964272 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:37:12.972567 systemd[1]: Started sshd@5-10.0.0.136:22-10.0.0.1:56598.service - OpenSSH per-connection server daemon (10.0.0.1:56598).
Feb 13 15:37:12.973658 systemd-logind[1424]: Removed session 5.
Feb 13 15:37:13.008570 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 56598 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:13.009919 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:13.013907 systemd-logind[1424]: New session 6 of user core.
Feb 13 15:37:13.029285 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:37:13.080237 sudo[1584]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:37:13.080500 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:13.083839 sudo[1584]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:13.088350 sudo[1583]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:37:13.088616 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:13.101681 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:37:13.124146 augenrules[1606]: No rules
Feb 13 15:37:13.124799 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:37:13.124960 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:37:13.127865 sudo[1583]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:13.129135 sshd[1582]: Connection closed by 10.0.0.1 port 56598
Feb 13 15:37:13.129463 sshd-session[1580]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:13.138384 systemd[1]: sshd@5-10.0.0.136:22-10.0.0.1:56598.service: Deactivated successfully.
Feb 13 15:37:13.139713 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:37:13.140931 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:37:13.142039 systemd[1]: Started sshd@6-10.0.0.136:22-10.0.0.1:56612.service - OpenSSH per-connection server daemon (10.0.0.1:56612).
Feb 13 15:37:13.142983 systemd-logind[1424]: Removed session 6.
Feb 13 15:37:13.180559 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 56612 ssh2: RSA SHA256:Nj7eQKGyA0WrBR5yxLJj7i9YHGzEkxm/3KYsW1FrsQ8
Feb 13 15:37:13.181647 sshd-session[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:37:13.186240 systemd-logind[1424]: New session 7 of user core.
Feb 13 15:37:13.195255 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:37:13.244419 sudo[1617]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:37:13.244690 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:37:13.262477 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Feb 13 15:37:13.276001 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 15:37:13.276201 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Feb 13 15:37:13.670681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:13.681297 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:13.700251 systemd[1]: Reloading requested from client PID 1658 ('systemctl') (unit session-7.scope)...
Feb 13 15:37:13.700264 systemd[1]: Reloading...
Feb 13 15:37:13.767207 zram_generator::config[1699]: No configuration found.
Feb 13 15:37:13.934180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:37:13.984995 systemd[1]: Reloading finished in 284 ms.
Feb 13 15:37:14.026905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:14.029494 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:37:14.030255 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:14.031896 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:37:14.124952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:37:14.129851 (kubelet)[1743]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:37:14.163820 kubelet[1743]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:37:14.163820 kubelet[1743]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:37:14.163820 kubelet[1743]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:37:14.164162 kubelet[1743]: I0213 15:37:14.163940    1743 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:37:14.707374 kubelet[1743]: I0213 15:37:14.707329    1743 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:37:14.707374 kubelet[1743]: I0213 15:37:14.707363    1743 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:37:14.707883 kubelet[1743]: I0213 15:37:14.707861    1743 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:37:14.752360 kubelet[1743]: I0213 15:37:14.752302    1743 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:37:14.760120 kubelet[1743]: E0213 15:37:14.760067    1743 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:37:14.760120 kubelet[1743]: I0213 15:37:14.760115    1743 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:37:14.763428 kubelet[1743]: I0213 15:37:14.763386    1743 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:37:14.764479 kubelet[1743]: I0213 15:37:14.764455    1743 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:37:14.764643 kubelet[1743]: I0213 15:37:14.764611    1743 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:37:14.764809 kubelet[1743]: I0213 15:37:14.764638    1743 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.136","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:37:14.764994 kubelet[1743]: I0213 15:37:14.764981    1743 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:37:14.764994 kubelet[1743]: I0213 15:37:14.764995    1743 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:37:14.765221 kubelet[1743]: I0213 15:37:14.765196    1743 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:37:14.768362 kubelet[1743]: I0213 15:37:14.768338    1743 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:37:14.768530 kubelet[1743]: I0213 15:37:14.768438    1743 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:37:14.768530 kubelet[1743]: I0213 15:37:14.768475    1743 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:37:14.768530 kubelet[1743]: I0213 15:37:14.768486    1743 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:37:14.768706 kubelet[1743]: E0213 15:37:14.768683    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:14.768880 kubelet[1743]: E0213 15:37:14.768794    1743 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:14.774762 kubelet[1743]: I0213 15:37:14.774737    1743 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:37:14.776425 kubelet[1743]: I0213 15:37:14.776397    1743 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:37:14.776721 kubelet[1743]: W0213 15:37:14.776699    1743 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:37:14.777963 kubelet[1743]: I0213 15:37:14.777938    1743 server.go:1269] "Started kubelet"
Feb 13 15:37:14.778574 kubelet[1743]: I0213 15:37:14.778285    1743 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:37:14.779529 kubelet[1743]: I0213 15:37:14.778903    1743 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:37:14.779529 kubelet[1743]: I0213 15:37:14.779290    1743 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:37:14.780195 kubelet[1743]: I0213 15:37:14.780113    1743 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:37:14.781771 kubelet[1743]: W0213 15:37:14.781744    1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.136" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:37:14.781832 kubelet[1743]: E0213 15:37:14.781789    1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.136\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:37:14.781884 kubelet[1743]: W0213 15:37:14.781868    1743 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:37:14.781909 kubelet[1743]: E0213 15:37:14.781885    1743 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:37:14.783023 kubelet[1743]: I0213 15:37:14.782983    1743 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:37:14.786230 kubelet[1743]: I0213 15:37:14.786153    1743 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:37:14.794385 kubelet[1743]: I0213 15:37:14.790314    1743 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:37:14.794385 kubelet[1743]: I0213 15:37:14.790443    1743 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:37:14.794385 kubelet[1743]: I0213 15:37:14.790509    1743 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:37:14.794385 kubelet[1743]: E0213 15:37:14.791413    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:14.794533 kubelet[1743]: I0213 15:37:14.794515    1743 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:37:14.795032 kubelet[1743]: I0213 15:37:14.795000    1743 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:37:14.796708 kubelet[1743]: E0213 15:37:14.796681    1743 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:37:14.797515 kubelet[1743]: I0213 15:37:14.797497    1743 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:37:14.813129 kubelet[1743]: E0213 15:37:14.813063    1743 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.136\" not found" node="10.0.0.136"
Feb 13 15:37:14.813484 kubelet[1743]: I0213 15:37:14.813455    1743 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:37:14.813484 kubelet[1743]: I0213 15:37:14.813469    1743 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:37:14.813484 kubelet[1743]: I0213 15:37:14.813485    1743 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:37:14.874046 kubelet[1743]: I0213 15:37:14.873882    1743 policy_none.go:49] "None policy: Start"
Feb 13 15:37:14.875329 kubelet[1743]: I0213 15:37:14.875220    1743 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:37:14.875329 kubelet[1743]: I0213 15:37:14.875334    1743 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:37:14.881170 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:37:14.892521 kubelet[1743]: E0213 15:37:14.891630    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:14.892220 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:37:14.895425 kubelet[1743]: I0213 15:37:14.895375    1743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:37:14.895755 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:37:14.896541 kubelet[1743]: I0213 15:37:14.896514    1743 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:37:14.896541 kubelet[1743]: I0213 15:37:14.896538    1743 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:37:14.896609 kubelet[1743]: I0213 15:37:14.896562    1743 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:37:14.897010 kubelet[1743]: E0213 15:37:14.896663    1743 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 15:37:14.906093 kubelet[1743]: I0213 15:37:14.906054    1743 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:37:14.906093 kubelet[1743]: I0213 15:37:14.906309    1743 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:37:14.906093 kubelet[1743]: I0213 15:37:14.906323    1743 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:37:14.906093 kubelet[1743]: I0213 15:37:14.906519    1743 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:37:14.907618 kubelet[1743]: E0213 15:37:14.907584    1743 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.136\" not found"
Feb 13 15:37:15.008296 kubelet[1743]: I0213 15:37:15.008170    1743 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.136"
Feb 13 15:37:15.014314 kubelet[1743]: I0213 15:37:15.014215    1743 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.136"
Feb 13 15:37:15.014314 kubelet[1743]: E0213 15:37:15.014249    1743 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.136\": node \"10.0.0.136\" not found"
Feb 13 15:37:15.026903 kubelet[1743]: E0213 15:37:15.026854    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.127628 kubelet[1743]: E0213 15:37:15.127570    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.153632 sudo[1617]: pam_unix(sudo:session): session closed for user root
Feb 13 15:37:15.154791 sshd[1616]: Connection closed by 10.0.0.1 port 56612
Feb 13 15:37:15.155213 sshd-session[1614]: pam_unix(sshd:session): session closed for user core
Feb 13 15:37:15.158568 systemd[1]: sshd@6-10.0.0.136:22-10.0.0.1:56612.service: Deactivated successfully.
Feb 13 15:37:15.160736 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:37:15.161420 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:37:15.162238 systemd-logind[1424]: Removed session 7.
Feb 13 15:37:15.228764 kubelet[1743]: E0213 15:37:15.228724    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.329772 kubelet[1743]: E0213 15:37:15.329656    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.430208 kubelet[1743]: E0213 15:37:15.430161    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.530719 kubelet[1743]: E0213 15:37:15.530665    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.631436 kubelet[1743]: E0213 15:37:15.631307    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.709950 kubelet[1743]: I0213 15:37:15.709867    1743 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:37:15.710133 kubelet[1743]: W0213 15:37:15.710050    1743 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:37:15.710133 kubelet[1743]: W0213 15:37:15.710108    1743 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:37:15.732119 kubelet[1743]: E0213 15:37:15.732080    1743 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.136\" not found"
Feb 13 15:37:15.769255 kubelet[1743]: E0213 15:37:15.769216    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:15.833491 kubelet[1743]: I0213 15:37:15.833457    1743 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:37:15.833818 containerd[1448]: time="2025-02-13T15:37:15.833757673Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:37:15.834116 kubelet[1743]: I0213 15:37:15.833935    1743 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:37:16.769583 kubelet[1743]: E0213 15:37:16.769534    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:16.769583 kubelet[1743]: I0213 15:37:16.769555    1743 apiserver.go:52] "Watching apiserver"
Feb 13 15:37:16.777945 systemd[1]: Created slice kubepods-besteffort-pod43c73dac_67d4_48a0_be94_f35e8a4f8123.slice - libcontainer container kubepods-besteffort-pod43c73dac_67d4_48a0_be94_f35e8a4f8123.slice.
Feb 13 15:37:16.791417 systemd[1]: Created slice kubepods-burstable-podcb418317_5c11_4d21_8133_fe46de3492b6.slice - libcontainer container kubepods-burstable-podcb418317_5c11_4d21_8133_fe46de3492b6.slice.
Feb 13 15:37:16.791580 kubelet[1743]: I0213 15:37:16.791414    1743 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:37:16.803473 kubelet[1743]: I0213 15:37:16.803431    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-run\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803473 kubelet[1743]: I0213 15:37:16.803475    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cni-path\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803625 kubelet[1743]: I0213 15:37:16.803497    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb418317-5c11-4d21-8133-fe46de3492b6-clustermesh-secrets\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803625 kubelet[1743]: I0213 15:37:16.803514    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-hostproc\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803625 kubelet[1743]: I0213 15:37:16.803531    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-lib-modules\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803625 kubelet[1743]: I0213 15:37:16.803545    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43c73dac-67d4-48a0-be94-f35e8a4f8123-xtables-lock\") pod \"kube-proxy-9mhq4\" (UID: \"43c73dac-67d4-48a0-be94-f35e8a4f8123\") " pod="kube-system/kube-proxy-9mhq4"
Feb 13 15:37:16.803625 kubelet[1743]: I0213 15:37:16.803560    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9rw\" (UniqueName: \"kubernetes.io/projected/43c73dac-67d4-48a0-be94-f35e8a4f8123-kube-api-access-wb9rw\") pod \"kube-proxy-9mhq4\" (UID: \"43c73dac-67d4-48a0-be94-f35e8a4f8123\") " pod="kube-system/kube-proxy-9mhq4"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803575    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8zz\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-kube-api-access-qr8zz\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803590    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43c73dac-67d4-48a0-be94-f35e8a4f8123-lib-modules\") pod \"kube-proxy-9mhq4\" (UID: \"43c73dac-67d4-48a0-be94-f35e8a4f8123\") " pod="kube-system/kube-proxy-9mhq4"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803606    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-bpf-maps\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803627    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-xtables-lock\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803643    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-kernel\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803729 kubelet[1743]: I0213 15:37:16.803658    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-hubble-tls\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803866 kubelet[1743]: I0213 15:37:16.803671    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43c73dac-67d4-48a0-be94-f35e8a4f8123-kube-proxy\") pod \"kube-proxy-9mhq4\" (UID: \"43c73dac-67d4-48a0-be94-f35e8a4f8123\") " pod="kube-system/kube-proxy-9mhq4"
Feb 13 15:37:16.803866 kubelet[1743]: I0213 15:37:16.803684    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-cgroup\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803866 kubelet[1743]: I0213 15:37:16.803699    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-etc-cni-netd\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803866 kubelet[1743]: I0213 15:37:16.803720    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-config-path\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:16.803866 kubelet[1743]: I0213 15:37:16.803736    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-net\") pod \"cilium-jbhxx\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") " pod="kube-system/cilium-jbhxx"
Feb 13 15:37:17.091648 kubelet[1743]: E0213 15:37:17.091324    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.092534 containerd[1448]: time="2025-02-13T15:37:17.092135100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mhq4,Uid:43c73dac-67d4-48a0-be94-f35e8a4f8123,Namespace:kube-system,Attempt:0,}"
Feb 13 15:37:17.102002 kubelet[1743]: E0213 15:37:17.101965    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.102569 containerd[1448]: time="2025-02-13T15:37:17.102532211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbhxx,Uid:cb418317-5c11-4d21-8133-fe46de3492b6,Namespace:kube-system,Attempt:0,}"
Feb 13 15:37:17.589108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049681838.mount: Deactivated successfully.
Feb 13 15:37:17.599673 containerd[1448]: time="2025-02-13T15:37:17.599621866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:37:17.600753 containerd[1448]: time="2025-02-13T15:37:17.600656727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Feb 13 15:37:17.602383 containerd[1448]: time="2025-02-13T15:37:17.602335826Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:37:17.603820 containerd[1448]: time="2025-02-13T15:37:17.603777132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:37:17.604000 containerd[1448]: time="2025-02-13T15:37:17.603959940Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:37:17.605484 containerd[1448]: time="2025-02-13T15:37:17.605448558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Feb 13 15:37:17.607616 containerd[1448]: time="2025-02-13T15:37:17.607590361Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 515.376977ms"
Feb 13 15:37:17.608309 containerd[1448]: time="2025-02-13T15:37:17.608242431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 505.599335ms"
Feb 13 15:37:17.713954 containerd[1448]: time="2025-02-13T15:37:17.713827399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:17.713954 containerd[1448]: time="2025-02-13T15:37:17.713891011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:17.713954 containerd[1448]: time="2025-02-13T15:37:17.713902780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:17.714925 containerd[1448]: time="2025-02-13T15:37:17.713998915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:17.714925 containerd[1448]: time="2025-02-13T15:37:17.713785931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:17.714925 containerd[1448]: time="2025-02-13T15:37:17.714142719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:17.714925 containerd[1448]: time="2025-02-13T15:37:17.714155402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:17.714925 containerd[1448]: time="2025-02-13T15:37:17.714227364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:17.769867 kubelet[1743]: E0213 15:37:17.769832    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:17.796277 systemd[1]: Started cri-containerd-6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9.scope - libcontainer container 6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9.
Feb 13 15:37:17.799582 systemd[1]: Started cri-containerd-2aa08670bd1dbf6f1ebca2c06aebd6031c3ff8befb48caac681bc9f32e47cc84.scope - libcontainer container 2aa08670bd1dbf6f1ebca2c06aebd6031c3ff8befb48caac681bc9f32e47cc84.
Feb 13 15:37:17.819719 containerd[1448]: time="2025-02-13T15:37:17.819490769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jbhxx,Uid:cb418317-5c11-4d21-8133-fe46de3492b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\""
Feb 13 15:37:17.820053 containerd[1448]: time="2025-02-13T15:37:17.820001102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mhq4,Uid:43c73dac-67d4-48a0-be94-f35e8a4f8123,Namespace:kube-system,Attempt:0,} returns sandbox id \"2aa08670bd1dbf6f1ebca2c06aebd6031c3ff8befb48caac681bc9f32e47cc84\""
Feb 13 15:37:17.820987 kubelet[1743]: E0213 15:37:17.820722    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.821382 kubelet[1743]: E0213 15:37:17.821359    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:17.822614 containerd[1448]: time="2025-02-13T15:37:17.822566367Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 13 15:37:18.770818 kubelet[1743]: E0213 15:37:18.770767    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:19.771866 kubelet[1743]: E0213 15:37:19.771830    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:20.772905 kubelet[1743]: E0213 15:37:20.772839    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:21.773839 kubelet[1743]: E0213 15:37:21.773787    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:22.149772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount584885631.mount: Deactivated successfully.
Feb 13 15:37:22.774591 kubelet[1743]: E0213 15:37:22.774556    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:23.389608 containerd[1448]: time="2025-02-13T15:37:23.389554158Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:23.390058 containerd[1448]: time="2025-02-13T15:37:23.390008800Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Feb 13 15:37:23.390953 containerd[1448]: time="2025-02-13T15:37:23.390905917Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:23.392539 containerd[1448]: time="2025-02-13T15:37:23.392507517Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.569881762s"
Feb 13 15:37:23.392583 containerd[1448]: time="2025-02-13T15:37:23.392541345Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 13 15:37:23.393946 containerd[1448]: time="2025-02-13T15:37:23.393906388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 15:37:23.394975 containerd[1448]: time="2025-02-13T15:37:23.394931956Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:37:23.407154 containerd[1448]: time="2025-02-13T15:37:23.407105450Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\""
Feb 13 15:37:23.407794 containerd[1448]: time="2025-02-13T15:37:23.407734458Z" level=info msg="StartContainer for \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\""
Feb 13 15:37:23.431226 systemd[1]: Started cri-containerd-4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac.scope - libcontainer container 4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac.
Feb 13 15:37:23.457108 containerd[1448]: time="2025-02-13T15:37:23.457024613Z" level=info msg="StartContainer for \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\" returns successfully"
Feb 13 15:37:23.491259 systemd[1]: cri-containerd-4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac.scope: Deactivated successfully.
Feb 13 15:37:23.634001 containerd[1448]: time="2025-02-13T15:37:23.633931379Z" level=info msg="shim disconnected" id=4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac namespace=k8s.io
Feb 13 15:37:23.634001 containerd[1448]: time="2025-02-13T15:37:23.633993610Z" level=warning msg="cleaning up after shim disconnected" id=4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac namespace=k8s.io
Feb 13 15:37:23.634001 containerd[1448]: time="2025-02-13T15:37:23.634004659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:23.775322 kubelet[1743]: E0213 15:37:23.775215    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:23.911576 kubelet[1743]: E0213 15:37:23.911545    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:23.913214 containerd[1448]: time="2025-02-13T15:37:23.913180413Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:37:23.927307 containerd[1448]: time="2025-02-13T15:37:23.927264944Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\""
Feb 13 15:37:23.927811 containerd[1448]: time="2025-02-13T15:37:23.927787242Z" level=info msg="StartContainer for \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\""
Feb 13 15:37:23.961243 systemd[1]: Started cri-containerd-35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc.scope - libcontainer container 35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc.
Feb 13 15:37:23.980547 containerd[1448]: time="2025-02-13T15:37:23.980496128Z" level=info msg="StartContainer for \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\" returns successfully"
Feb 13 15:37:23.991684 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:37:23.991882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:23.991953 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:37:23.998345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:37:23.998511 systemd[1]: cri-containerd-35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc.scope: Deactivated successfully.
Feb 13 15:37:24.009104 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:37:24.028718 containerd[1448]: time="2025-02-13T15:37:24.028580512Z" level=info msg="shim disconnected" id=35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc namespace=k8s.io
Feb 13 15:37:24.028718 containerd[1448]: time="2025-02-13T15:37:24.028631750Z" level=warning msg="cleaning up after shim disconnected" id=35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc namespace=k8s.io
Feb 13 15:37:24.028718 containerd[1448]: time="2025-02-13T15:37:24.028639611Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:24.403251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac-rootfs.mount: Deactivated successfully.
Feb 13 15:37:24.610586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2194818746.mount: Deactivated successfully.
Feb 13 15:37:24.775996 kubelet[1743]: E0213 15:37:24.775806    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:24.824829 containerd[1448]: time="2025-02-13T15:37:24.824484388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:24.825265 containerd[1448]: time="2025-02-13T15:37:24.825194497Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=26769258"
Feb 13 15:37:24.825902 containerd[1448]: time="2025-02-13T15:37:24.825840798Z" level=info msg="ImageCreate event name:\"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:24.830743 containerd[1448]: time="2025-02-13T15:37:24.830706689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:24.832107 containerd[1448]: time="2025-02-13T15:37:24.831699165Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"26768275\" in 1.437760104s"
Feb 13 15:37:24.832107 containerd[1448]: time="2025-02-13T15:37:24.831733683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:dc056e81c1f77e8e42df4198221b86ec1562514cb649244b847d9dc91c52b534\""
Feb 13 15:37:24.834317 containerd[1448]: time="2025-02-13T15:37:24.834287401Z" level=info msg="CreateContainer within sandbox \"2aa08670bd1dbf6f1ebca2c06aebd6031c3ff8befb48caac681bc9f32e47cc84\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:37:24.848852 containerd[1448]: time="2025-02-13T15:37:24.848800277Z" level=info msg="CreateContainer within sandbox \"2aa08670bd1dbf6f1ebca2c06aebd6031c3ff8befb48caac681bc9f32e47cc84\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92a3b3b866e3162aa74f7fa8204418f96bbd8d20f659841197eeacda15983f1b\""
Feb 13 15:37:24.849504 containerd[1448]: time="2025-02-13T15:37:24.849278179Z" level=info msg="StartContainer for \"92a3b3b866e3162aa74f7fa8204418f96bbd8d20f659841197eeacda15983f1b\""
Feb 13 15:37:24.876247 systemd[1]: Started cri-containerd-92a3b3b866e3162aa74f7fa8204418f96bbd8d20f659841197eeacda15983f1b.scope - libcontainer container 92a3b3b866e3162aa74f7fa8204418f96bbd8d20f659841197eeacda15983f1b.
Feb 13 15:37:24.904877 containerd[1448]: time="2025-02-13T15:37:24.904764792Z" level=info msg="StartContainer for \"92a3b3b866e3162aa74f7fa8204418f96bbd8d20f659841197eeacda15983f1b\" returns successfully"
Feb 13 15:37:24.913413 kubelet[1743]: E0213 15:37:24.913385    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:24.917204 kubelet[1743]: E0213 15:37:24.916795    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:24.920456 containerd[1448]: time="2025-02-13T15:37:24.920422143Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:37:24.939836 kubelet[1743]: I0213 15:37:24.939780    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9mhq4" podStartSLOduration=3.929341466 podStartE2EDuration="10.939760487s" podCreationTimestamp="2025-02-13 15:37:14 +0000 UTC" firstStartedPulling="2025-02-13 15:37:17.822297881 +0000 UTC m=+3.688046130" lastFinishedPulling="2025-02-13 15:37:24.832716902 +0000 UTC m=+10.698465151" observedRunningTime="2025-02-13 15:37:24.924249787 +0000 UTC m=+10.789998036" watchObservedRunningTime="2025-02-13 15:37:24.939760487 +0000 UTC m=+10.805508736"
Feb 13 15:37:24.941830 containerd[1448]: time="2025-02-13T15:37:24.941783030Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\""
Feb 13 15:37:24.942732 containerd[1448]: time="2025-02-13T15:37:24.942703238Z" level=info msg="StartContainer for \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\""
Feb 13 15:37:24.971268 systemd[1]: Started cri-containerd-bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87.scope - libcontainer container bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87.
Feb 13 15:37:24.994523 containerd[1448]: time="2025-02-13T15:37:24.994483678Z" level=info msg="StartContainer for \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\" returns successfully"
Feb 13 15:37:25.020942 systemd[1]: cri-containerd-bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87.scope: Deactivated successfully.
Feb 13 15:37:25.155704 containerd[1448]: time="2025-02-13T15:37:25.155421432Z" level=info msg="shim disconnected" id=bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87 namespace=k8s.io
Feb 13 15:37:25.155704 containerd[1448]: time="2025-02-13T15:37:25.155477515Z" level=warning msg="cleaning up after shim disconnected" id=bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87 namespace=k8s.io
Feb 13 15:37:25.155704 containerd[1448]: time="2025-02-13T15:37:25.155485618Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:25.776828 kubelet[1743]: E0213 15:37:25.776795    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:25.920274 kubelet[1743]: E0213 15:37:25.920247    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:25.920525 kubelet[1743]: E0213 15:37:25.920337    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:25.922145 containerd[1448]: time="2025-02-13T15:37:25.922106453Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:37:25.935316 containerd[1448]: time="2025-02-13T15:37:25.935274740Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\""
Feb 13 15:37:25.935842 containerd[1448]: time="2025-02-13T15:37:25.935815014Z" level=info msg="StartContainer for \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\""
Feb 13 15:37:25.965227 systemd[1]: Started cri-containerd-cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0.scope - libcontainer container cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0.
Feb 13 15:37:25.983311 systemd[1]: cri-containerd-cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0.scope: Deactivated successfully.
Feb 13 15:37:25.984489 containerd[1448]: time="2025-02-13T15:37:25.984434247Z" level=info msg="StartContainer for \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\" returns successfully"
Feb 13 15:37:26.003798 containerd[1448]: time="2025-02-13T15:37:26.003735452Z" level=info msg="shim disconnected" id=cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0 namespace=k8s.io
Feb 13 15:37:26.003798 containerd[1448]: time="2025-02-13T15:37:26.003787078Z" level=warning msg="cleaning up after shim disconnected" id=cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0 namespace=k8s.io
Feb 13 15:37:26.003798 containerd[1448]: time="2025-02-13T15:37:26.003798457Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:37:26.402318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0-rootfs.mount: Deactivated successfully.
Feb 13 15:37:26.777092 kubelet[1743]: E0213 15:37:26.776934    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:26.926096 kubelet[1743]: E0213 15:37:26.926032    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:26.927700 containerd[1448]: time="2025-02-13T15:37:26.927664648Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:37:26.941366 containerd[1448]: time="2025-02-13T15:37:26.941278718Z" level=info msg="CreateContainer within sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\""
Feb 13 15:37:26.941919 containerd[1448]: time="2025-02-13T15:37:26.941892160Z" level=info msg="StartContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\""
Feb 13 15:37:26.972241 systemd[1]: Started cri-containerd-2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc.scope - libcontainer container 2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc.
Feb 13 15:37:26.993813 containerd[1448]: time="2025-02-13T15:37:26.993773652Z" level=info msg="StartContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" returns successfully"
Feb 13 15:37:27.126471 kubelet[1743]: I0213 15:37:27.126315    1743 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:37:27.470104 kernel: Initializing XFRM netlink socket
Feb 13 15:37:27.777908 kubelet[1743]: E0213 15:37:27.777856    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:27.930552 kubelet[1743]: E0213 15:37:27.930478    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:28.778139 kubelet[1743]: E0213 15:37:28.778094    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:28.932321 kubelet[1743]: E0213 15:37:28.932286    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:29.087319 systemd-networkd[1388]: cilium_host: Link UP
Feb 13 15:37:29.087433 systemd-networkd[1388]: cilium_net: Link UP
Feb 13 15:37:29.087567 systemd-networkd[1388]: cilium_net: Gained carrier
Feb 13 15:37:29.087675 systemd-networkd[1388]: cilium_host: Gained carrier
Feb 13 15:37:29.158086 systemd-networkd[1388]: cilium_vxlan: Link UP
Feb 13 15:37:29.158093 systemd-networkd[1388]: cilium_vxlan: Gained carrier
Feb 13 15:37:29.184193 systemd-networkd[1388]: cilium_host: Gained IPv6LL
Feb 13 15:37:29.448129 kernel: NET: Registered PF_ALG protocol family
Feb 13 15:37:29.751270 systemd-networkd[1388]: cilium_net: Gained IPv6LL
Feb 13 15:37:29.778510 kubelet[1743]: E0213 15:37:29.778468    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:29.933405 kubelet[1743]: E0213 15:37:29.933373    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:29.986178 systemd-networkd[1388]: lxc_health: Link UP
Feb 13 15:37:29.993753 systemd-networkd[1388]: lxc_health: Gained carrier
Feb 13 15:37:30.775207 systemd-networkd[1388]: cilium_vxlan: Gained IPv6LL
Feb 13 15:37:30.778607 kubelet[1743]: E0213 15:37:30.778552    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:31.104654 kubelet[1743]: E0213 15:37:31.104147    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:31.120013 kubelet[1743]: I0213 15:37:31.119919    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jbhxx" podStartSLOduration=11.548231856 podStartE2EDuration="17.119901087s" podCreationTimestamp="2025-02-13 15:37:14 +0000 UTC" firstStartedPulling="2025-02-13 15:37:17.821682428 +0000 UTC m=+3.687430676" lastFinishedPulling="2025-02-13 15:37:23.393351618 +0000 UTC m=+9.259099907" observedRunningTime="2025-02-13 15:37:27.944974152 +0000 UTC m=+13.810722401" watchObservedRunningTime="2025-02-13 15:37:31.119901087 +0000 UTC m=+16.985649336"
Feb 13 15:37:31.177449 systemd[1]: Created slice kubepods-besteffort-pod88c7ad6a_ebed_4f36_b6c0_df8e934c89eb.slice - libcontainer container kubepods-besteffort-pod88c7ad6a_ebed_4f36_b6c0_df8e934c89eb.slice.
Feb 13 15:37:31.186140 kubelet[1743]: I0213 15:37:31.184275    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbzc\" (UniqueName: \"kubernetes.io/projected/88c7ad6a-ebed-4f36-b6c0-df8e934c89eb-kube-api-access-lnbzc\") pod \"nginx-deployment-8587fbcb89-fzbj8\" (UID: \"88c7ad6a-ebed-4f36-b6c0-df8e934c89eb\") " pod="default/nginx-deployment-8587fbcb89-fzbj8"
Feb 13 15:37:31.480755 containerd[1448]: time="2025-02-13T15:37:31.480645869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fzbj8,Uid:88c7ad6a-ebed-4f36-b6c0-df8e934c89eb,Namespace:default,Attempt:0,}"
Feb 13 15:37:31.538469 systemd-networkd[1388]: lxc394b8827d6f0: Link UP
Feb 13 15:37:31.560088 kernel: eth0: renamed from tmpc6ebe
Feb 13 15:37:31.566080 systemd-networkd[1388]: lxc394b8827d6f0: Gained carrier
Feb 13 15:37:31.779554 kubelet[1743]: E0213 15:37:31.779447    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:31.927291 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Feb 13 15:37:31.936824 kubelet[1743]: E0213 15:37:31.936787    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:37:32.780251 kubelet[1743]: E0213 15:37:32.780198    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:33.527239 systemd-networkd[1388]: lxc394b8827d6f0: Gained IPv6LL
Feb 13 15:37:33.784209 kubelet[1743]: E0213 15:37:33.784090    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:34.476690 containerd[1448]: time="2025-02-13T15:37:34.476599753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:34.476690 containerd[1448]: time="2025-02-13T15:37:34.476656437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:34.477146 containerd[1448]: time="2025-02-13T15:37:34.476668750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:34.477146 containerd[1448]: time="2025-02-13T15:37:34.476735108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:34.499218 systemd[1]: Started cri-containerd-c6ebe9ebe1df9a9f70d7d56f48cb0ab3695853dc78c3ad4355ca7652f2b513f9.scope - libcontainer container c6ebe9ebe1df9a9f70d7d56f48cb0ab3695853dc78c3ad4355ca7652f2b513f9.
Feb 13 15:37:34.508214 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:37:34.523123 containerd[1448]: time="2025-02-13T15:37:34.523088357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-fzbj8,Uid:88c7ad6a-ebed-4f36-b6c0-df8e934c89eb,Namespace:default,Attempt:0,} returns sandbox id \"c6ebe9ebe1df9a9f70d7d56f48cb0ab3695853dc78c3ad4355ca7652f2b513f9\""
Feb 13 15:37:34.524835 containerd[1448]: time="2025-02-13T15:37:34.524807322Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:37:34.769018 kubelet[1743]: E0213 15:37:34.768890    1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:34.784472 kubelet[1743]: E0213 15:37:34.784356    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:35.785351 kubelet[1743]: E0213 15:37:35.785310    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:36.580893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559977024.mount: Deactivated successfully.
Feb 13 15:37:36.785634 kubelet[1743]: E0213 15:37:36.785589    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:37.302772 containerd[1448]: time="2025-02-13T15:37:37.302296016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:37.303462 containerd[1448]: time="2025-02-13T15:37:37.303424183Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086"
Feb 13 15:37:37.304356 containerd[1448]: time="2025-02-13T15:37:37.304305094Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:37.307461 containerd[1448]: time="2025-02-13T15:37:37.307416071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:37.310313 containerd[1448]: time="2025-02-13T15:37:37.310269916Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 2.785427174s"
Feb 13 15:37:37.310313 containerd[1448]: time="2025-02-13T15:37:37.310309659Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 15:37:37.312559 containerd[1448]: time="2025-02-13T15:37:37.312529769Z" level=info msg="CreateContainer within sandbox \"c6ebe9ebe1df9a9f70d7d56f48cb0ab3695853dc78c3ad4355ca7652f2b513f9\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:37:37.322294 containerd[1448]: time="2025-02-13T15:37:37.322219790Z" level=info msg="CreateContainer within sandbox \"c6ebe9ebe1df9a9f70d7d56f48cb0ab3695853dc78c3ad4355ca7652f2b513f9\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"540559f6b176349519300b9b95c707e44e93dc970cfb249238e55ab1c4f32f6b\""
Feb 13 15:37:37.322936 containerd[1448]: time="2025-02-13T15:37:37.322650809Z" level=info msg="StartContainer for \"540559f6b176349519300b9b95c707e44e93dc970cfb249238e55ab1c4f32f6b\""
Feb 13 15:37:37.347228 systemd[1]: Started cri-containerd-540559f6b176349519300b9b95c707e44e93dc970cfb249238e55ab1c4f32f6b.scope - libcontainer container 540559f6b176349519300b9b95c707e44e93dc970cfb249238e55ab1c4f32f6b.
Feb 13 15:37:37.368602 containerd[1448]: time="2025-02-13T15:37:37.368483329Z" level=info msg="StartContainer for \"540559f6b176349519300b9b95c707e44e93dc970cfb249238e55ab1c4f32f6b\" returns successfully"
Feb 13 15:37:37.786241 kubelet[1743]: E0213 15:37:37.786125    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:37.960483 kubelet[1743]: I0213 15:37:37.960420    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-fzbj8" podStartSLOduration=4.173467833 podStartE2EDuration="6.960405015s" podCreationTimestamp="2025-02-13 15:37:31 +0000 UTC" firstStartedPulling="2025-02-13 15:37:34.524448866 +0000 UTC m=+20.390197115" lastFinishedPulling="2025-02-13 15:37:37.311386048 +0000 UTC m=+23.177134297" observedRunningTime="2025-02-13 15:37:37.959960959 +0000 UTC m=+23.825709208" watchObservedRunningTime="2025-02-13 15:37:37.960405015 +0000 UTC m=+23.826153224"
Feb 13 15:37:38.787208 kubelet[1743]: E0213 15:37:38.787162    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:39.787334 kubelet[1743]: E0213 15:37:39.787275    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:40.788429 kubelet[1743]: E0213 15:37:40.788384    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:41.788847 kubelet[1743]: E0213 15:37:41.788799    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:42.789982 kubelet[1743]: E0213 15:37:42.789936    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:43.310266 systemd[1]: Created slice kubepods-besteffort-pod1041806c_074f_4144_be44_4323e3e9674d.slice - libcontainer container kubepods-besteffort-pod1041806c_074f_4144_be44_4323e3e9674d.slice.
Feb 13 15:37:43.346477 kubelet[1743]: I0213 15:37:43.346436    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/1041806c-074f-4144-be44-4323e3e9674d-data\") pod \"nfs-server-provisioner-0\" (UID: \"1041806c-074f-4144-be44-4323e3e9674d\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:37:43.346477 kubelet[1743]: I0213 15:37:43.346476    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68n5\" (UniqueName: \"kubernetes.io/projected/1041806c-074f-4144-be44-4323e3e9674d-kube-api-access-k68n5\") pod \"nfs-server-provisioner-0\" (UID: \"1041806c-074f-4144-be44-4323e3e9674d\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:37:43.613419 containerd[1448]: time="2025-02-13T15:37:43.613311423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1041806c-074f-4144-be44-4323e3e9674d,Namespace:default,Attempt:0,}"
Feb 13 15:37:43.635966 systemd-networkd[1388]: lxce44513a27163: Link UP
Feb 13 15:37:43.641476 kernel: eth0: renamed from tmp0af3d
Feb 13 15:37:43.645093 systemd-networkd[1388]: lxce44513a27163: Gained carrier
Feb 13 15:37:43.790404 kubelet[1743]: E0213 15:37:43.790353    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:43.824781 containerd[1448]: time="2025-02-13T15:37:43.824297902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:43.824781 containerd[1448]: time="2025-02-13T15:37:43.824732914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:43.824781 containerd[1448]: time="2025-02-13T15:37:43.824746115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:43.824987 containerd[1448]: time="2025-02-13T15:37:43.824841197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:43.850237 systemd[1]: Started cri-containerd-0af3def7c72039f18a16a06f6a4af38b8cd3628b3c1c0bbc7997acb3e325d7d2.scope - libcontainer container 0af3def7c72039f18a16a06f6a4af38b8cd3628b3c1c0bbc7997acb3e325d7d2.
Feb 13 15:37:43.859458 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:37:43.873836 containerd[1448]: time="2025-02-13T15:37:43.873746969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:1041806c-074f-4144-be44-4323e3e9674d,Namespace:default,Attempt:0,} returns sandbox id \"0af3def7c72039f18a16a06f6a4af38b8cd3628b3c1c0bbc7997acb3e325d7d2\""
Feb 13 15:37:43.875268 containerd[1448]: time="2025-02-13T15:37:43.875242051Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:37:44.790821 kubelet[1743]: E0213 15:37:44.790765    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:45.356735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277596396.mount: Deactivated successfully.
Feb 13 15:37:45.495313 systemd-networkd[1388]: lxce44513a27163: Gained IPv6LL
Feb 13 15:37:45.791899 kubelet[1743]: E0213 15:37:45.791800    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:46.651146 containerd[1448]: time="2025-02-13T15:37:46.651067075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:46.651888 containerd[1448]: time="2025-02-13T15:37:46.651832973Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Feb 13 15:37:46.652489 containerd[1448]: time="2025-02-13T15:37:46.652449068Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:46.655743 containerd[1448]: time="2025-02-13T15:37:46.655703986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:46.657328 containerd[1448]: time="2025-02-13T15:37:46.657293583Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.782020451s"
Feb 13 15:37:46.657374 containerd[1448]: time="2025-02-13T15:37:46.657327424Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 13 15:37:46.659221 containerd[1448]: time="2025-02-13T15:37:46.659181508Z" level=info msg="CreateContainer within sandbox \"0af3def7c72039f18a16a06f6a4af38b8cd3628b3c1c0bbc7997acb3e325d7d2\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:37:46.668993 containerd[1448]: time="2025-02-13T15:37:46.668931181Z" level=info msg="CreateContainer within sandbox \"0af3def7c72039f18a16a06f6a4af38b8cd3628b3c1c0bbc7997acb3e325d7d2\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"20747ff77ffa104e0bf39fc92001f4b72e13c71fd2d4c112f366cdb082bc7b49\""
Feb 13 15:37:46.669533 containerd[1448]: time="2025-02-13T15:37:46.669490354Z" level=info msg="StartContainer for \"20747ff77ffa104e0bf39fc92001f4b72e13c71fd2d4c112f366cdb082bc7b49\""
Feb 13 15:37:46.751239 systemd[1]: Started cri-containerd-20747ff77ffa104e0bf39fc92001f4b72e13c71fd2d4c112f366cdb082bc7b49.scope - libcontainer container 20747ff77ffa104e0bf39fc92001f4b72e13c71fd2d4c112f366cdb082bc7b49.
Feb 13 15:37:46.792813 kubelet[1743]: E0213 15:37:46.792769    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:46.801878 containerd[1448]: time="2025-02-13T15:37:46.801811429Z" level=info msg="StartContainer for \"20747ff77ffa104e0bf39fc92001f4b72e13c71fd2d4c112f366cdb082bc7b49\" returns successfully"
Feb 13 15:37:46.982720 kubelet[1743]: I0213 15:37:46.982542    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.199559223 podStartE2EDuration="3.982527298s" podCreationTimestamp="2025-02-13 15:37:43 +0000 UTC" firstStartedPulling="2025-02-13 15:37:43.874953363 +0000 UTC m=+29.740701572" lastFinishedPulling="2025-02-13 15:37:46.657921438 +0000 UTC m=+32.523669647" observedRunningTime="2025-02-13 15:37:46.982061847 +0000 UTC m=+32.847810136" watchObservedRunningTime="2025-02-13 15:37:46.982527298 +0000 UTC m=+32.848275547"
Feb 13 15:37:47.793495 kubelet[1743]: E0213 15:37:47.793442    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:48.793958 kubelet[1743]: E0213 15:37:48.793901    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:49.794954 kubelet[1743]: E0213 15:37:49.794883    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:50.795956 kubelet[1743]: E0213 15:37:50.795892    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:51.133045 update_engine[1425]: I20250213 15:37:51.132826  1425 update_attempter.cc:509] Updating boot flags...
Feb 13 15:37:51.213144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3137)
Feb 13 15:37:51.239142 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3136)
Feb 13 15:37:51.796243 kubelet[1743]: E0213 15:37:51.796199    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:52.797340 kubelet[1743]: E0213 15:37:52.797297    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:53.798244 kubelet[1743]: E0213 15:37:53.798179    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:54.769120 kubelet[1743]: E0213 15:37:54.769036    1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:54.798589 kubelet[1743]: E0213 15:37:54.798540    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:55.799837 kubelet[1743]: E0213 15:37:55.799787    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:56.682756 systemd[1]: Created slice kubepods-besteffort-podf33d8100_281e_4a39_8a16_42b6369a3adb.slice - libcontainer container kubepods-besteffort-podf33d8100_281e_4a39_8a16_42b6369a3adb.slice.
Feb 13 15:37:56.724980 kubelet[1743]: I0213 15:37:56.724879    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3fc9a3b0-7af9-4805-8a82-ac7de6d380e8\" (UniqueName: \"kubernetes.io/nfs/f33d8100-281e-4a39-8a16-42b6369a3adb-pvc-3fc9a3b0-7af9-4805-8a82-ac7de6d380e8\") pod \"test-pod-1\" (UID: \"f33d8100-281e-4a39-8a16-42b6369a3adb\") " pod="default/test-pod-1"
Feb 13 15:37:56.724980 kubelet[1743]: I0213 15:37:56.724924    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhpg2\" (UniqueName: \"kubernetes.io/projected/f33d8100-281e-4a39-8a16-42b6369a3adb-kube-api-access-bhpg2\") pod \"test-pod-1\" (UID: \"f33d8100-281e-4a39-8a16-42b6369a3adb\") " pod="default/test-pod-1"
Feb 13 15:37:56.800785 kubelet[1743]: E0213 15:37:56.800746    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:56.859099 kernel: FS-Cache: Loaded
Feb 13 15:37:56.885456 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:37:56.885563 kernel: RPC: Registered udp transport module.
Feb 13 15:37:56.885588 kernel: RPC: Registered tcp transport module.
Feb 13 15:37:56.885615 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:37:56.886525 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:37:57.070160 kernel: NFS: Registering the id_resolver key type
Feb 13 15:37:57.070341 kernel: Key type id_resolver registered
Feb 13 15:37:57.070360 kernel: Key type id_legacy registered
Feb 13 15:37:57.096937 nfsidmap[3164]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:37:57.100902 nfsidmap[3167]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:37:57.286400 containerd[1448]: time="2025-02-13T15:37:57.286345650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f33d8100-281e-4a39-8a16-42b6369a3adb,Namespace:default,Attempt:0,}"
Feb 13 15:37:57.311329 systemd-networkd[1388]: lxc3a3d2a817791: Link UP
Feb 13 15:37:57.322684 kernel: eth0: renamed from tmp00d50
Feb 13 15:37:57.328830 systemd-networkd[1388]: lxc3a3d2a817791: Gained carrier
Feb 13 15:37:57.531179 containerd[1448]: time="2025-02-13T15:37:57.531044872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:37:57.531179 containerd[1448]: time="2025-02-13T15:37:57.531154874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:37:57.531179 containerd[1448]: time="2025-02-13T15:37:57.531175514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:57.531393 containerd[1448]: time="2025-02-13T15:37:57.531271475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:37:57.548268 systemd[1]: Started cri-containerd-00d50e30d14efcb6a44400f2f1b214d228d06e05b531247f88823f81aa33bea1.scope - libcontainer container 00d50e30d14efcb6a44400f2f1b214d228d06e05b531247f88823f81aa33bea1.
Feb 13 15:37:57.558205 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Feb 13 15:37:57.573172 containerd[1448]: time="2025-02-13T15:37:57.573135734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f33d8100-281e-4a39-8a16-42b6369a3adb,Namespace:default,Attempt:0,} returns sandbox id \"00d50e30d14efcb6a44400f2f1b214d228d06e05b531247f88823f81aa33bea1\""
Feb 13 15:37:57.574443 containerd[1448]: time="2025-02-13T15:37:57.574420512Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:37:57.801308 kubelet[1743]: E0213 15:37:57.801263    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:57.891153 containerd[1448]: time="2025-02-13T15:37:57.890923246Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:37:57.891457 containerd[1448]: time="2025-02-13T15:37:57.891412613Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:37:57.894912 containerd[1448]: time="2025-02-13T15:37:57.894875341Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 320.423869ms"
Feb 13 15:37:57.894912 containerd[1448]: time="2025-02-13T15:37:57.894909061Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 15:37:57.896846 containerd[1448]: time="2025-02-13T15:37:57.896808728Z" level=info msg="CreateContainer within sandbox \"00d50e30d14efcb6a44400f2f1b214d228d06e05b531247f88823f81aa33bea1\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:37:57.907092 containerd[1448]: time="2025-02-13T15:37:57.907034069Z" level=info msg="CreateContainer within sandbox \"00d50e30d14efcb6a44400f2f1b214d228d06e05b531247f88823f81aa33bea1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1bb328de3f71cf4bcb80bc7e724998931498d2289b1b28f3aa19b16bd058d4f5\""
Feb 13 15:37:57.907750 containerd[1448]: time="2025-02-13T15:37:57.907703958Z" level=info msg="StartContainer for \"1bb328de3f71cf4bcb80bc7e724998931498d2289b1b28f3aa19b16bd058d4f5\""
Feb 13 15:37:57.941288 systemd[1]: Started cri-containerd-1bb328de3f71cf4bcb80bc7e724998931498d2289b1b28f3aa19b16bd058d4f5.scope - libcontainer container 1bb328de3f71cf4bcb80bc7e724998931498d2289b1b28f3aa19b16bd058d4f5.
Feb 13 15:37:57.963970 containerd[1448]: time="2025-02-13T15:37:57.963904295Z" level=info msg="StartContainer for \"1bb328de3f71cf4bcb80bc7e724998931498d2289b1b28f3aa19b16bd058d4f5\" returns successfully"
Feb 13 15:37:58.002216 kubelet[1743]: I0213 15:37:58.002154    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=14.680806422 podStartE2EDuration="15.002136463s" podCreationTimestamp="2025-02-13 15:37:43 +0000 UTC" firstStartedPulling="2025-02-13 15:37:57.574202869 +0000 UTC m=+43.439951118" lastFinishedPulling="2025-02-13 15:37:57.89553291 +0000 UTC m=+43.761281159" observedRunningTime="2025-02-13 15:37:58.00192486 +0000 UTC m=+43.867673109" watchObservedRunningTime="2025-02-13 15:37:58.002136463 +0000 UTC m=+43.867884752"
Feb 13 15:37:58.359312 systemd-networkd[1388]: lxc3a3d2a817791: Gained IPv6LL
Feb 13 15:37:58.801664 kubelet[1743]: E0213 15:37:58.801606    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:37:59.802382 kubelet[1743]: E0213 15:37:59.802337    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:00.803487 kubelet[1743]: E0213 15:38:00.803438    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:01.455397 systemd[1]: run-containerd-runc-k8s.io-2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc-runc.zBekIX.mount: Deactivated successfully.
Feb 13 15:38:01.482451 containerd[1448]: time="2025-02-13T15:38:01.482401142Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE        \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:38:01.490157 containerd[1448]: time="2025-02-13T15:38:01.490110991Z" level=info msg="StopContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" with timeout 2 (s)"
Feb 13 15:38:01.490374 containerd[1448]: time="2025-02-13T15:38:01.490352594Z" level=info msg="Stop container \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" with signal terminated"
Feb 13 15:38:01.499448 systemd-networkd[1388]: lxc_health: Link DOWN
Feb 13 15:38:01.499459 systemd-networkd[1388]: lxc_health: Lost carrier
Feb 13 15:38:01.529566 systemd[1]: cri-containerd-2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc.scope: Deactivated successfully.
Feb 13 15:38:01.529870 systemd[1]: cri-containerd-2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc.scope: Consumed 6.358s CPU time.
Feb 13 15:38:01.559512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc-rootfs.mount: Deactivated successfully.
Feb 13 15:38:01.568709 containerd[1448]: time="2025-02-13T15:38:01.568638383Z" level=info msg="shim disconnected" id=2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc namespace=k8s.io
Feb 13 15:38:01.568709 containerd[1448]: time="2025-02-13T15:38:01.568700664Z" level=warning msg="cleaning up after shim disconnected" id=2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc namespace=k8s.io
Feb 13 15:38:01.568709 containerd[1448]: time="2025-02-13T15:38:01.568709424Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:01.581219 containerd[1448]: time="2025-02-13T15:38:01.581174449Z" level=info msg="StopContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" returns successfully"
Feb 13 15:38:01.581962 containerd[1448]: time="2025-02-13T15:38:01.581936738Z" level=info msg="StopPodSandbox for \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\""
Feb 13 15:38:01.588726 containerd[1448]: time="2025-02-13T15:38:01.588683216Z" level=info msg="Container to stop \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:38:01.588726 containerd[1448]: time="2025-02-13T15:38:01.588721017Z" level=info msg="Container to stop \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:38:01.588808 containerd[1448]: time="2025-02-13T15:38:01.588731177Z" level=info msg="Container to stop \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:38:01.588808 containerd[1448]: time="2025-02-13T15:38:01.588741377Z" level=info msg="Container to stop \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:38:01.588808 containerd[1448]: time="2025-02-13T15:38:01.588750257Z" level=info msg="Container to stop \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:38:01.590676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9-shm.mount: Deactivated successfully.
Feb 13 15:38:01.594597 systemd[1]: cri-containerd-6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9.scope: Deactivated successfully.
Feb 13 15:38:01.609690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9-rootfs.mount: Deactivated successfully.
Feb 13 15:38:01.614325 containerd[1448]: time="2025-02-13T15:38:01.614264313Z" level=info msg="shim disconnected" id=6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9 namespace=k8s.io
Feb 13 15:38:01.614644 containerd[1448]: time="2025-02-13T15:38:01.614490556Z" level=warning msg="cleaning up after shim disconnected" id=6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9 namespace=k8s.io
Feb 13 15:38:01.614644 containerd[1448]: time="2025-02-13T15:38:01.614506276Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:01.624759 containerd[1448]: time="2025-02-13T15:38:01.624718915Z" level=info msg="TearDown network for sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" successfully"
Feb 13 15:38:01.624759 containerd[1448]: time="2025-02-13T15:38:01.624754275Z" level=info msg="StopPodSandbox for \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" returns successfully"
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.652996    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-net\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.653031    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cni-path\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.653049    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-cgroup\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.653077    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-config-path\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.653104    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb418317-5c11-4d21-8133-fe46de3492b6-clustermesh-secrets\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653032 kubelet[1743]: I0213 15:38:01.653132    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8zz\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-kube-api-access-qr8zz\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653149    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-xtables-lock\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653163    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-kernel\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653178    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-hubble-tls\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653193    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-run\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653207    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-bpf-maps\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653714 kubelet[1743]: I0213 15:38:01.653220    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-etc-cni-netd\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653847 kubelet[1743]: I0213 15:38:01.653234    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-hostproc\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653847 kubelet[1743]: I0213 15:38:01.653249    1743 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-lib-modules\") pod \"cb418317-5c11-4d21-8133-fe46de3492b6\" (UID: \"cb418317-5c11-4d21-8133-fe46de3492b6\") "
Feb 13 15:38:01.653847 kubelet[1743]: I0213 15:38:01.653125    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653847 kubelet[1743]: I0213 15:38:01.653126    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653847 kubelet[1743]: I0213 15:38:01.653148    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653954 kubelet[1743]: I0213 15:38:01.653287    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653954 kubelet[1743]: I0213 15:38:01.653333    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653954 kubelet[1743]: I0213 15:38:01.653349    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653954 kubelet[1743]: I0213 15:38:01.653574    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.653954 kubelet[1743]: I0213 15:38:01.653600    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.654056 kubelet[1743]: I0213 15:38:01.653616    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.655026 kubelet[1743]: I0213 15:38:01.654994    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:38:01.655084 kubelet[1743]: I0213 15:38:01.655047    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:38:01.656866 kubelet[1743]: I0213 15:38:01.656818    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:38:01.656945 kubelet[1743]: I0213 15:38:01.656893    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-kube-api-access-qr8zz" (OuterVolumeSpecName: "kube-api-access-qr8zz") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "kube-api-access-qr8zz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:38:01.656945 kubelet[1743]: I0213 15:38:01.656910    1743 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb418317-5c11-4d21-8133-fe46de3492b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cb418317-5c11-4d21-8133-fe46de3492b6" (UID: "cb418317-5c11-4d21-8133-fe46de3492b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753502    1743 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-cgroup\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753552    1743 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-config-path\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753566    1743 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-net\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753574    1743 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cni-path\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753581    1743 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-xtables-lock\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753589    1743 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-host-proc-sys-kernel\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753598    1743 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-hubble-tls\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753616 kubelet[1743]: I0213 15:38:01.753605    1743 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cb418317-5c11-4d21-8133-fe46de3492b6-clustermesh-secrets\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753613    1743 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qr8zz\" (UniqueName: \"kubernetes.io/projected/cb418317-5c11-4d21-8133-fe46de3492b6-kube-api-access-qr8zz\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753622    1743 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-cilium-run\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753629    1743 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-bpf-maps\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753636    1743 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-etc-cni-netd\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753643    1743 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-hostproc\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.753852 kubelet[1743]: I0213 15:38:01.753650    1743 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb418317-5c11-4d21-8133-fe46de3492b6-lib-modules\") on node \"10.0.0.136\" DevicePath \"\""
Feb 13 15:38:01.804218 kubelet[1743]: E0213 15:38:01.804172    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:02.010234 kubelet[1743]: I0213 15:38:02.010131    1743 scope.go:117] "RemoveContainer" containerID="2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc"
Feb 13 15:38:02.013746 containerd[1448]: time="2025-02-13T15:38:02.013435665Z" level=info msg="RemoveContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\""
Feb 13 15:38:02.014772 systemd[1]: Removed slice kubepods-burstable-podcb418317_5c11_4d21_8133_fe46de3492b6.slice - libcontainer container kubepods-burstable-podcb418317_5c11_4d21_8133_fe46de3492b6.slice.
Feb 13 15:38:02.014879 systemd[1]: kubepods-burstable-podcb418317_5c11_4d21_8133_fe46de3492b6.slice: Consumed 6.490s CPU time.
Feb 13 15:38:02.016816 containerd[1448]: time="2025-02-13T15:38:02.016783822Z" level=info msg="RemoveContainer for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" returns successfully"
Feb 13 15:38:02.017017 kubelet[1743]: I0213 15:38:02.016994    1743 scope.go:117] "RemoveContainer" containerID="cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0"
Feb 13 15:38:02.018500 containerd[1448]: time="2025-02-13T15:38:02.018476641Z" level=info msg="RemoveContainer for \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\""
Feb 13 15:38:02.020757 containerd[1448]: time="2025-02-13T15:38:02.020720746Z" level=info msg="RemoveContainer for \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\" returns successfully"
Feb 13 15:38:02.020978 kubelet[1743]: I0213 15:38:02.020892    1743 scope.go:117] "RemoveContainer" containerID="bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87"
Feb 13 15:38:02.022048 containerd[1448]: time="2025-02-13T15:38:02.022011320Z" level=info msg="RemoveContainer for \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\""
Feb 13 15:38:02.024810 containerd[1448]: time="2025-02-13T15:38:02.024784751Z" level=info msg="RemoveContainer for \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\" returns successfully"
Feb 13 15:38:02.025056 kubelet[1743]: I0213 15:38:02.024944    1743 scope.go:117] "RemoveContainer" containerID="35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc"
Feb 13 15:38:02.025925 containerd[1448]: time="2025-02-13T15:38:02.025900564Z" level=info msg="RemoveContainer for \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\""
Feb 13 15:38:02.028180 containerd[1448]: time="2025-02-13T15:38:02.028149669Z" level=info msg="RemoveContainer for \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\" returns successfully"
Feb 13 15:38:02.028340 kubelet[1743]: I0213 15:38:02.028314    1743 scope.go:117] "RemoveContainer" containerID="4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac"
Feb 13 15:38:02.029434 containerd[1448]: time="2025-02-13T15:38:02.029411723Z" level=info msg="RemoveContainer for \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\""
Feb 13 15:38:02.031479 containerd[1448]: time="2025-02-13T15:38:02.031454586Z" level=info msg="RemoveContainer for \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\" returns successfully"
Feb 13 15:38:02.031712 kubelet[1743]: I0213 15:38:02.031631    1743 scope.go:117] "RemoveContainer" containerID="2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc"
Feb 13 15:38:02.031842 containerd[1448]: time="2025-02-13T15:38:02.031808630Z" level=error msg="ContainerStatus for \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\": not found"
Feb 13 15:38:02.031969 kubelet[1743]: E0213 15:38:02.031942    1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\": not found" containerID="2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc"
Feb 13 15:38:02.032054 kubelet[1743]: I0213 15:38:02.031973    1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc"} err="failed to get container status \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\": rpc error: code = NotFound desc = an error occurred when try to find container \"2ef955560411fcc336e414171e1f15646794326080f6c34a7a9da51b99259abc\": not found"
Feb 13 15:38:02.032098 kubelet[1743]: I0213 15:38:02.032056    1743 scope.go:117] "RemoveContainer" containerID="cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0"
Feb 13 15:38:02.032338 containerd[1448]: time="2025-02-13T15:38:02.032314515Z" level=error msg="ContainerStatus for \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\": not found"
Feb 13 15:38:02.032471 kubelet[1743]: E0213 15:38:02.032450    1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\": not found" containerID="cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0"
Feb 13 15:38:02.032519 kubelet[1743]: I0213 15:38:02.032500    1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0"} err="failed to get container status \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc3416875e3441f4e11c27f2f674e9977ab8be9326a98e624e7e44a0aebb01c0\": not found"
Feb 13 15:38:02.032549 kubelet[1743]: I0213 15:38:02.032521    1743 scope.go:117] "RemoveContainer" containerID="bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87"
Feb 13 15:38:02.032723 containerd[1448]: time="2025-02-13T15:38:02.032692919Z" level=error msg="ContainerStatus for \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\": not found"
Feb 13 15:38:02.032940 kubelet[1743]: E0213 15:38:02.032819    1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\": not found" containerID="bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87"
Feb 13 15:38:02.032940 kubelet[1743]: I0213 15:38:02.032852    1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87"} err="failed to get container status \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\": rpc error: code = NotFound desc = an error occurred when try to find container \"bcd5d3fb297a6d6ddef3a98ec0bd73c65768edc874ae17a6fc871b7710faeb87\": not found"
Feb 13 15:38:02.032940 kubelet[1743]: I0213 15:38:02.032867    1743 scope.go:117] "RemoveContainer" containerID="35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc"
Feb 13 15:38:02.033092 containerd[1448]: time="2025-02-13T15:38:02.033048403Z" level=error msg="ContainerStatus for \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\": not found"
Feb 13 15:38:02.033234 kubelet[1743]: E0213 15:38:02.033205    1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\": not found" containerID="35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc"
Feb 13 15:38:02.033269 kubelet[1743]: I0213 15:38:02.033233    1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc"} err="failed to get container status \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"35fbc5f8fe9c298d34b6108d8ced83d6e73acda7063e2aa3fdfa91d7189547dc\": not found"
Feb 13 15:38:02.033269 kubelet[1743]: I0213 15:38:02.033251    1743 scope.go:117] "RemoveContainer" containerID="4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac"
Feb 13 15:38:02.033433 containerd[1448]: time="2025-02-13T15:38:02.033405127Z" level=error msg="ContainerStatus for \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\": not found"
Feb 13 15:38:02.033547 kubelet[1743]: E0213 15:38:02.033518    1743 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\": not found" containerID="4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac"
Feb 13 15:38:02.033579 kubelet[1743]: I0213 15:38:02.033545    1743 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac"} err="failed to get container status \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b28a049d0f7593bd42da9bb41bd0793256114ab9161c023b98e3d61ac618cac\": not found"
Feb 13 15:38:02.452362 systemd[1]: var-lib-kubelet-pods-cb418317\x2d5c11\x2d4d21\x2d8133\x2dfe46de3492b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqr8zz.mount: Deactivated successfully.
Feb 13 15:38:02.452474 systemd[1]: var-lib-kubelet-pods-cb418317\x2d5c11\x2d4d21\x2d8133\x2dfe46de3492b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 13 15:38:02.452527 systemd[1]: var-lib-kubelet-pods-cb418317\x2d5c11\x2d4d21\x2d8133\x2dfe46de3492b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 13 15:38:02.805326 kubelet[1743]: E0213 15:38:02.805289    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:02.903694 kubelet[1743]: I0213 15:38:02.903653    1743 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" path="/var/lib/kubelet/pods/cb418317-5c11-4d21-8133-fe46de3492b6/volumes"
Feb 13 15:38:03.806403 kubelet[1743]: E0213 15:38:03.806367    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:04.296849 kubelet[1743]: E0213 15:38:04.296806    1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="mount-bpf-fs"
Feb 13 15:38:04.296849 kubelet[1743]: E0213 15:38:04.296839    1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="cilium-agent"
Feb 13 15:38:04.296849 kubelet[1743]: E0213 15:38:04.296846    1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="mount-cgroup"
Feb 13 15:38:04.296849 kubelet[1743]: E0213 15:38:04.296851    1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="apply-sysctl-overwrites"
Feb 13 15:38:04.296849 kubelet[1743]: E0213 15:38:04.296857    1743 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="clean-cilium-state"
Feb 13 15:38:04.297099 kubelet[1743]: I0213 15:38:04.296876    1743 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb418317-5c11-4d21-8133-fe46de3492b6" containerName="cilium-agent"
Feb 13 15:38:04.301924 systemd[1]: Created slice kubepods-besteffort-podafec3ecd_0050_41fb_8493_bee9c0c4d799.slice - libcontainer container kubepods-besteffort-podafec3ecd_0050_41fb_8493_bee9c0c4d799.slice.
Feb 13 15:38:04.317520 systemd[1]: Created slice kubepods-burstable-poda092c563_4bf9_4a24_94f3_97e9b9b6c0e5.slice - libcontainer container kubepods-burstable-poda092c563_4bf9_4a24_94f3_97e9b9b6c0e5.slice.
Feb 13 15:38:04.368947 kubelet[1743]: I0213 15:38:04.368894    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-xtables-lock\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.368947 kubelet[1743]: I0213 15:38:04.368939    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-cilium-run\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.368947 kubelet[1743]: I0213 15:38:04.368959    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-cilium-cgroup\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.368947 kubelet[1743]: I0213 15:38:04.368974    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbhw\" (UniqueName: \"kubernetes.io/projected/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-kube-api-access-6wbhw\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369208 kubelet[1743]: I0213 15:38:04.368991    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwh8\" (UniqueName: \"kubernetes.io/projected/afec3ecd-0050-41fb-8493-bee9c0c4d799-kube-api-access-9pwh8\") pod \"cilium-operator-5d85765b45-lc7hn\" (UID: \"afec3ecd-0050-41fb-8493-bee9c0c4d799\") " pod="kube-system/cilium-operator-5d85765b45-lc7hn"
Feb 13 15:38:04.369208 kubelet[1743]: I0213 15:38:04.369005    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-cni-path\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369208 kubelet[1743]: I0213 15:38:04.369024    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-clustermesh-secrets\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369208 kubelet[1743]: I0213 15:38:04.369039    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-etc-cni-netd\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369208 kubelet[1743]: I0213 15:38:04.369055    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-lib-modules\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369311 kubelet[1743]: I0213 15:38:04.369086    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-bpf-maps\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369311 kubelet[1743]: I0213 15:38:04.369103    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-hostproc\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369311 kubelet[1743]: I0213 15:38:04.369116    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-host-proc-sys-net\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369311 kubelet[1743]: I0213 15:38:04.369130    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-host-proc-sys-kernel\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369311 kubelet[1743]: I0213 15:38:04.369151    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-hubble-tls\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369413 kubelet[1743]: I0213 15:38:04.369168    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/afec3ecd-0050-41fb-8493-bee9c0c4d799-cilium-config-path\") pod \"cilium-operator-5d85765b45-lc7hn\" (UID: \"afec3ecd-0050-41fb-8493-bee9c0c4d799\") " pod="kube-system/cilium-operator-5d85765b45-lc7hn"
Feb 13 15:38:04.369413 kubelet[1743]: I0213 15:38:04.369182    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-cilium-config-path\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.369413 kubelet[1743]: I0213 15:38:04.369198    1743 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a092c563-4bf9-4a24-94f3-97e9b9b6c0e5-cilium-ipsec-secrets\") pod \"cilium-bwcnr\" (UID: \"a092c563-4bf9-4a24-94f3-97e9b9b6c0e5\") " pod="kube-system/cilium-bwcnr"
Feb 13 15:38:04.604724 kubelet[1743]: E0213 15:38:04.604569    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:04.605204 containerd[1448]: time="2025-02-13T15:38:04.605167889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lc7hn,Uid:afec3ecd-0050-41fb-8493-bee9c0c4d799,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:04.621807 containerd[1448]: time="2025-02-13T15:38:04.621445057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:38:04.621807 containerd[1448]: time="2025-02-13T15:38:04.621778340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:38:04.621807 containerd[1448]: time="2025-02-13T15:38:04.621791220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:04.622110 containerd[1448]: time="2025-02-13T15:38:04.621866061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:04.632519 kubelet[1743]: E0213 15:38:04.632278    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:04.632801 containerd[1448]: time="2025-02-13T15:38:04.632759853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwcnr,Uid:a092c563-4bf9-4a24-94f3-97e9b9b6c0e5,Namespace:kube-system,Attempt:0,}"
Feb 13 15:38:04.637261 systemd[1]: Started cri-containerd-832fddf632f70461229bd0272efa3e532746f5777fd38d1964bc7d0d602849b9.scope - libcontainer container 832fddf632f70461229bd0272efa3e532746f5777fd38d1964bc7d0d602849b9.
Feb 13 15:38:04.649501 containerd[1448]: time="2025-02-13T15:38:04.649413265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:38:04.649501 containerd[1448]: time="2025-02-13T15:38:04.649474865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:38:04.649501 containerd[1448]: time="2025-02-13T15:38:04.649490185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:04.649782 containerd[1448]: time="2025-02-13T15:38:04.649557706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:38:04.671326 systemd[1]: Started cri-containerd-e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5.scope - libcontainer container e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5.
Feb 13 15:38:04.672067 containerd[1448]: time="2025-02-13T15:38:04.672009497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-lc7hn,Uid:afec3ecd-0050-41fb-8493-bee9c0c4d799,Namespace:kube-system,Attempt:0,} returns sandbox id \"832fddf632f70461229bd0272efa3e532746f5777fd38d1964bc7d0d602849b9\""
Feb 13 15:38:04.672780 kubelet[1743]: E0213 15:38:04.672592    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:04.673477 containerd[1448]: time="2025-02-13T15:38:04.673448432Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 13 15:38:04.690258 containerd[1448]: time="2025-02-13T15:38:04.690222645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bwcnr,Uid:a092c563-4bf9-4a24-94f3-97e9b9b6c0e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\""
Feb 13 15:38:04.691091 kubelet[1743]: E0213 15:38:04.691060    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:04.692780 containerd[1448]: time="2025-02-13T15:38:04.692745751Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 13 15:38:04.704283 containerd[1448]: time="2025-02-13T15:38:04.704198349Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13\""
Feb 13 15:38:04.704757 containerd[1448]: time="2025-02-13T15:38:04.704730914Z" level=info msg="StartContainer for \"b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13\""
Feb 13 15:38:04.728246 systemd[1]: Started cri-containerd-b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13.scope - libcontainer container b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13.
Feb 13 15:38:04.748172 containerd[1448]: time="2025-02-13T15:38:04.748125081Z" level=info msg="StartContainer for \"b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13\" returns successfully"
Feb 13 15:38:04.806158 systemd[1]: cri-containerd-b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13.scope: Deactivated successfully.
Feb 13 15:38:04.806790 kubelet[1743]: E0213 15:38:04.806758    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:04.831120 containerd[1448]: time="2025-02-13T15:38:04.831024575Z" level=info msg="shim disconnected" id=b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13 namespace=k8s.io
Feb 13 15:38:04.831120 containerd[1448]: time="2025-02-13T15:38:04.831117416Z" level=warning msg="cleaning up after shim disconnected" id=b8c18c6da3e4504c0e46169c573ad767a9fcd3156fc7a550bb265e79d0cb9c13 namespace=k8s.io
Feb 13 15:38:04.831120 containerd[1448]: time="2025-02-13T15:38:04.831127416Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:04.916470 kubelet[1743]: E0213 15:38:04.916257    1743 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 13 15:38:05.009971 kubelet[1743]: E0213 15:38:05.009904    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:05.011638 containerd[1448]: time="2025-02-13T15:38:05.011604631Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 13 15:38:05.019826 containerd[1448]: time="2025-02-13T15:38:05.019743872Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f\""
Feb 13 15:38:05.020527 containerd[1448]: time="2025-02-13T15:38:05.020228077Z" level=info msg="StartContainer for \"8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f\""
Feb 13 15:38:05.046241 systemd[1]: Started cri-containerd-8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f.scope - libcontainer container 8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f.
Feb 13 15:38:05.067852 containerd[1448]: time="2025-02-13T15:38:05.067816349Z" level=info msg="StartContainer for \"8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f\" returns successfully"
Feb 13 15:38:05.088871 systemd[1]: cri-containerd-8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f.scope: Deactivated successfully.
Feb 13 15:38:05.108391 containerd[1448]: time="2025-02-13T15:38:05.108330550Z" level=info msg="shim disconnected" id=8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f namespace=k8s.io
Feb 13 15:38:05.108391 containerd[1448]: time="2025-02-13T15:38:05.108384191Z" level=warning msg="cleaning up after shim disconnected" id=8f90ff65e4cbf9f642c8840e51c5ab99d8877192fc29ad90cab71b509170ec8f namespace=k8s.io
Feb 13 15:38:05.108391 containerd[1448]: time="2025-02-13T15:38:05.108392671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:05.570161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330955724.mount: Deactivated successfully.
Feb 13 15:38:05.778788 kubelet[1743]: I0213 15:38:05.778727    1743 setters.go:600] "Node became not ready" node="10.0.0.136" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-02-13T15:38:05Z","lastTransitionTime":"2025-02-13T15:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 13 15:38:05.807788 kubelet[1743]: E0213 15:38:05.807735    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:06.016259 kubelet[1743]: E0213 15:38:06.016226    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:06.018290 containerd[1448]: time="2025-02-13T15:38:06.018159163Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 13 15:38:06.030049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280024637.mount: Deactivated successfully.
Feb 13 15:38:06.034640 containerd[1448]: time="2025-02-13T15:38:06.034599040Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303\""
Feb 13 15:38:06.035223 containerd[1448]: time="2025-02-13T15:38:06.035197326Z" level=info msg="StartContainer for \"9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303\""
Feb 13 15:38:06.061250 systemd[1]: Started cri-containerd-9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303.scope - libcontainer container 9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303.
Feb 13 15:38:06.088628 containerd[1448]: time="2025-02-13T15:38:06.088580556Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:38:06.089409 containerd[1448]: time="2025-02-13T15:38:06.089369644Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Feb 13 15:38:06.090011 systemd[1]: cri-containerd-9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303.scope: Deactivated successfully.
Feb 13 15:38:06.092031 containerd[1448]: time="2025-02-13T15:38:06.091775146Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Feb 13 15:38:06.092031 containerd[1448]: time="2025-02-13T15:38:06.091928108Z" level=info msg="StartContainer for \"9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303\" returns successfully"
Feb 13 15:38:06.093659 containerd[1448]: time="2025-02-13T15:38:06.093630124Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.420053251s"
Feb 13 15:38:06.093824 containerd[1448]: time="2025-02-13T15:38:06.093746525Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 13 15:38:06.095836 containerd[1448]: time="2025-02-13T15:38:06.095660184Z" level=info msg="CreateContainer within sandbox \"832fddf632f70461229bd0272efa3e532746f5777fd38d1964bc7d0d602849b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 13 15:38:06.144255 containerd[1448]: time="2025-02-13T15:38:06.144189847Z" level=info msg="shim disconnected" id=9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303 namespace=k8s.io
Feb 13 15:38:06.144255 containerd[1448]: time="2025-02-13T15:38:06.144249168Z" level=warning msg="cleaning up after shim disconnected" id=9baf5be9596195401d44b2079ad696f2902e8c555a7b677be6080a9bd6840303 namespace=k8s.io
Feb 13 15:38:06.144255 containerd[1448]: time="2025-02-13T15:38:06.144257048Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:06.145468 containerd[1448]: time="2025-02-13T15:38:06.145430339Z" level=info msg="CreateContainer within sandbox \"832fddf632f70461229bd0272efa3e532746f5777fd38d1964bc7d0d602849b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"aaf4714a3ebcfc18c5462d8b1a239b7d8780076e100a57208a24f6b1347dcf46\""
Feb 13 15:38:06.146486 containerd[1448]: time="2025-02-13T15:38:06.145940424Z" level=info msg="StartContainer for \"aaf4714a3ebcfc18c5462d8b1a239b7d8780076e100a57208a24f6b1347dcf46\""
Feb 13 15:38:06.178251 systemd[1]: Started cri-containerd-aaf4714a3ebcfc18c5462d8b1a239b7d8780076e100a57208a24f6b1347dcf46.scope - libcontainer container aaf4714a3ebcfc18c5462d8b1a239b7d8780076e100a57208a24f6b1347dcf46.
Feb 13 15:38:06.196750 containerd[1448]: time="2025-02-13T15:38:06.196637908Z" level=info msg="StartContainer for \"aaf4714a3ebcfc18c5462d8b1a239b7d8780076e100a57208a24f6b1347dcf46\" returns successfully"
Feb 13 15:38:06.808700 kubelet[1743]: E0213 15:38:06.808649    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:07.019366 kubelet[1743]: E0213 15:38:07.019288    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:07.020166 kubelet[1743]: E0213 15:38:07.020141    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:07.021585 containerd[1448]: time="2025-02-13T15:38:07.021398624Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 13 15:38:07.034000 containerd[1448]: time="2025-02-13T15:38:07.033905339Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8\""
Feb 13 15:38:07.034897 containerd[1448]: time="2025-02-13T15:38:07.034393263Z" level=info msg="StartContainer for \"e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8\""
Feb 13 15:38:07.065267 systemd[1]: Started cri-containerd-e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8.scope - libcontainer container e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8.
Feb 13 15:38:07.085281 systemd[1]: cri-containerd-e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8.scope: Deactivated successfully.
Feb 13 15:38:07.089308 containerd[1448]: time="2025-02-13T15:38:07.089266489Z" level=info msg="StartContainer for \"e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8\" returns successfully"
Feb 13 15:38:07.110680 containerd[1448]: time="2025-02-13T15:38:07.110616725Z" level=info msg="shim disconnected" id=e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8 namespace=k8s.io
Feb 13 15:38:07.110680 containerd[1448]: time="2025-02-13T15:38:07.110671606Z" level=warning msg="cleaning up after shim disconnected" id=e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8 namespace=k8s.io
Feb 13 15:38:07.110680 containerd[1448]: time="2025-02-13T15:38:07.110680526Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:38:07.474547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4f7e32f209d1a7545fe7c74221e9f1f20aef4d5f0f17952dcd12a3556cdf9e8-rootfs.mount: Deactivated successfully.
Feb 13 15:38:07.809348 kubelet[1743]: E0213 15:38:07.809222    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:08.025391 kubelet[1743]: E0213 15:38:08.025352    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:08.026059 kubelet[1743]: E0213 15:38:08.025889    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:08.028936 containerd[1448]: time="2025-02-13T15:38:08.028890617Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 13 15:38:08.044528 kubelet[1743]: I0213 15:38:08.044477    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-lc7hn" podStartSLOduration=2.623043852 podStartE2EDuration="4.044427515s" podCreationTimestamp="2025-02-13 15:38:04 +0000 UTC" firstStartedPulling="2025-02-13 15:38:04.67321707 +0000 UTC m=+50.538965319" lastFinishedPulling="2025-02-13 15:38:06.094600733 +0000 UTC m=+51.960348982" observedRunningTime="2025-02-13 15:38:07.041408528 +0000 UTC m=+52.907156777" watchObservedRunningTime="2025-02-13 15:38:08.044427515 +0000 UTC m=+53.910175764"
Feb 13 15:38:08.044925 containerd[1448]: time="2025-02-13T15:38:08.044887439Z" level=info msg="CreateContainer within sandbox \"e4b1c14025dc68e87454a9b0372c6667c5f3c80f86a663f4a743e2888e8a7cb5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"45ced12510a07e4f9d1d3baf230ad5cc50ce14b33e2210fd58eeaad572df7fe1\""
Feb 13 15:38:08.045589 containerd[1448]: time="2025-02-13T15:38:08.045557365Z" level=info msg="StartContainer for \"45ced12510a07e4f9d1d3baf230ad5cc50ce14b33e2210fd58eeaad572df7fe1\""
Feb 13 15:38:08.076240 systemd[1]: Started cri-containerd-45ced12510a07e4f9d1d3baf230ad5cc50ce14b33e2210fd58eeaad572df7fe1.scope - libcontainer container 45ced12510a07e4f9d1d3baf230ad5cc50ce14b33e2210fd58eeaad572df7fe1.
Feb 13 15:38:08.110301 containerd[1448]: time="2025-02-13T15:38:08.110192580Z" level=info msg="StartContainer for \"45ced12510a07e4f9d1d3baf230ad5cc50ce14b33e2210fd58eeaad572df7fe1\" returns successfully"
Feb 13 15:38:08.373155 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Feb 13 15:38:08.809702 kubelet[1743]: E0213 15:38:08.809656    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:09.029559 kubelet[1743]: E0213 15:38:09.029521    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:09.044346 kubelet[1743]: I0213 15:38:09.044295    1743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bwcnr" podStartSLOduration=5.044280837 podStartE2EDuration="5.044280837s" podCreationTimestamp="2025-02-13 15:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:38:09.043344709 +0000 UTC m=+54.909092958" watchObservedRunningTime="2025-02-13 15:38:09.044280837 +0000 UTC m=+54.910029086"
Feb 13 15:38:09.810271 kubelet[1743]: E0213 15:38:09.810218    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:10.633317 kubelet[1743]: E0213 15:38:10.633270    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:10.811247 kubelet[1743]: E0213 15:38:10.811177    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:11.183507 systemd-networkd[1388]: lxc_health: Link UP
Feb 13 15:38:11.192293 systemd-networkd[1388]: lxc_health: Gained carrier
Feb 13 15:38:11.811934 kubelet[1743]: E0213 15:38:11.811882    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:12.634565 kubelet[1743]: E0213 15:38:12.634430    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:12.812528 kubelet[1743]: E0213 15:38:12.812480    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:12.823274 systemd-networkd[1388]: lxc_health: Gained IPv6LL
Feb 13 15:38:13.036471 kubelet[1743]: E0213 15:38:13.036371    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:13.813402 kubelet[1743]: E0213 15:38:13.813319    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:14.038685 kubelet[1743]: E0213 15:38:14.038505    1743 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 15:38:14.768894 kubelet[1743]: E0213 15:38:14.768850    1743 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:14.799476 containerd[1448]: time="2025-02-13T15:38:14.799443108Z" level=info msg="StopPodSandbox for \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\""
Feb 13 15:38:14.799793 containerd[1448]: time="2025-02-13T15:38:14.799528429Z" level=info msg="TearDown network for sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" successfully"
Feb 13 15:38:14.799793 containerd[1448]: time="2025-02-13T15:38:14.799540269Z" level=info msg="StopPodSandbox for \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" returns successfully"
Feb 13 15:38:14.799903 containerd[1448]: time="2025-02-13T15:38:14.799875671Z" level=info msg="RemovePodSandbox for \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\""
Feb 13 15:38:14.799958 containerd[1448]: time="2025-02-13T15:38:14.799903431Z" level=info msg="Forcibly stopping sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\""
Feb 13 15:38:14.799958 containerd[1448]: time="2025-02-13T15:38:14.799946392Z" level=info msg="TearDown network for sandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" successfully"
Feb 13 15:38:14.803539 containerd[1448]: time="2025-02-13T15:38:14.803510978Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:38:14.803635 containerd[1448]: time="2025-02-13T15:38:14.803563258Z" level=info msg="RemovePodSandbox \"6734dbbeb017c4600cc6d4dca71126a55df32046dead1fceb579902ea1083db9\" returns successfully"
Feb 13 15:38:14.814012 kubelet[1743]: E0213 15:38:14.813983    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:15.814997 kubelet[1743]: E0213 15:38:15.814954    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:16.815727 kubelet[1743]: E0213 15:38:16.815689    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:17.815972 kubelet[1743]: E0213 15:38:17.815923    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:38:18.816534 kubelet[1743]: E0213 15:38:18.816490    1743 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"